CN115063614A - Image matching method and device and electronic equipment - Google Patents

Image matching method and device and electronic equipment

Info

Publication number
CN115063614A
CN115063614A
Authority
CN
China
Prior art keywords
image
edge image
determining
saliency
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210597229.2A
Other languages
Chinese (zh)
Inventor
汪二虎
李飞
赵兵
欧倩
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LCFC Hefei Electronics Technology Co Ltd
Original Assignee
LCFC Hefei Electronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LCFC Hefei Electronics Technology Co Ltd filed Critical LCFC Hefei Electronics Technology Co Ltd
Priority to CN202210597229.2A
Publication of CN115063614A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image matching method and apparatus, and an electronic device. The method includes: determining, based on a first edge image on a template image, a second candidate edge image on the image to be detected corresponding to the template image; obtaining a binarized map corresponding to the second candidate edge image and a saliency binary map corresponding to the second candidate edge image; and determining a target edge image corresponding to the first edge image based on the binarized map and the saliency binary map. The image matching method provided by the application improves the matching accuracy of edge images.

Description

Image matching method and device and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image matching method and apparatus, and an electronic device.
Background
During packing-box inspection, printed icons at edge positions are disturbed by edge information, which causes a large number of false detections, lowers production-line efficiency, and prevents the line from flowing normally. Improving the detection accuracy of edge icons is therefore key to guaranteeing the laminating quality of printed cartons and keeping the production line flowing normally.
Disclosure of Invention
The embodiment of the application provides an image matching method, an image matching device and electronic equipment, and improves the detection accuracy of edge images.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides an image matching method, including:
determining a second candidate edge image on the image to be detected corresponding to the template image based on the first edge image on the template image;
acquiring a binary image corresponding to the second candidate edge image and a saliency binary image corresponding to the second candidate edge image;
and determining a target edge image corresponding to the first edge image based on the binary image and the saliency binary image.
In the foregoing solution, the determining, based on a first edge image on a template image, a second candidate edge image on an image to be detected corresponding to the template image includes:
determining a rectangular frame corresponding to the first edge image;
expanding the rectangular frame to obtain a first region of interest corresponding to the first edge image;
and determining a second candidate edge image based on the first region of interest and the image to be detected.
In the foregoing solution, the determining a second candidate edge image based on the first region of interest and the image to be detected includes:
determining a candidate region corresponding to the first region of interest in the image to be detected;
segmenting the candidate region from the image to be detected;
and determining the segmented candidate region as the second candidate edge image.
In the foregoing solution, the obtaining a binary image corresponding to the second candidate edge image and a saliency binary image corresponding to the second candidate edge image includes:
and determining a binary image corresponding to the second candidate edge image according to the color information of the second candidate edge image.
In the foregoing solution, the obtaining a binary image corresponding to the second candidate edge image and a saliency binary image corresponding to the second candidate edge image includes:
determining a saliency map corresponding to the second candidate edge image according to the second candidate edge image;
counting a significance histogram based on the significance of the pixel points in the significance map;
determining a saliency threshold based on the saliency histogram;
determining the saliency binary map based on the saliency threshold and the saliency map.
In the foregoing solution, the determining a target edge image corresponding to the first edge image based on the binarized map and the saliency binary map includes:
performing AND operation on the binary image and the saliency binary image to obtain a binary image of a target edge image;
and determining the target edge image based on the target position information of the binary image of the target edge image.
In the foregoing solution, the determining a target edge image on the image to be detected based on target position information of a binarized map of the target edge image includes:
acquiring first position information of the second candidate edge image on the image to be detected;
acquiring second position information of the target edge image on the second candidate edge image;
determining target position information of the target edge image on the image to be detected based on the first position information and the second position information;
and determining a target edge image on the image to be detected based on the target position information.
In a second aspect, an embodiment of the present application provides an image matching apparatus, including:
and the candidate edge image determining module is used for determining a second candidate edge image on the image to be detected corresponding to the template image based on the first edge image on the template image.
A binarization image and saliency binary image determining module, configured to obtain a binarization image corresponding to the second candidate edge image and a saliency binary image corresponding to the second candidate edge image;
and the target edge image determining module is used for determining a target edge image corresponding to the first edge image based on the binary image and the saliency binary image.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the image matching method provided by the embodiment of the application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, where the storage medium includes a set of computer-executable instructions, and when the instructions are executed, the storage medium is configured to perform the image matching method provided by embodiments of the present application.
The image matching method provided by the embodiment of the application determines a second candidate edge image on the image to be detected corresponding to the template image based on the first edge image on the template image; acquiring a binary image corresponding to the second candidate edge image and a saliency binary image corresponding to the second candidate edge image; and determining a target edge image corresponding to the first edge image based on the binary image and the saliency binary image. According to the image matching method, the accurate position information of the target edge image on the image to be detected is determined, and the detection accuracy rate of the edge icon can be improved.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be considered limiting of the present application. Wherein:
FIG. 1 is a schematic diagram of an alternative processing flow of an image matching method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an edge icon false-detection image provided by an embodiment of the present application;
FIG. 3 is a saliency map of an edge icon image provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a statistical histogram of saliency of an edge icon image according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a print matching effect of an image matching method according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a workflow of an image matching system provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an alternative apparatus structure of an image matching apparatus provided in an embodiment of the present application;
fig. 8 is a block diagram of an electronic device of an image matching method according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the attached drawings, the described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, references to the terms "first," "second," and the like, are intended only to distinguish similar objects and not to imply a particular order to the objects, it being understood that "first," "second," and the like may be interchanged under appropriate circumstances or a sequential order, such that the embodiments of the application described herein may be practiced in other than those illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Referring to fig. 1, fig. 1 is a schematic view of an alternative processing flow of the image matching method provided in the embodiment of the present application, and the following description will be provided with reference to steps S101 to S103 shown in fig. 1 and with reference to fig. 2 to fig. 5.
Step S101, determining a second candidate edge image on the image to be measured corresponding to the template image based on the first edge image on the template image.
In some embodiments, the template image may be a design document in PDF (Portable Document Format); if the image to be detected is captured from a printed carton, the template image can be understood as the design drawing that corresponds one-to-one with the image to be detected on the printed carton.
In some application scenarios, edge interference information usually arises at the edge positions of the packing box's edge icons, and this interference disturbs the detection of edge icons by production inspection equipment. Under its influence, if the inspection equipment marks a packing box whose edge icons are neither missing nor abnormal as unqualified, a false detection occurs. If false detections occur repeatedly, the production line cannot proceed normally and efficiency suffers. The edge interference information may be a darkened shaded portion of an edge icon at the edge position, or a darkened border produced by uneven exposure of the edge icon at a crease along the edge. A false detection of an edge icon is shown in fig. 2, a schematic diagram of an edge-icon false-detection image. In fig. 2, the icon at the leftmost edge of the image to be detected is missed: the inspection equipment fails to detect it, so no rectangular frame is drawn around it. The reason is that the icon lies at the edge of the image to be detected, where the gray edge information interferes with it; when the production equipment fails to detect this edge icon, the packing box is judged to have a missing edge icon and is flagged as unqualified. Accurately detecting printed icons at the edge positions of the packing box is therefore a key step in guaranteeing qualified laminating quality of printed cartons and keeping the production line flowing normally.
In some embodiments, the edge icon in the template image is identified based on whether the icon's position in the template image is close to the left or right edge; the position of the edge icon is delimited by a rectangular frame, and the Region of Interest (ROI) of the edge icon on the template image is determined from that rectangular frame.
As an example, the edge icon may be determined from the icon's position in the template image: when the left coordinate x of the icon is 0, the icon lies on the left edge of the template image. In the following description, an edge icon on the template image is referred to as an edge image.
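As an illustrative sketch of this test (the function name and the right-edge check are assumptions that mirror the stated x == 0 criterion for the left edge):

    def is_edge_icon(box, template_width):
        # box = (x, y, w, h) in template coordinates. An icon counts as an
        # edge icon when its rectangle touches the left or right image edge.
        x, _, w, _ = box
        return x == 0 or x + w >= template_width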
In some embodiments, the specific implementation of determining the first region of interest of the first edge image may include: extending the rectangular frame corresponding to the position of the first edge image outward by a preset number of pixels on all sides, which yields the first region of interest of the first edge image. The first edge image may be any edge image on the template image, and the preset number can be set flexibly for the actual application scenario, for example 200 pixels. Since the images on the template image and on the image to be detected are in one-to-one correspondence, the candidate region in the image to be detected corresponding to the first region of interest, i.e. the region of the second candidate edge image, can be determined from the first region of interest.
As an example, suppose the position information of the first region of interest of the first edge image is (x, y, w, h), where x and y are the coordinates of the region's upper-left corner on the template image along the x-axis and y-axis, w is the region's width, and h is its height. Since the images on the template image correspond one-to-one with those on the image to be detected, the candidate region corresponding to the first region of interest has the same position information (x, y, w, h) on the image to be detected; this is recorded as the first position information, and the second candidate edge image is cropped from the image to be detected based on it.
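A minimal sketch of these two steps in Python, assuming images are NumPy arrays as loaded by OpenCV; the function and variable names are illustrative, and the clamping to the image bounds is an added safeguard not spelled out in the text:

    def candidate_region(image_to_detect, template_box, margin=200):
        # template_box = (x, y, w, h): rectangular frame of the first edge
        # image on the template; margin follows the 200-pixel example above.
        x, y, w, h = template_box
        img_h, img_w = image_to_detect.shape[:2]
        # Extend the rectangle outward on all four sides, clamped to the image.
        x0, y0 = max(x - margin, 0), max(y - margin, 0)
        x1, y1 = min(x + w + margin, img_w), min(y + h + margin, img_h)
        first_position = (x0, y0, x1 - x0, y1 - y0)       # (x, y, w, h)
        second_candidate = image_to_detect[y0:y1, x0:x1]  # cropped candidate
        return first_position, second_candidate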
And step S102, acquiring a binary image corresponding to the second candidate edge image and a saliency binary image corresponding to the second candidate edge image.
In some embodiments, after the second candidate edge image is obtained in the step above, the HSV (Hue, Saturation, Value) color information of the corresponding first edge image in the template image is applied to the second candidate edge image. In the template image the color information can be extracted directly, and its threshold values are known. A binarized map corresponding to the second candidate edge image is obtained based on this color information.
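A minimal sketch of the HSV binarization, assuming the per-channel bounds were read off the template beforehand; the bound values shown in the comment are placeholders, not values from the patent:

    import cv2
    import numpy as np

    def hsv_binarize(candidate_bgr, lower_hsv, upper_hsv):
        # lower_hsv / upper_hsv: per-channel HSV thresholds known from the
        # template image, e.g. np.array([20, 80, 80]) -- placeholder values.
        hsv = cv2.cvtColor(candidate_bgr, cv2.COLOR_BGR2HSV)
        # Pixels inside the colour range become 255 (white), all others 0.
        return cv2.inRange(hsv, lower_hsv, upper_hsv)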
For the second candidate edge image, the fine-grained method in the OpenCV library (Open Source Computer Vision Library) may be used to obtain the corresponding saliency map, as shown in fig. 3, a saliency map of an edge icon image provided in an embodiment of the present application. As can be seen from the saliency map, the edge interference information is less salient than the edge icon information; based on this property, a saliency threshold can be determined. The saliency threshold is higher than the saliency of the edge interference information and lower than the saliency of the edge icon, and it is used to filter the edge interference information out of the second candidate edge image.
In some embodiments, the saliency of each pixel can be read from the saliency map corresponding to the image, a saliency histogram can be built from these per-pixel saliency values, and the saliency threshold can then be determined from the histogram. As shown in fig. 4, a schematic diagram of the saliency statistical histogram of an edge icon image provided in an embodiment of the present application, fig. 4 is the histogram corresponding to the edge-image saliency map in the leftmost box of fig. 3. The histogram is a statistical feature of the saliency map: the x-axis is the saliency value, the y-axis is the number of pixels with that value, W is the saliency map width, and H is the saliency map height. In fig. 4, the region from the origin (0, 0) to the first trough is the non-salient region and contains many pixels, while the region from the first trough to the second trough is the edge-saliency region. The x-coordinate of the black dotted line, i.e. the second trough, marks the saliency information of the edge image; this x-coordinate is therefore taken as the saliency threshold that separates the edge interference information from the image information of the edge image.
The saliency binary map is obtained from the edge image's saliency threshold and the saliency map. The process is: pixels whose saliency value in the saliency map is greater than the saliency threshold are set to 255 (white), and pixels whose saliency value is less than or equal to the threshold are set to 0 (black), yielding the saliency binary map.
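A sketch of the whole saliency branch, assuming opencv-contrib-python provides the cv2.saliency module; the trough search below is one simple reading of the histogram description (a smoothed local-minimum scan), not necessarily the exact rule used by the patent:

    import cv2
    import numpy as np

    def saliency_binary_map(candidate_bgr):
        sal = cv2.saliency.StaticSaliencyFineGrained_create()
        ok, sal_map = sal.computeSaliency(candidate_bgr)  # float32 in [0, 1]
        sal_u8 = (sal_map * 255).astype(np.uint8)

        # Saliency histogram over the 256 grey levels, lightly smoothed.
        hist = cv2.calcHist([sal_u8], [0], None, [256], [0, 256]).ravel()
        smooth = np.convolve(hist, np.ones(5) / 5, mode="same")

        # Local minima of the smoothed histogram; the second one plays the
        # role of the "second trough" saliency threshold from fig. 4.
        troughs = [i for i in range(1, 255)
                   if smooth[i] < smooth[i - 1] and smooth[i] <= smooth[i + 1]]
        if len(troughs) >= 2:
            thresh = troughs[1]
        elif troughs:
            thresh = troughs[0]
        else:
            thresh = int(sal_u8.mean())   # fallback for flat histograms

        # Above-threshold pixels -> 255 (icon), the rest -> 0 (edge noise).
        return np.where(sal_u8 > thresh, 255, 0).astype(np.uint8)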
Step S103, determining a target edge image corresponding to the first edge image based on the binary image and the saliency binary image.
In some embodiments, the saliency binary map determined for the second candidate edge image and the binarized map determined for the second candidate edge image are combined with an AND operation; this filters out the edge interference information of the second candidate edge image and yields a clean binarized map of the target edge image. Denote the saliency binary map by bin_img_saliency, the binarized map by bin_img_hsv, and the binarized map of the clean target edge image by bin_img.
The calculation is shown in formula (1):
bin_img = bin_img_hsv & bin_img_saliency    (1)
after the binarized image of the target edge image is obtained, the second position information of the rectangular frame of the region where the binarized image of the target edge image is located in the second candidate edge image can be obtained as (x) by using a findContours method in the Opencv library 1 ,y 1 ,w 1 ,h 1 ) Wherein x is 1 Coordinate value y of the rectangular frame in the x-axis direction at the upper left corner of the second candidate edge image 1 Is the coordinate value of the rectangular frame in the y-axis direction at the upper left corner of the second candidate edge image, w 1 Is the width of a rectangular frame, h 1 Is the height of the rectangular frame. Combining the first position information (x, y, w, h) of the second candidate edge image on the image to be detected obtained in the step S101 to obtain the target position information (x) of the image of the final target edge image on the image to be detected 2 ,y 2 ,w 2 ,h 2 ). And representing the second position information as cable _ region _ roi, the first position information as cable _ roi, and the target position information of the final target edge image on the image to be detected as cable _ location. The calculation formula is shown in formula (2) to formula (5).
lable_location(x2) = lable_region_roi(x1) + lable_roi(x)    (2)
lable_location(y2) = lable_region_roi(y1) + lable_roi(y)    (3)
lable_location(w2) = lable_region_roi(w1)    (4)
lable_location(h2) = lable_region_roi(h1)    (5)
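A minimal sketch of formulas (1) to (5), assuming OpenCV 4's findContours signature; taking the largest surviving contour as the target icon is an assumption, since the text does not say how multiple remaining regions would be handled:

    import cv2

    def locate_target(bin_img_hsv, bin_img_saliency, lable_roi):
        # lable_roi = (x, y, w, h): first position information of the second
        # candidate edge image on the image to be detected.
        bin_img = cv2.bitwise_and(bin_img_hsv, bin_img_saliency)  # formula (1)

        contours, _ = cv2.findContours(bin_img, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None                    # nothing survived the filtering
        x1, y1, w1, h1 = cv2.boundingRect(max(contours, key=cv2.contourArea))

        x, y, _, _ = lable_roi
        return (x1 + x, y1 + y, w1, h1)    # formulas (2)-(5)

The returned tuple is the target position information (x2, y2, w2, h2) on the image to be detected.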
In some embodiments, performing the AND operation on the binarized map corresponding to the second candidate edge image and the saliency binary map corresponding to the second candidate edge image filters out the edge interference information of the second candidate edge image, yielding the final clean target edge image and, in turn, its accurate target position information. Matching the first edge image on the template image against the target edge image on the image to be detected based on this target position information improves the detection accuracy of edge images, and resolves the false detections, and the resulting drop in line efficiency, caused by the edge interference information of edge images.
Based on the image matching method shown in fig. 1 to fig. 4, the resulting matching effect on printed content is shown in fig. 5, a schematic diagram of the printed-content matching effect of the image matching method provided in an embodiment of the present application.
The workflow of the image matching system provided by the embodiment of the present application is explained below. Referring to fig. 6, fig. 6 is a schematic diagram of a workflow of an image matching system provided in an embodiment of the present application.
In some embodiments, the template image and the image to be detected are in one-to-one correspondence; therefore, based on the first edge image on the template image 601, the second candidate edge image 604 on the image to be detected 602 corresponding to the template image 601 is obtained.
The process of determining the second candidate edge image is as follows: acquire the rectangular frame of the first edge image's position in the template image and expand it outward by a certain number of pixels to obtain the region of interest of the first edge image. As an example, the rectangular frame may be expanded outward by 200 pixels. A candidate region on the image to be detected corresponding to the template image is determined from this region of interest, its position information on the image to be detected is recorded as the first position information, the candidate region 603 is located on the image to be detected based on the first position information, the candidate region is segmented from the image to be detected, and the segmented region is taken as the second candidate edge image 604.
The process of extracting the binarized map 605 is as follows: determine the binarized map corresponding to the second candidate edge image according to the color information of the second candidate edge image. As an example, the HSV color information of the second candidate edge image may be extracted to obtain the corresponding binarized map.
Because the second candidate edge image is located at the edge position of the image to be detected, the obtained binary image has edge interference information, and the edge interference information needs to be filtered.
For the second candidate edge image, an image saliency map can be extracted with the fine-grained method in the OpenCV library; the saliency of each pixel is determined from the saliency map, giving the image saliency 606. Next, a saliency histogram 607 is computed from the per-pixel saliency values of the saliency map. From the histogram's features a saliency threshold 608 is obtained: pixels whose saliency value in the saliency map is greater than the threshold are set to 255 (white), and pixels whose saliency value is less than or equal to the threshold are set to 0 (black), yielding the saliency binary map 609. The saliency threshold is the x-coordinate of the second trough of the saliency statistical histogram.
After the saliency binary map and the binarized map of the second candidate edge image are obtained, an AND operation is performed on them, the edge interference information is filtered out to obtain a clean binarized map of the target edge image, and the second position information of the rectangular frame at the target edge image's position on the second candidate edge image is determined. The position of the target edge image on the image to be detected is then obtained from the first position information of the second candidate edge image on the image to be detected and the second position information of the target edge image's binarized map on the second candidate edge image; combining the two gives the accurate position 610 of the target edge image. Image matching is finally performed on the target edge image based on this accurate position.
Fig. 7 is a schematic diagram of an alternative apparatus structure of an image matching apparatus according to an embodiment of the present application, where the image matching apparatus 700 includes a candidate edge image determining module 701, a binarized map and saliency binarized map determining module 702, and a target edge image determining module 703. Wherein,
a candidate edge image determining module 701, configured to determine, based on a first edge image on a template image, a second candidate edge image on an image to be detected corresponding to the template image;
a binarized map and saliency binary map determining module 702, configured to obtain a binarized map corresponding to the second candidate edge image and a saliency binary map corresponding to the second candidate edge image;
a target edge image determining module 703, configured to determine a target edge image corresponding to the first edge image based on the binarized map and the saliency binary map.
In some embodiments, the candidate edge image determining module 701 is specifically configured to: determine a rectangular frame corresponding to the first edge image; expand the rectangular frame to obtain a first region of interest corresponding to the first edge image; and determine a second candidate edge image based on the first region of interest and the image to be detected;
the candidate edge image determining module 701 is specifically configured to: determining a candidate region corresponding to the first region of interest in the image to be detected; segmenting the candidate region from the image to be detected; and determining the segmented candidate region as the second candidate edge image.
In some embodiments, the binarized map and saliency binary map determining module 702 is specifically configured to: determining a binary image corresponding to the second candidate edge image according to the color information of the second candidate edge image; determining a saliency map corresponding to the second candidate edge image according to the second candidate edge image; counting a significance histogram based on the significance of the pixel points in the significance map; determining a saliency threshold based on the saliency histogram; determining the saliency binary map based on the saliency threshold and the saliency map.
In some embodiments, the target edge image determination module 703 is specifically configured to: performing AND operation on the binary image and the saliency binary image to obtain a binary image of a target edge image; and determining the target edge image on the image to be detected based on the target position information of the binary image of the target edge image.
The target edge image determining module 703 is specifically configured to: acquiring first position information of the second candidate edge image on the image to be detected; acquiring second position information of a binary image of the target edge image on the second candidate edge image; determining target position information of the target edge image on the image to be detected based on the first position information and the second position information; and determining a target edge image on the image to be detected based on the target position information.
It should be noted that the image matching apparatus in the embodiment of the present application is similar to the description of the embodiment of the image matching method, and has similar beneficial effects to the embodiment of the method, and therefore, the description is omitted here. The inexhaustible technical details in the image matching apparatus provided in the embodiment of the present application can be understood from the description of any one of fig. 1 to 6.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. The electronic device 800 is used to implement the image matching method of the disclosed embodiments. In some alternative embodiments, the electronic device 800 may implement the image matching method provided by the embodiments of the present application by running a computer program, which may be, for example: a software module in an operating system; a native application (APP), i.e. a program that must be installed in an operating system to run; an applet, i.e. a program that only needs to be downloaded into a browser environment to run; or an applet that can be embedded into any APP. In general, the computer program may be any form of application, module or plug-in.
In practical applications, the electronic device 800 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big-data and artificial-intelligence platforms. Cloud technology refers to a hosting technology that unifies series of resources such as hardware, software, and networks within a wide area network or a local area network to realize the computing, storage, processing, and sharing of data. The electronic device 800 may also be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart television, a smart watch, and the like.
Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, in-vehicle terminals, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the applications described and/or claimed herein.
As shown in fig. 8, the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. The RAM 803 can also store various programs and data required for the operation of the electronic device 800. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the electronic device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of various general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 801 executes the respective methods and processes described above, such as the image matching method. For example, in some alternative embodiments, the image matching method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some alternative embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the image matching method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the image matching method by any other suitable means (e.g., by means of firmware).
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which when executed by a processor, cause the processor to execute the image matching method provided by the embodiments of the present application.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable image matching apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable image matching apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable image matching apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that, in the various embodiments of the present application, the size of the serial number of each implementation process does not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (10)

1. An image matching method, characterized in that the method comprises:
determining a second candidate edge image on the image to be detected corresponding to the template image based on the first edge image on the template image;
acquiring a binary image corresponding to the second candidate edge image and a saliency binary image corresponding to the second candidate edge image;
and determining a target edge image corresponding to the first edge image based on the binary image and the saliency binary image.
2. The method according to claim 1, wherein determining a second candidate edge image on the image to be detected corresponding to the template image based on the first edge image on the template image comprises:
determining a rectangular frame corresponding to the first edge image;
expanding the rectangular frame to obtain a first region of interest corresponding to the first edge image;
and determining a second candidate edge image based on the first region of interest and the image to be detected.
3. The method according to claim 2, wherein determining a second candidate edge image based on the first region of interest and the image to be detected comprises:
determining a candidate region corresponding to the first region of interest in the image to be detected;
segmenting the candidate region from the image to be detected;
and determining the segmented candidate region as the second candidate edge image.
4. The method according to claim 1, wherein the obtaining of the binary image corresponding to the second candidate edge image and the saliency binary image corresponding to the second candidate edge image comprises:
and determining a binary image corresponding to the second candidate edge image according to the color information of the second candidate edge image.
5. The method according to claim 1, wherein the obtaining of the binary image corresponding to the second candidate edge image and the saliency binary image corresponding to the second candidate edge image comprises:
determining a saliency map corresponding to the second candidate edge image according to the second candidate edge image;
counting a significance histogram based on the significance of the pixel points in the significance map;
determining a saliency threshold based on the saliency histogram;
determining the saliency binary map based on the saliency threshold and the saliency map.
6. The method according to claim 1, wherein the determining a target edge image corresponding to the first edge image based on the binarization map and the saliency binary map comprises:
performing AND operation on the binary image and the saliency binary image to obtain a binary image of a target edge image;
and determining the target edge image based on the target position information of the binary image of the target edge image.
7. The method according to claim 6, wherein the determining the target edge image based on the target position information of the binarized map of the target edge image comprises:
acquiring first position information of the second candidate edge image on the image to be detected;
acquiring second position information of a binary image of the target edge image on the second candidate edge image;
determining target position information of the target edge image on the image to be detected based on the first position information and the second position information;
and determining a target edge image on the image to be detected based on the target position information.
8. An image matching apparatus, characterized in that the apparatus comprises:
the second candidate edge image determining module is used for determining a second candidate edge image on the image to be detected corresponding to the template image based on the first edge image on the template image;
a binarization image and saliency binary image determining module, configured to obtain a binarization image corresponding to the second candidate edge image and a saliency binary image corresponding to the second candidate edge image;
and the target edge image determining module is used for determining a target edge image corresponding to the first edge image based on the binary image and the saliency binary image.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A computer-readable storage medium comprising a set of computer-executable instructions that, when executed, perform the image matching method of any of claims 1-7.
Application CN202210597229.2A, filed 2022-05-27 (priority 2022-05-27): Image matching method and device and electronic equipment. Status: Pending. Publication: CN115063614A.

Priority Applications (1)

Application Number: CN202210597229.2A · Priority Date: 2022-05-27 · Filing Date: 2022-05-27 · Title: Image matching method and device and electronic equipment

Applications Claiming Priority (1)

Application Number: CN202210597229.2A · Priority Date: 2022-05-27 · Filing Date: 2022-05-27 · Title: Image matching method and device and electronic equipment

Publications (1)

Publication Number: CN115063614A · Publication Date: 2022-09-16

Family

ID=83197546

Family Applications (1)

Application Number: CN202210597229.2A · Priority Date: 2022-05-27 · Filing Date: 2022-05-27 · Title: Image matching method and device and electronic equipment

Country Status (1)

Country: CN · Publication: CN115063614A

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024119327A1 (en) * 2022-12-05 2024-06-13 深圳华大生命科学研究院 Image processing method and apparatus, device, and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination