CN110135442B - Evaluation system and method of feature point extraction algorithm - Google Patents

Evaluation system and method of feature point extraction algorithm

Info

Publication number
CN110135442B
CN110135442B · CN201910419047.4A
Authority
CN
China
Prior art keywords
pictures
group
feature
feature point
extraction algorithm
Prior art date
Legal status
Active
Application number
CN201910419047.4A
Other languages
Chinese (zh)
Other versions
CN110135442A (en)
Inventor
戚悦
冯威
蔡少骏
林伟
Current Assignee
Uisee Technologies Beijing Co Ltd
Original Assignee
Uisee Technologies Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Uisee Technologies Beijing Co Ltd
Priority to CN201910419047.4A
Publication of CN110135442A
Application granted
Publication of CN110135442B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method for evaluating a feature point extraction algorithm, and an electronic device on which the method runs. The method comprises: acquiring the feature points of each picture in a plurality of groups of pictures, where each group of pictures comprises at least two pictures of the same place under different illumination conditions, and the feature points are extracted by the same feature point extraction algorithm according to the same feature extraction rule; determining the overall feature point distribution of each group of pictures according to the feature points in each picture; and determining the illumination robustness score of the feature point extraction algorithm according to the overall feature point distribution of each group of pictures.

Description

Evaluation system and method of feature point extraction algorithm
Technical Field
The application relates to the field of computer vision, in particular to a system and a method for evaluating a feature point extraction algorithm.
Background
With the development of computer vision technology, feature points extracted by feature point extraction algorithms are widely used in visual positioning and map construction. In general, the feature points should have illumination invariance and scale invariance. Current evaluation methods for feature point extraction algorithms generally select a reference picture, compute the repetition rate between it and other pictures, and then score the feature point extraction algorithm according to that repetition rate. This approach has several drawbacks: it depends too heavily on the reference picture, which easily introduces noise, and it says nothing about invalid feature points or feature points with high illumination robustness. A method that can evaluate feature point extraction algorithms more comprehensively is therefore needed.
Disclosure of Invention
To address these problems, the present application provides a new technical solution that removes the excessive dependence on a reference picture when evaluating a feature point extraction algorithm.
The first aspect of the present application provides an evaluation method for a feature point extraction algorithm, including: acquiring feature points of each picture in a plurality of groups of pictures, wherein each group of pictures comprises at least two pictures in the same place under different illumination conditions, and the feature points are extracted and obtained by the same feature point extraction algorithm; determining the overall characteristic point distribution of each group of pictures according to the characteristic points in each picture; and determining the illumination robustness score of the feature point extraction algorithm according to the overall feature point distribution of each group of pictures.
In some embodiments, the determining the overall feature point distribution of each group of pictures according to the feature points of each picture includes: marking the pixel points of each picture based on the feature points; determining a neighborhood of each feature point in each picture; marking the pixel points within each neighborhood as feature points; and determining the overall feature point distribution of each group of pictures according to the marks in each picture.
In some embodiments, the determining the overall feature point distribution of each group of pictures according to the marks in each picture includes: superposing the marks of the pixel points at corresponding positions of the pictures in each group to obtain the overall feature point distribution of the group, where the overall feature point distribution comprises the number of times each pixel point in the group of pictures is a feature point.
In some embodiments, each pixel point in each group of pictures is scored based on the overall feature point distribution.
In some embodiments, scoring each pixel point in each group of pictures comprises: scoring each pixel point based on the number of times that pixel point is a feature point in the group, where the score of each pixel point is a nonlinear function of the number of times the pixel point occurs as a feature point.
In some embodiments, the illumination robustness score of the feature point extraction algorithm comprises at least one of an overall performance score of the feature point extraction algorithm over the plurality of groups of pictures or an average number of highly robust feature points extracted by the feature point extraction algorithm in each group of pictures in the plurality of groups of pictures.
In some embodiments, the determining the illumination robustness score of the feature point extraction algorithm according to the overall feature point distribution of each group of pictures comprises: determining the average confidence of the feature points of each group of pictures according to the overall feature point distribution of each group of pictures; and determining the overall performance score of the feature point extraction algorithm on the multiple groups of pictures according to the average confidence of the feature points of each group of pictures.
In some embodiments, the determining the average confidence of the feature points of each group of pictures according to the overall feature point distribution of each group of pictures includes: determining a pixel score for each of the feature points in each group of pictures; determining the number of the feature points of each group of pictures; and determining the average confidence of the feature points of each group of pictures according to the pixel score of each feature point in each group of pictures and the number of the feature points of each group of pictures.
In some embodiments, the illumination robustness score of the feature point extraction algorithm comprises the average number of high-robustness feature points extracted by the feature point extraction algorithm in each group of the plurality of groups of pictures.
In some embodiments, the determining the illumination robustness score of the feature point extraction algorithm according to the overall feature point distribution of each group of pictures comprises: determining the number of high robustness feature points in each group of pictures according to the overall feature point distribution of each group of pictures; and determining the average number of the high-robustness feature points extracted by the feature point extraction algorithm in each group of the plurality of groups of pictures according to the number of the high-robustness feature points in each group of pictures.
In some embodiments, the high-robustness feature points in each group of pictures comprise the feature points extracted by the feature point extraction algorithm a number of times exceeding an extraction-count threshold in that group of pictures.
A second aspect of the present application proposes an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; when executing the computer program, the processor implements the steps of the evaluation method of the feature point extraction algorithm as described hereinbefore.
A third aspect of the present application provides an evaluation device of a feature point extraction algorithm, including a feature point acquisition unit, an overall feature point distribution determination unit, and an evaluation unit. The feature point acquisition unit is configured to acquire the feature points of each picture in the plurality of groups of pictures; the overall feature point distribution determination unit is configured to determine the overall feature point distribution of each group of pictures according to the feature points in each picture; and the evaluation unit is configured to determine the illumination robustness score of the feature point extraction algorithm according to the overall feature point distribution of each group of pictures.
A fourth aspect of the present application proposes a computer-readable storage medium having a computer program stored thereon. The computer program may, when being executed by a processor, implement the steps of the evaluation method of the feature point extraction algorithm as described hereinbefore.
The evaluation method and system of the feature point extraction algorithm provided by the application evaluate both the overall performance score of the feature point extraction algorithm on the multiple groups of pictures and the average number of high-robustness feature points the algorithm extracts per group, and thereby judge the merits of the feature point extraction algorithm. This avoids dependence on a reference picture while also characterizing invalid feature points and feature points with high illumination robustness, so the feature point extraction algorithm can be evaluated more comprehensively.
Drawings
The following drawings describe in detail exemplary embodiments disclosed in the present application. Wherein like reference numerals represent similar structures throughout the several views of the drawings. Those of ordinary skill in the art will understand that the present embodiments are non-limiting, exemplary embodiments and that the accompanying drawings are for illustrative and descriptive purposes only and are not intended to limit the scope of the present application, as other embodiments may equally fulfill the inventive intent of the present application. Wherein:
FIG. 1 is a scene schematic of an evaluation system of a feature point extraction algorithm according to some embodiments of the present application;
FIG. 2 is a schematic diagram of exemplary hardware and software components of an electronic device according to some embodiments of the present application;
FIG. 3 is an exemplary flow chart of an evaluation method of a feature point extraction algorithm according to some embodiments of the present application;
FIG. 4 is an exemplary flow diagram for determining an overall feature point distribution for each group of pictures according to some embodiments of the present application;
FIGS. 5a and 5b are schematic diagrams of pixel dilation operations according to some embodiments of the present application;
FIGS. 6a-6d are schematic diagrams of determining an overall feature point distribution for each group of pictures according to some embodiments of the present application;
FIG. 7 is an exemplary flow diagram for determining overall performance scores of a feature point extraction algorithm across multiple groups of pictures according to some embodiments of the present application;
FIG. 8 is an exemplary flow chart for determining an average confidence for feature points for each group of pictures according to some embodiments of the present application;
FIG. 9 is an exemplary flow chart for determining an average number of highly robust feature points according to some embodiments of the present application; and
FIG. 10 is a schematic diagram of an evaluation device of a feature point extraction algorithm according to some embodiments of the present application.
Detailed Description
The application discloses an evaluation system and method for a feature point extraction algorithm, which superposes the feature points extracted from each group of pictures by the same feature point extraction algorithm, and scores the illumination robustness of the algorithm according to the feature point superposition result of each group among a plurality of groups of pictures. The illumination robustness score may be a score corresponding to the average confidence of the extracted feature points, which reflects the overall performance of the feature point extraction algorithm on the plurality of groups of pictures; the higher this score, the higher the average illumination robustness of the feature points extracted by the algorithm. The illumination robustness score may also be the average number of feature points extracted by the algorithm more times than a certain threshold; the higher this average number, the more high-robustness feature points the algorithm extracts.
The system and method use a plurality of groups of pictures for evaluation and evaluate all the feature points extracted by the feature extraction algorithm, reflecting not only the average illumination robustness of the extracted feature points but also the average number of high-robustness feature points extracted. The system and method can judge the merits of different feature point extraction algorithms, and the parameters or preprocessing of a single feature point extraction algorithm can also be designed according to the evaluation result, thereby improving the robustness of the feature point extraction algorithm.
In the following detailed description, specific details are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure to those of ordinary skill in the art. However, the disclosure should be understood to be consistent with the scope of the claims and not limited to the specific inventive details. For example, various modifications to the embodiments disclosed herein will be readily apparent to those skilled in the art; and those skilled in the art may now apply the general principles defined herein to other embodiments and applications without departing from the spirit and scope of the present application. For another example, it will be apparent to one skilled in the art that the present application may be practiced without these specific details. In other instances, well known methods, procedures, systems, components, and/or circuits have been described in general terms, but not in detail so as not to unnecessarily obscure aspects of the present application. Accordingly, the disclosure is not limited to the illustrated embodiments, but is consistent with the scope of the claims.
The terminology used in the description presented herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. For example, if a claim element is referred to in the present application in a singular form (e.g., "a," "an," and/or the like), that claim element can also include the plural unless the context clearly dictates otherwise. The terms "comprising" and/or "including" as used in this application are intended to be open-ended. For example, the inclusion of B in A merely indicates the presence of B in A, but does not exclude the possibility that other elements (such as C) may be present in or added to A.
It is to be understood that the terms "system", "unit", "module" and/or "block" as used herein are a way of distinguishing between different components, elements, parts, portions or assemblies at different levels. However, other terms may be used in the present application instead of the above terms if they can achieve the same purpose.
The modules (or units, blocks) described in this application may be implemented as software and/or hardware modules. Unless the context clearly indicates otherwise, when a unit or module is described as being "on", "connected to", or "coupled to" another unit or module, the expression may mean that the unit or module is directly on, linked, or coupled to the other unit or module, or that the unit or module is indirectly on, connected, or coupled to the other unit or module. In this application, the term "and/or" includes any and all combinations of one or more of the associated listed items.
These and other features of the present application, as well as the operation and function of the related elements of structure and the combination of parts and economies of manufacture, may become more apparent upon consideration of the following description, all of which forms a part of this application, with reference to the accompanying drawings. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended to define the limits of the application. It should be understood that the drawings are not to scale.
The flow charts used in this application illustrate the operation of system implementations according to some embodiments of the present application. It should be clearly understood that the operations of the flow diagrams may be performed out of order. Rather, the operations may be performed in reverse order or simultaneously. In addition, one or more other operations may be added to the flowchart. One or more operations may be removed from the flowchart.
Fig. 1 is a scenario diagram of an evaluation system of a feature point extraction algorithm according to some embodiments of the present application. As shown in fig. 1, the evaluation system of the feature point extraction algorithm may include an image acquisition apparatus 110, a network 120, an electronic device 130, and a memory 140.
The image capture device 110 may capture an image. In some embodiments, the image capturing device 110 may capture multiple sets of pictures, and each set of pictures may be at least two pictures in different lighting conditions of the same location. For example, the image capturing device 110 may be a monitoring camera installed at a fixed location for capturing an image of a scene at the fixed location. The monitoring camera can acquire a plurality of pictures of the fixed place under different illumination conditions to form a group of pictures. For another example, the image capturing device 110 may be an on-vehicle camera mounted on the vehicle for capturing images of the surroundings of the vehicle while the vehicle is stopped or running. The vehicle-mounted camera can shoot a plurality of pictures of the place under different lighting conditions when the vehicle runs to the same place.
In some embodiments, image capture device 110 may send the captured sets of pictures to one or more electronic devices via network 120 for storage or processing of the sets of pictures. For example, the image capturing apparatus 110 may transmit the captured sets of pictures to the electronic device 130 via the network 120. In some embodiments, the electronic device 130 may process the plurality of sets of pictures. For example, the electronic device 130 may use the same feature point extraction algorithm to extract feature points in the multiple groups of pictures for instant positioning and mapping. As another example, the electronic device 130 may evaluate the merits of the feature point extraction algorithm. It should be understood that the electronic device 130 may be one or more electronic devices. For example, one electronic device 130 may be used to extract feature points using the same feature point extraction algorithm, and another electronic device 130 may be used to evaluate the merits of the feature point extraction algorithm.
In some embodiments, the image capture device 110 may send the captured sets of pictures to the memory 140 via the network 120, and the memory 140 is configured to store the sets of pictures. In some embodiments, the electronic device 130 may transmit the extracted feature points or the evaluation results of the feature point extraction algorithm to the memory 140 through the network 120. The memory 140 is used for storing the feature points or the evaluation results of the feature point extraction algorithm.
Fig. 2 is a schematic diagram of exemplary hardware and software components of a computing device 200 according to some embodiments of the present application. An electronic device implementing the evaluation method of the feature point extraction algorithm may be built on such a computing device. For example, the electronic device 130 depicted in FIG. 1 may include the exemplary hardware or software components of the computing device 200 depicted in FIG. 2 for evaluating the feature point extraction algorithm.
In some embodiments, the computing device 200 may be a dedicated computer device specifically designed for evaluating feature point extraction algorithms. For example, the computing device 200 may include a COM port 250 connected to a network to facilitate data communications. The computing device 200 may also include a processor 220, in the form of one or more processors, for executing computer instructions. The computer instructions may include, for example, routines, programs, objects, components, data structures, procedures, modules, and functions that perform the particular functions described herein.
In some embodiments, the processor 220 may include one or more hardware processors, such as microcontrollers, microprocessors, Reduced Instruction Set Computers (RISC), Application Specific Integrated Circuits (ASICs), application specific instruction set processors (ASIPs), Central Processing Units (CPUs), Graphics Processing Units (GPUs), Physical Processing Units (PPUs), microcontroller units, Digital Signal Processors (DSPs), Field Programmable Gate Arrays (FPGAs), Advanced RISC Machines (ARMs), Programmable Logic Devices (PLDs), any circuit or processor capable of executing one or more functions, or the like, or any combination thereof.
The computing device 200 may include an internal communication bus 210, program storage, and various forms of data storage devices (e.g., disk 270, read-only memory (ROM) 230, or random access memory (RAM) 240) for the various data files processed and/or transmitted by the computer. The pictures captured by the image capture device 110 may be stored in these storage devices. The computing device 200 may also include program instructions stored in the ROM 230, RAM 240, and/or other types of non-transitory storage media, to be executed by the processor 220. The methods and/or processes of the present application may be implemented as such program instructions. The computing device 200 also includes I/O components 260 that support input/output between the computer and other components (e.g., user interface elements). The computing device 200 may also receive programs and data via network communications.
For illustrative purposes, only one processor is depicted for the computing device 200 described in the present application. It should be noted, however, that the computing device 200 may include multiple processors; thus, operations and/or method steps disclosed herein may be performed by one processor or by a combination of processors. For example, if the processor 220 of the computing device 200 performs steps A and B in the present application, it should be understood that steps A and B could also be performed jointly or separately by two different processors (e.g., a first processor performing step A and a second processor performing step B, or the first and second processors performing steps A and B together).
Fig. 3 is an exemplary flow diagram of an evaluation method 300 of a feature point extraction algorithm according to some embodiments of the present application.
In 310, the electronic device 130 may obtain the feature points of each picture in the plurality of groups of pictures. In some embodiments, each of the plurality of groups of pictures may include at least two pictures of the same place under different lighting conditions. For example, the plurality of groups of pictures may include a group of pictures of building A under different lighting conditions, a group of pictures of person B under different lighting conditions, a group of pictures of animal C under different lighting conditions, and a group of pictures of crossing D under different lighting conditions. In some embodiments, the plurality of groups of pictures may be captured by one or more image capturing devices 110, or may be stored in the memory 140.
In some embodiments, the feature points may be extracted by the same feature point extraction algorithm according to the same feature extraction rule. For example, each picture in the multiple groups of pictures may be used as an input to the same feature point extraction algorithm, which outputs a feature image corresponding to each picture according to its same feature extraction rule. Each feature image comprises a plurality of pixel points, and the feature points may be pixel points in the feature image corresponding to the respective picture. In some embodiments, the electronic device 130 may set the number of feature points to extract. For example, the electronic device 130 may extract 500, 1000, or 2000 feature points on the feature image corresponding to each picture of the multiple groups of pictures.
It should be understood that the electronic device that extracts the feature points may be the same as or different from the electronic device that evaluates the feature point extraction algorithm. For example, the electronic device a may continue to evaluate the merits of the feature point extraction algorithm after extracting the feature points. For another example, the electronic device a may be configured to extract feature points, and send the extracted feature points to the electronic device B, and the electronic device B may be configured to evaluate the quality of the feature point extraction algorithm. It should be understood that the same feature extraction rule of the same feature point extraction algorithm may be any one of the same methods or rules, and is not limited in this application.
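As a concrete illustration of step 310, the following Python sketch extracts a fixed number of feature points from every picture with one and the same algorithm and extraction rule. It uses OpenCV's ORB purely as an example; the application does not prescribe any particular feature point extraction algorithm, and the data layout shown is an assumption.

    import cv2

    def extract_feature_points(groups, n_features=500):
        """groups: list of groups, each a list of grayscale images (numpy arrays).
        Returns, per group and per picture, the list of (x, y) feature points."""
        orb = cv2.ORB_create(nfeatures=n_features)  # same algorithm, same extraction rule
        results = []
        for pictures in groups:
            group_points = []
            for img in pictures:
                keypoints = orb.detect(img, None)   # extract up to n_features points
                group_points.append([kp.pt for kp in keypoints])
            results.append(group_points)
        return results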
In 320, the electronic device 130 may determine the overall feature point distribution of each group of pictures according to the feature points in each picture. In some embodiments, the overall feature point distribution of each group of pictures may reflect how densely the feature points extracted by the feature point extraction algorithm are distributed across the multiple pictures of the same place under different lighting conditions. For example, from the feature points in a group of pictures of building A under different lighting conditions, the electronic device 130 may determine at which positions the feature points extracted by the feature extraction algorithm are distributed densely and at which positions they are distributed sparsely.
In some embodiments, the electronic device 130 may mark the feature points in each picture and superpose the marked feature points of the multiple pictures in each group to obtain the overall feature point distribution of that group. For example, the electronic device 130 may mark the feature value of a feature point as 1 and the feature value of a non-feature point as 0; the overall feature point distribution of each group of pictures is then the superposition of the marked feature values at corresponding positions. In some embodiments, the electronic device 130 may further mark the pixel points in the neighborhood of each extracted feature point, setting their feature values to 1, i.e., after the pixel points around each feature point have been marked as feature points, superpose the marked feature values of the multiple pictures in each group to obtain the overall feature point distribution of the corresponding group. The method of determining the overall feature point distribution of each group of pictures from the feature points in each picture is described in detail in connection with fig. 4 of the present application.
In 330, the electronic device 130 may determine the illumination robustness score of the feature point extraction algorithm according to the overall feature point distribution of each group of pictures. In some embodiments, the illumination robustness score may reflect the stability of the feature points extracted by the feature point extraction algorithm under different illumination conditions: the higher the score, the stronger the illumination invariance of the algorithm, and the more robust its extracted feature points are under different illumination conditions. In some embodiments, the illumination robustness score may include the overall performance score s_avg of the feature point extraction algorithm on the plurality of groups of pictures, the average number k_avg of high-robustness feature points extracted by the algorithm in each group of the plurality of groups of pictures, or a combination of the two. For example, the electronic device 130 may use s_avg or k_avg alone as the illumination robustness score to compare the illumination robustness of different feature point extraction algorithms. When the electronic device performing the visual positioning has strong computing power, it may adopt a feature point extraction algorithm whose overall performance score s_avg is below a certain score threshold but whose average number k_avg of high-robustness feature points is above a certain count threshold, to improve map construction quality or visual positioning. When there are too many interference points or invalid points in the map, the electronic device may adopt a feature point extraction algorithm whose overall performance score s_avg is above the score threshold. For another example, the electronic device 130 may use s_avg and k_avg together as the illumination robustness score. When s_avg and k_avg are above the score threshold and the count threshold, respectively, the electronic device 130 may judge the illumination robustness of the corresponding feature point extraction algorithm to be high; when both are below their respective thresholds, it may judge the illumination robustness to be low; and when one is above its threshold and the other below, the electronic device 130 may evaluate the illumination robustness using the corresponding rule for that case.
In some embodiments, the score threshold and/or the count threshold may be set by the electronic device 130 according to a certain rule (e.g., a machine learning algorithm), or set empirically by an operator of the electronic device 130 and stored in any storage device (e.g., the memory 140, the read-only memory 230, the random access memory 240, etc.). For another example, the electronic device 130 may use a weighted average of the overall performance score s_avg and the average number k_avg of high-robustness feature points as the illumination robustness score of the feature point extraction algorithm; one such combination is sketched below.
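A minimal sketch of this decision logic follows; the threshold values, the weight, and the rescaling of k_avg are illustrative assumptions, not values given by the application:

    def judge_robustness(s_avg, k_avg, score_threshold=10.0, count_threshold=200):
        # score_threshold and count_threshold are assumed values; the application
        # leaves them to a rule (e.g., machine learning) or to the operator.
        if s_avg >= score_threshold and k_avg >= count_threshold:
            return "high illumination robustness"
        if s_avg < score_threshold and k_avg < count_threshold:
            return "low illumination robustness"
        return "mixed case: apply the corresponding rule"

    def combined_score(s_avg, k_avg, weight=0.5, k_scale=1000.0):
        # Weighted average of the two indicators; k_avg is rescaled so the two
        # terms are comparable (weight and k_scale are assumptions).
        return weight * s_avg + (1.0 - weight) * (k_avg / k_scale)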
In some embodiments, the overall performance score s_avg of the feature point extraction algorithm on the plurality of groups of pictures may be determined from the average confidence s_i of the feature points extracted by the feature point extraction algorithm in each group of pictures. For example, the overall performance score s_avg may be the average value of the per-group average confidences s_i. The average confidence s_i of the feature points in each group of pictures is related to the frequency with which the feature points are extracted: within a group of pictures, the more frequently the feature points are extracted, the higher the corresponding feature values in the group's overall feature point distribution, and the higher the average confidence s_i of that group's feature points. The method of determining the overall performance score s_avg of the feature point extraction algorithm on the plurality of groups of pictures from the overall feature point distribution of each group is described in detail in connection with figs. 7 and 8 of the present application.
In some embodiments, the average number k_avg of high-robustness feature points extracted by the feature point extraction algorithm in each group of the plurality of groups of pictures may be determined from the number k_i of feature points in each group whose extraction count exceeds a certain extraction-count threshold (the number k_i of high-robustness feature points). For example, the average number k_avg of high-robustness feature points may be the average value of the numbers k_i over all groups. The method of determining the average number k_avg of high-robustness feature points from the overall feature point distribution of each group of pictures is described in detail in connection with fig. 9 of the present application.
In some embodiments, the illumination robustness score of the feature point extraction algorithm may be used to determine the merits of different feature point extraction algorithms. For example, the electronic device 130 may evaluate a plurality of different feature point extraction algorithms separately to determine an illumination robustness score corresponding to each feature point extraction algorithm. The higher the illumination robustness score of the feature point extraction algorithm, the stronger the illumination invariance of the feature point extraction algorithm, which indicates that the robustness of the feature points extracted by the feature point extraction algorithm under different illumination conditions is stronger. In some embodiments, the illumination robustness score of the feature point extraction algorithm may be used to optimize the same feature point extraction algorithm. For example, the electronic device 130 may evaluate the illumination robustness scores of the same feature point extraction algorithm under different parameters for performing parameter design or preprocessing on the same feature point extraction algorithm.
Fig. 4 is an exemplary flow diagram for determining an overall feature point distribution for each group of pictures according to some embodiments of the present application.
In 410, for each picture in each group of pictures, the electronic device 130 may mark feature points and non-feature points therein. For example, after the electronic device extracts feature points in each picture according to the same extraction rule of the same feature point extraction algorithm, the electronic device 130 may mark feature values of the extracted feature points as 1, mark remaining pixel points that are not extracted as non-feature points, and mark feature values of the non-feature points as 0.
In 420, the electronic device 130 may determine a neighborhood of each of the feature points in each picture. In some embodiments, the neighborhood of each feature point may be a region of arbitrary shape or size centered on the feature point. The region may be rectangular, circular, elliptical, etc., or any combination thereof. For example, the neighborhood of each feature point may be a circular region of radius R centered on the feature point. For another example, as shown in fig. 5a, the neighborhood of the feature point whose feature value is labeled 1 may be a rectangular region centered on that feature point.
At 430, the electronic device 130 may mark the pixel points in each neighborhood as feature points. In some embodiments, for the pixel points in the neighborhood of each feature point in each picture, the electronic device marks them as feature points and sets their feature values to 1. For example, as shown in fig. 5b, the feature values of the pixel points in the 3 x 3 rectangular neighborhood of the pixel labeled with feature value 1 are all set to 1. It should be understood that figs. 5a and 5b are only one example and are not intended to limit the neighborhood or its labeling. For example, the neighborhood may be any other regularly or irregularly shaped region. For another example, when a neighborhood contains both pixels already marked as feature points and non-feature points, the electronic device 130 may mark all of them as feature points.
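The marking and dilation of steps 410-430 can be sketched as follows, assuming the 3 x 3 rectangular neighborhood of figs. 5a and 5b; the neighborhood shape and size are a design choice, and the helper name is illustrative.

    import numpy as np
    import cv2

    def feature_mask(shape, points, kernel_size=3):
        """Mark each feature point as 1 (step 410), then mark every pixel in its
        rectangular neighborhood as a feature point too (steps 420-430)."""
        mask = np.zeros(shape, dtype=np.uint8)
        for x, y in points:
            mask[int(round(y)), int(round(x))] = 1       # feature value 1; others stay 0
        kernel = np.ones((kernel_size, kernel_size), np.uint8)
        return cv2.dilate(mask, kernel)                  # neighborhood pixels become 1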
At 440, the electronic device 130 may determine the overall feature point distribution of each group of pictures according to the feature points marked in each picture. In some embodiments, the overall feature point distribution of a group of pictures may represent the superposition of the feature points in the pictures the group comprises. For example, as shown in figs. 6a to 6d, a group of pictures includes the three pictures of figs. 6a, 6b, and 6c, whose labeled feature values are distributed as shown. A pixel with a feature value of 1 may be a feature point extracted by the same feature extraction algorithm according to the same feature extraction rule, or a pixel in a neighborhood labeled in step 430; a pixel with a feature value of 0 is a non-feature point that was neither extracted nor marked. The electronic device 130 may add the feature values of the pixel points at corresponding positions in the three pictures of figs. 6a, 6b, and 6c to obtain the overall feature point distribution of the group, shown in fig. 6d. In fig. 6d, a feature value of 3 indicates that the pixel at the corresponding position was extracted by the feature extraction algorithm or marked as a feature point in all three pictures of the group; a feature value of 2 indicates it was extracted or marked in two of the pictures; a feature value of 1 indicates it was extracted or marked in one of the pictures; and a feature value of 0 indicates it was not extracted or marked as a feature point in any of the three pictures.
It should be understood that the overall feature point distributions shown in fig. 6a-6d are only examples, and the overall feature point distribution of each group of pictures may be any form of statistical result. For example, the overall characteristic point distribution of each group of pictures can be represented in the form of a graph, a curve, a contour line, and the like, or any combination thereof.
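Under the same assumptions, the superposition of step 440 reduces to summing the per-picture masks of a group, as in figs. 6a-6d; a minimal sketch:

    import numpy as np

    def overall_distribution(masks):
        """masks: one 0/1 mask per picture in a group, all of equal shape.
        Each cell of the result counts in how many pictures that pixel is a
        feature point, exactly like the sums shown in fig. 6d."""
        return np.sum(np.stack(masks, axis=0), axis=0).astype(np.int32)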
Fig. 7 is an exemplary flow diagram for determining overall performance scores of a feature point extraction algorithm across multiple sets of pictures according to some embodiments of the present application.
In 710, the electronic device 130 may determine the average confidence s_i of the feature points of each group of pictures according to the overall feature point distribution of the group. The average confidence s_i represents the average probability that the feature points of a group of pictures are extracted by the feature point extraction algorithm; the higher s_i, the higher the illumination robustness of the feature points extracted by the feature point extraction algorithm from that group of pictures. The electronic device 130 may determine the average confidence s_i of the feature points of each group of pictures according to the exemplary process shown in fig. 8.
In 810, the electronic device 130 may score the pixel points at corresponding positions of each group of pictures according to the group's overall feature point distribution, and determine a pixel score for each feature point in the group. The scoring rules may be set by the electronic device 130 according to a certain rule (e.g., a machine learning algorithm), or set empirically by an operator of the electronic device 130 and stored in any storage device (e.g., the memory 140, the read-only memory 230, the random access memory 240, etc.). For example, points extracted or marked a large number of times in a group of pictures may be taken as valid feature points, which contribute positively to evaluating the feature extraction algorithm; the scoring rule therefore assigns them positive scores, and the more times a point is extracted or marked, the higher its score. Points extracted or marked only a few times in a group of pictures may be taken as inefficient feature points, which affect the evaluation negatively; the scoring rule therefore assigns them negative scores, and the fewer times a point is extracted or marked, the lower its score. As another example, when each group comprises 6 pictures of the same place under different lighting conditions, after obtaining the group's overall feature point distribution, the electronic device 130 may score the pixel points at corresponding positions according to the nonlinear scoring criteria shown in Table 1. The electronic device 130 may score each pixel point in each group of pictures based on the number of times that pixel point is a feature point; the score of each pixel point is a nonlinear function of that number.
    Feature value of pixel point    Pixel score
    0                                0
    1                               -4
    2                               -1
    3                                4
    4                                4
    5                               25
    6                               25

Table 1  Scoring criteria
As shown in Table 1, when the feature value of a pixel point is 0, i.e., the pixel point was not extracted or marked by the feature point extraction algorithm in any of the 6 pictures of the group (the point occurs as a feature point 0 times in the 6 pictures), its pixel score is 0. When the feature value is 1 or 2, i.e., the pixel point was extracted or marked in 1 or 2 of the group's pictures, its pixel score is -4 or -1, respectively. When the feature value is 3 or 4, i.e., the pixel point was extracted or marked in 3 or 4 of the group's pictures, its pixel score is 4. When the feature value is 5 or 6, i.e., the pixel point was extracted or marked in 5 or all 6 pictures of the group, its pixel score is 25. A pixel point extracted or marked in only 1 picture of the group is taken as an inefficient feature point, which affects the evaluation of the feature extraction algorithm negatively, and its corresponding score is -4.
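A minimal sketch of this scoring step, with the lookup values taken directly from Table 1 (so it assumes groups of 6 pictures; other group sizes would need their own table):

    import numpy as np

    # Index = feature value (0..6 for groups of 6 pictures), value = pixel score.
    PIXEL_SCORES = np.array([0, -4, -1, 4, 4, 25, 25])

    def score_pixels(distribution):
        """distribution: a group's overall feature point distribution (counts)."""
        return PIXEL_SCORES[distribution]   # nonlinear per-pixel scores by table lookup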
At 820, the electronic device 130 can determine the number of feature points of each group of pictures. For example, the electronic device 130 may count the number n of pixel points whose feature value is not 0 (i.e., the feature points) in each group of pictures.
At 830, the electronic device 130 may determine the average confidence s_i of the feature points of each group of pictures according to the pixel score of each feature point in the group and the number of feature points in the group. In some embodiments, the electronic device 130 may sum the pixel scores of all pixel points in each group of pictures to obtain the total pixel score S of the group, and determine the ratio of the total pixel score S to the number of feature points n as the average confidence of the group's feature points, i.e., s_i = S / n.
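Steps 820 and 830 can be sketched as follows, again using the Table 1 scores; the handling of a group with no feature points is an added assumption:

    import numpy as np

    PIXEL_SCORES = np.array([0, -4, -1, 4, 4, 25, 25])  # Table 1

    def average_confidence(distribution):
        n = np.count_nonzero(distribution)          # step 820: feature points have value != 0
        S = int(PIXEL_SCORES[distribution].sum())   # total pixel score S of the group
        return S / n if n > 0 else 0.0              # step 830: s_i = S / n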
In 720, the electronic device 130 may determine the overall performance score s_avg of the feature point extraction algorithm on the plurality of groups of pictures from the average confidences s_i of the feature points of each group of pictures. In some embodiments, the electronic device 130 may sum the average confidences s_i of the feature points of each of the plurality of groups of pictures to obtain the sum s of the average confidences of the groups. The overall performance score s_avg may then be the ratio of this sum s to the number N of groups, i.e.

s_avg = s / N = (s_1 + s_2 + ... + s_N) / N.

For example, when there are 4 groups of pictures, the overall performance score of the feature point extraction algorithm on the 4 groups is

s_avg = (s_1 + s_2 + s_3 + s_4) / 4,

where s_1, s_2, s_3 and s_4 are the average confidences of the feature points of the 4 groups of pictures.
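A minimal sketch of this averaging step, assuming the per-group average confidences s_i have already been computed (e.g., with the average_confidence helper sketched above):

    def overall_performance_score(confidences):
        """confidences: the per-group average confidences s_1, ..., s_N."""
        return sum(confidences) / len(confidences)   # s_avg = (s_1 + ... + s_N) / N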
FIG. 9 is an exemplary flow chart for determining an average number of highly robust feature points according to some embodiments of the present application.
At 910, the electronic device 130 may determine the number k_i of high-robustness feature points in each group of pictures according to the overall feature point distribution of the group. In some embodiments, the high-robustness feature points are the feature points whose extraction count by the feature point extraction algorithm exceeds a certain extraction-count threshold within a group of pictures. For example, when each group contains 6 pictures, the electronic device 130 may set the extraction-count threshold so that a feature point extracted by the feature point extraction algorithm in at least 5 of the 6 pictures of a group is a high-robustness feature point. The electronic device 130 may thus determine the number k_i of high-robustness feature points in each group of pictures.

In some embodiments, the electronic device 130 may determine the number k_i of high-robustness feature points in each group of pictures from the group's overall feature point distribution. For example, the electronic device 130 may take the number of pixel points whose feature value in the overall feature point distribution exceeds the extraction-count threshold as the number k_i of high-robustness feature points in that group. In some embodiments, the extraction-count threshold may be set by the electronic device 130 according to a certain rule (e.g., a machine learning algorithm), or set empirically by an operator of the electronic device 130 and stored in any storage device (e.g., the memory 140, the read-only memory 230, the random access memory 240, etc.).

At 920, the electronic device 130 may determine, from the numbers k_i of high-robustness feature points in each group of pictures, the average number k_avg of high-robustness feature points extracted by the feature point extraction algorithm in each group of the plurality of groups of pictures. In some embodiments, the electronic device 130 may sum the numbers k_i over all groups to obtain the total number k of high-robustness feature points of the plurality of groups of pictures. The average number k_avg may then be the ratio of this total k to the number N of groups, i.e.

k_avg = k / N = (k_1 + k_2 + ... + k_N) / N.

For example, when there are 4 groups of pictures, the average number of high-robustness feature points extracted by the feature point extraction algorithm in each of the 4 groups is

k_avg = (k_1 + k_2 + k_3 + k_4) / 4,

where k_1, k_2, k_3 and k_4 are the numbers of high-robustness feature points in each of the 4 groups of pictures.
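A minimal sketch of steps 910 and 920, assuming groups of 6 pictures and treating a pixel as a high-robustness feature point when its feature value reaches the threshold of 5 from the example above (whether the threshold is met at "at least 5" or strictly "more than 5" is an assumption):

    import numpy as np

    def average_high_robustness_count(distributions, threshold=5):
        """distributions: one overall feature point distribution per group."""
        counts = [int(np.count_nonzero(d >= threshold)) for d in distributions]  # k_i per group
        return sum(counts) / len(counts)   # k_avg = (k_1 + ... + k_N) / N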
Fig. 10 is a schematic diagram of an evaluation device 1000 of a feature point extraction algorithm according to some embodiments of the present application. The evaluation apparatus 1000 of the feature point extraction algorithm may include a feature point acquisition unit 1010, an overall feature point distribution determination unit 1020, and an evaluation unit 1030.
The feature point acquisition unit 1010 may be configured to acquire the feature points of each picture in the plurality of groups of pictures. The feature points are extracted by the same feature point extraction algorithm.
The overall feature point distribution determining unit 1020 may be configured to determine an overall feature point distribution of each group of pictures according to the feature points in each picture.
The evaluation unit 1030 may be configured to determine an illumination robustness score of the feature point extraction algorithm according to the overall feature point distribution of each group of pictures. The evaluation unit 1030 may evaluate the merits of the feature point extraction algorithm based on the illumination robustness score of the feature point extraction algorithm.
The application also proposes an electronic device comprising a memory, a processor and a computer program stored on said memory and executable on said processor. The processor, when executing the computer program, may implement the steps of the evaluation method of the feature point extraction algorithm as described in the foregoing.
The present application also proposes a computer-readable storage medium having stored thereon a computer program. The computer program may, when being executed by a processor, implement the steps of the evaluation method of the feature point extraction algorithm as described in the foregoing.
In conclusion, upon reading the present detailed disclosure, those skilled in the art will appreciate that the foregoing detailed disclosure is presented by way of example only and is not limiting. Those skilled in the art will appreciate that the present application is intended to cover various reasonable variations, adaptations, and modifications of the embodiments described herein, although not explicitly described here. Such alterations, improvements, and modifications are intended to be suggested by this application and are within the spirit and scope of the exemplary embodiments of the application.
Furthermore, certain terminology has been used in this application to describe embodiments of the application. For example, "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined as suitable in one or more embodiments of the application.
It should be appreciated that in the foregoing description of embodiments of the present application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of those features. This application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this invention pertains. This manner of disclosure, however, is not to be taken as an admission that all of these features are essential; upon reading the present application, a person skilled in the art may well extract some of them as separate embodiments. That is, embodiments in the present application may also be understood as an integration of multiple sub-embodiments, and each sub-embodiment may likewise be practiced with fewer than all of the features of a single foregoing disclosed embodiment.
In some embodiments, numbers expressing quantities or properties useful for describing and claiming certain embodiments of the present application are to be understood as being modified in certain instances by the terms "about", "approximately" or "substantially". For example, "about", "approximately" or "substantially" may mean a ± 20% variation of the value it describes, unless otherwise specified. Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as possible.
Each patent, patent application, publication of a patent application, and other material, such as articles, books, specifications, publications, and documents, cited herein is hereby incorporated by reference in its entirety, except for any prosecution file history associated with the same, any portion that is inconsistent with or in conflict with the present document, and any portion that may have a limiting effect on the broadest scope of the claims now or later associated with the present document. For example, if there is any inconsistency or conflict between the description, definition, and/or use of a term in any incorporated material and those associated with the present document, the description, definition, and/or use of the term in the present document shall prevail.
Finally, it should be understood that the embodiments disclosed herein are illustrative of the principles of the embodiments of the present application, and that other modified embodiments are also within the scope of the present application. Accordingly, the disclosed embodiments are presented by way of example only, and not limitation. Those skilled in the art may implement the invention of the present application in alternative configurations according to the embodiments herein. Thus, the embodiments of the present application are not limited to those precisely shown and described in the application.

Claims (12)

1. A method for evaluating a feature point extraction algorithm, applied to an electronic device, characterized by comprising the following steps:
acquiring feature points of each picture in a plurality of groups of pictures, wherein each group of pictures comprises at least two pictures of the same place taken under different illumination conditions, and the feature points are extracted by the same feature point extraction algorithm;
determining the overall feature point distribution of each group of pictures according to the feature points in each picture, comprising:
Determining the neighborhood of each feature point in each picture;
marking pixel points of each picture based on the feature points and the neighborhoods thereof; and
determining the overall feature point distribution of each group of pictures according to the marks in each picture; wherein the determining the overall feature point distribution of each group of pictures according to the marks in each picture comprises:
superposing the marks of the corresponding pixel points of each picture in each group of pictures to obtain the overall feature point distribution of each group of pictures;
wherein the overall feature point distribution comprises the number of times each pixel point in each group of pictures is a feature point; and
determining an illumination robustness score of the feature point extraction algorithm according to the overall feature point distribution of each group of pictures, wherein the illumination robustness score of the feature point extraction algorithm comprises at least one of: an overall performance score of the feature point extraction algorithm on the plurality of groups of pictures, or an average number of high-robustness feature points extracted by the feature point extraction algorithm in each group of the plurality of groups of pictures.
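For illustration only (not part of the claim language), the marking and superposition steps recited in claim 1 might be sketched in Python as follows; the square neighborhood shape, the radius value, and the NumPy array representation are assumptions rather than requirements of the claim:

```python
import numpy as np

def overall_distribution(group_points, shape, radius=2):
    """Mark each feature point and its neighborhood per picture, then
    superpose the marks so each pixel's value is the number of times
    it was (part of) a feature point across the group."""
    h, w = shape
    total = np.zeros(shape, dtype=np.int32)
    for points in group_points:          # feature points of one picture
        mask = np.zeros(shape, dtype=np.int32)
        for x, y in points:
            mask[max(0, y - radius):min(h, y + radius + 1),
                 max(0, x - radius):min(w, x + radius + 1)] = 1
        total += mask                    # superpose the per-picture marks
    return total
```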
2. The method of claim 1, wherein each pixel point in each group of pictures is scored based on the overall feature point distribution.
3. The method according to any one of claims 1-2, wherein determining the overall performance score of the feature point extraction algorithm over the plurality of groups of pictures comprises:
determining the average confidence of the feature points of each group of pictures according to the overall feature point distribution of each group of pictures; and
determining the overall performance score of the feature point extraction algorithm on the plurality of groups of pictures according to the average confidence of the feature points of each group of pictures.
4. The method of claim 3, wherein determining the average confidence level of the feature points of each group of pictures according to the overall feature point distribution of each group of pictures comprises:
determining a pixel score for each of the feature points in each group of pictures;
determining the number of the feature points of each group of pictures; and
determining the average confidence of the feature points of each group of pictures according to the pixel score of each feature point in each group of pictures and the number of the feature points of each group of pictures.
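Likewise for illustration only, one possible reading of the confidence computation in claims 3 and 4, assuming a caller-supplied pixel scoring function and a per-group distribution array such as the one sketched after claim 1:

```python
def average_confidence(distribution, group_points, pixel_score):
    """Average the pixel scores of all feature points in a group;
    pixel_score maps a pixel's mark count to a score."""
    scores = [pixel_score(distribution[y, x])
              for points in group_points
              for x, y in points]
    return sum(scores) / len(scores) if scores else 0.0
```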
5. The method of any one of claims 1-2, wherein determining the average number of high-robustness feature points extracted by the feature point extraction algorithm in each group of the plurality of groups of pictures comprises:
determining the number of high-robustness feature points in each group of pictures according to the overall feature point distribution of each group of pictures; and
determining the average number of high-robustness feature points extracted by the feature point extraction algorithm in each group of the plurality of groups of pictures according to the number of high-robustness feature points in each group of pictures.
6. The method according to claim 5, wherein the high-robustness feature points in each group of pictures comprise feature points that are extracted by the feature point extraction algorithm a number of times exceeding an extraction threshold.
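A sketch, under the same assumptions, of the counting and averaging recited in claims 5 and 6, where a pixel counts as a high-robustness feature point when its mark count exceeds the extraction threshold:

```python
import numpy as np

def count_high_robustness(distribution, extraction_threshold):
    # Pixels extracted more often than the threshold (claim 6).
    return int(np.count_nonzero(distribution > extraction_threshold))

def average_high_robustness(distributions, extraction_threshold):
    # Mean of the per-group counts over all groups (claim 5).
    counts = [count_high_robustness(d, extraction_threshold)
              for d in distributions]
    return sum(counts) / len(counts)
```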
7. A method for evaluating a feature point extraction algorithm, applied to an electronic device, characterized by comprising the following steps:
acquiring feature points of each picture in a plurality of groups of pictures, wherein each group of pictures comprises at least two pictures of the same place taken under different illumination conditions, and the feature points are extracted by the same feature point extraction algorithm;
determining the overall feature point distribution of each group of pictures according to the feature points in each picture; and
determining an illumination robustness score of the feature point extraction algorithm according to the overall feature point distribution of each group of pictures, wherein the illumination robustness score of the feature point extraction algorithm comprises at least one of: an overall performance score of the feature point extraction algorithm on the plurality of groups of pictures, or an average number of high-robustness feature points extracted by the feature point extraction algorithm in each group of the plurality of groups of pictures;
wherein the determining the overall feature point distribution of each group of pictures according to the feature points of each picture comprises: determining a neighborhood of each feature point in each picture, marking pixel points of each picture based on the feature points and the neighborhoods thereof, and determining the overall feature point distribution of each group of pictures according to the marks in each picture;
wherein the determining the overall feature point distribution of each group of pictures according to the marks in each picture comprises: superposing the marks of the corresponding pixel points of each picture in each group of pictures to obtain the overall feature point distribution of each group of pictures, wherein the overall feature point distribution comprises the number of times each pixel point in each group of pictures is a feature point;
scoring each pixel point in each group of pictures based on the overall feature point distribution;
wherein scoring each pixel point in each group of pictures comprises: scoring each pixel point in each group of pictures based on the number of times each pixel point in each group of pictures is a feature point, wherein the score of each pixel point is a nonlinear function of the number of times the corresponding pixel point appears as a feature point.
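Claim 7 leaves the exact nonlinear scoring function open; the power law below is a hypothetical choice shown only to make the nonlinearity concrete:

```python
def pixel_score(times_as_feature_point, exponent=2.0):
    """Hypothetical nonlinear score: grows faster than linearly with
    the number of times a pixel appears as a feature point."""
    return times_as_feature_point ** exponent
```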
8. The method of claim 7, wherein determining the overall performance score of the feature point extraction algorithm over the plurality of groups of pictures comprises:
determining the average confidence of the feature points of each group of pictures according to the overall feature point distribution of each group of pictures; and
determining the overall performance score of the feature point extraction algorithm on the plurality of groups of pictures according to the average confidence of the feature points of each group of pictures.
9. The method of claim 8, wherein determining the average confidence level of the feature points of each group of pictures according to the overall feature point distribution of each group of pictures comprises:
determining a pixel score for each of the feature points in each group of pictures;
determining the number of the feature points of each group of pictures; and
determining the average confidence of the feature points of each group of pictures according to the pixel score of each feature point in each group of pictures and the number of the feature points of each group of pictures.
10. The method of claim 9, wherein determining the average number of high-robustness feature points extracted by the feature point extraction algorithm in each group of the plurality of groups of pictures comprises:
determining the number of high-robustness feature points in each group of pictures according to the overall feature point distribution of each group of pictures; and
determining the average number of high-robustness feature points extracted by the feature point extraction algorithm in each group of the plurality of groups of pictures according to the number of high-robustness feature points in each group of pictures.
11. The method according to claim 10, wherein the high-robustness feature points in each group of pictures comprise feature points that are extracted by the feature point extraction algorithm a number of times exceeding an extraction threshold.
12. An evaluation system of a feature point extraction algorithm, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the evaluation method of the feature point extraction algorithm according to any one of claims 1 to 11 when executing the computer program.
CN201910419047.4A 2019-05-20 2019-05-20 Evaluation system and method of feature point extraction algorithm Active CN110135442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910419047.4A CN110135442B (en) 2019-05-20 2019-05-20 Evaluation system and method of feature point extraction algorithm

Publications (2)

Publication Number Publication Date
CN110135442A CN110135442A (en) 2019-08-16
CN110135442B true CN110135442B (en) 2021-12-14

Family

ID=67571631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910419047.4A Active CN110135442B (en) 2019-05-20 2019-05-20 Evaluation system and method of feature point extraction algorithm

Country Status (1)

Country Link
CN (1) CN110135442B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013252185A (en) * 2012-06-05 2013-12-19 Canon Inc Endoscope and endoscope apparatus
CN106372111B (en) * 2016-08-22 2021-10-15 中国科学院计算技术研究所 Local feature point screening method and system
CN108305235B (en) * 2017-01-11 2022-02-18 北京大学 Method and device for fusing multiple pictures

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9798950B2 (en) * 2015-07-09 2017-10-24 Olympus Corporation Feature amount generation device, feature amount generation method, and non-transitory medium saving program
CN105894050A * 2016-06-01 2016-08-24 北京联合大学 Multi-task learning based method for recognizing race and gender from face images
CN106778474A * 2016-11-14 2017-05-31 深圳奥比中光科技有限公司 3D human body recognition method and device
CN107046640A * 2017-02-23 2017-08-15 北京理工大学 No-reference video stabilization quality assessment method based on inter-frame motion smoothness
CN107316275A * 2017-06-08 2017-11-03 宁波永新光学股份有限公司 Optical-flow-assisted large-scale microscopic image mosaicking algorithm
CN109615645A * 2018-12-07 2019-04-12 国网四川省电力公司电力科学研究院 Vision-based feature point extraction method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Attila Börcs et al., "Object extraction in urban environments from large-scale dynamic point cloud datasets," 2013 11th International Workshop on Content-Based Multimedia Indexing, 2013. *
Wenbing Tao et al., "Robust Point Sets Matching by Fusing Feature and Spatial Information Using Nonuniform Gaussian Mixture Models," IEEE Transactions on Image Processing, Jun. 24, 2015, pp. 3754-3767. *
Wang Tianyang et al., "Optimized evaluation of feature extraction methods based on three measure values," Chinese Journal of Scientific Instrument, Apr. 2010, pp. 898-903. *
Liu Xiaoxiao et al., "C-FAST feature detection and matching algorithm based on image color information," Laser & Optoelectronics Progress, vol. 56, no. 5, Mar. 2019, Section 2.1 paragraph 8, Section 3.1, Figs. 1-2. *
Feng Jun et al., "Partition matching of Terracotta Warriors images based on learned invariant feature transform," Optics and Precision Engineering, vol. 26, no. 7, Jul. 2018, pp. 1774-1783. *
Xu Jiajia et al., "Fast image registration algorithm based on improved Harris-SIFT operator," Journal of Electronic Measurement and Instrumentation, Jan. 2015, pp. 48-54. *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant