CN109299652B - Model training method for image positioning, image positioning method and device - Google Patents


Info

Publication number
CN109299652B
Authority
CN
China
Prior art keywords: feature, matching, points, image, identified
Prior art date
Legal status
Active
Application number
CN201810854417.2A
Other languages
Chinese (zh)
Other versions
CN109299652A (en)
Inventor
刘晓静
Current Assignee
Huaxiao Precision Suzhou Co ltd
Original Assignee
Huaxiao Precision Suzhou Co ltd
Priority date
Filing date
Publication date
Application filed by Huaxiao Precision Suzhou Co ltd filed Critical Huaxiao Precision Suzhou Co ltd
Priority to CN201810854417.2A priority Critical patent/CN109299652B/en
Publication of CN109299652A publication Critical patent/CN109299652A/en
Application granted granted Critical
Publication of CN109299652B publication Critical patent/CN109299652B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Energy or water supply
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Abstract

The invention discloses a model training method for image positioning, an image positioning method, and corresponding devices. The model training method comprises the following steps: extracting SURF feature points of the region to be positioned and the region not to be positioned in a plurality of pre-collected images to respectively form a first visual dictionary and a second visual dictionary; extracting SURF feature points of the region to be positioned in a standard image to form an original feature set; matching the feature points in the original feature set against the feature points in the first visual dictionary and removing the wrongly matched feature points from the original feature set to obtain a standard feature set; and finishing the training of the model. In application, when the feature set of the region to be positioned in an image to be identified is mapped to the standard feature set, i.e., when the image to be identified is mapped to the model, the region to be positioned in the image to be identified can be obtained from the region to be positioned in the model. The model can therefore be used for positioning different devices of the same category without repeated modeling.

Description

Model training method for image positioning, image positioning method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular to a model training method for image positioning, an image positioning method, a model training device for image positioning, an image positioning device, an image positioning system, and a computer-readable storage medium.
Background
As the requirements for intelligent power distribution stations and unattended operation grow, so does the demand for accurate inspection of such stations. Against this background, the State Grid Corporation has introduced intelligent inspection robots to replace manual inspection. The power distribution station inspection robot adopts an indoor rail-mounted mobile platform and carries various sensor acquisition devices; it acquires and analyzes the operating condition parameters of the equipment in the power distribution station and accesses the station's management background system to collect and analyze data, thereby improving the unattended operation of a transformer substation.
To give the intelligent inspection robot visual analysis capability, a visual recognition algorithm is embedded in it so that it can read and recognize the state of running equipment. During recognition, the equipment region to be identified on a device is first located. The existing recognition algorithm locates the equipment region with a one-device-one-model approach: when the model of one piece of equipment is used to recognize another piece of equipment of the same type, the recognition accuracy is poor because the two differ in installation position, installation angle, and so on. Even though two pieces of equipment belong to the same category, two positioning models must be established when they are installed on different devices or at different positions of the same device, so the application range of each positioning model is narrow. Moreover, when there is much equipment, repeatedly modeling equipment of the same type reduces recognition efficiency.
Disclosure of Invention
Therefore, the technical problem to be solved by the present invention is the narrow application range of the positioning model in the existing recognition algorithm; the invention provides a training method for a positioning model that can be applied to different devices of the same category.
To this end, according to a first aspect, the invention provides a model training method for image positioning, comprising the following steps: extracting SURF feature points of the region to be positioned and the region not to be positioned in a plurality of pre-collected images to respectively form a first visual dictionary and a second visual dictionary; extracting SURF feature points of the region to be positioned in a standard image to form an original feature set; matching the feature points in the original feature set against the feature points in the first visual dictionary, and removing the wrongly matched feature points from the original feature set to obtain a standard feature set; and finishing the training of the model.
Optionally, extracting the SURF feature points of the region to be positioned and the region not to be positioned in a plurality of pre-collected images to respectively form a first visual dictionary and a second visual dictionary includes the following steps: extracting SURF feature points of the region to be positioned and the region not to be positioned in a plurality of pre-collected images to respectively form a first feature set and a second feature set; performing clustering calculation on the first feature set and the second feature set to respectively obtain a first cluster set and a second cluster set; and using the first cluster set as the first visual dictionary and the second cluster set as the second visual dictionary.
Optionally, matching the feature points in the original feature set against the feature points in the first visual dictionary and removing the wrongly matched feature points from the original feature set to obtain a standard feature set includes the following steps: calculating the neighbor matching of the feature points in the original feature set to the feature points in the first visual dictionary to form a first initial matching set; removing the wrong matches in the first initial matching set to obtain a first matching set; and taking the set of all feature points in the original feature set that are retained in the first matching set as the standard feature set.
Optionally, after removing the wrong matches in the first initial matching set to obtain the first matching set, the method further includes the following steps: calculating the neighbor matching of the feature points in the first visual dictionary to the feature points in the original feature set to form a second initial matching set; removing the wrong matches in the second initial matching set to obtain a second matching set; removing the asymmetric matches in the first matching set and the second matching set to obtain a correct matching set; and taking the set of all correctly matched feature points in the original feature set as the standard feature set.
According to a second aspect, the present invention provides an image positioning method, comprising the following steps: completing model training using all or part of the method of the first aspect to obtain a standard feature set; extracting SURF feature points of an image to be identified to form a feature set to be identified; and calculating a mapping matrix from the feature set to be identified to the standard feature set and mapping the feature set to be identified to the standard feature set, wherein the region to be positioned in the standard image gives the region to be positioned in the image to be identified.
Optionally, extracting the SURF feature points of the image to be identified to form a feature set to be identified includes the following steps: extracting SURF feature points of the image to be identified; respectively acquiring the SURF feature points in a plurality of image regions of different positions or different sizes in the image to be identified; calculating the ratio of the number of valid feature points to the number of invalid feature points in each image region, where valid feature points are feature points matching feature points in the first visual dictionary and invalid feature points are feature points matching feature points in the second visual dictionary; and acquiring the feature points in the image region with the largest ratio to form the feature set to be identified.
Optionally, calculating a mapping matrix from the feature set to be identified to the standard feature set and mapping the feature set to be identified to the standard feature set includes the following steps: calculating the neighbor matching of the feature points in the feature set to be identified to the feature points in the standard feature set to form a third initial matching set; removing the wrong matches in the third initial matching set to obtain a third matching set; and calculating a mapping matrix from the pruned feature set to be identified to the pruned standard feature set, and mapping the pruned feature set to be identified to the pruned standard feature set.
Optionally, after removing the wrong matches in the third initial matching set to obtain a third matching set, the method further includes the following steps: calculating the neighbor matching of the feature points in the standard feature set to the feature points in the feature set to be identified to form a fourth initial matching set; removing the wrong matches in the fourth initial matching set to obtain a fourth matching set; and removing the asymmetric matches in the third matching set and the fourth matching set.
According to a third aspect, the present invention provides a model training apparatus for image positioning, comprising: a first extraction module, configured to extract SURF feature points of the region to be positioned and the region not to be positioned in a plurality of pre-collected images to respectively form a first visual dictionary and a second visual dictionary; a second extraction module, configured to extract SURF feature points of the region to be positioned in a standard image to form an original feature set; and a first matching module, configured to match the feature points in the original feature set against the feature points in the first visual dictionary and remove the wrongly matched feature points from the original feature set to obtain a standard feature set.
Optionally, the first extraction module comprises: a first extraction unit, configured to extract SURF feature points of the region to be positioned and the region not to be positioned in a plurality of pre-collected images to respectively form a first feature set and a second feature set; and a cluster calculation unit, configured to perform clustering calculation on the first feature set and the second feature set to obtain a first cluster set and a second cluster set, the first cluster set being used as the first visual dictionary and the second cluster set as the second visual dictionary.
Optionally, the first matching module comprises: a first matching unit, configured to calculate the neighbor matching of the feature points in the original feature set to the feature points in the first visual dictionary to form a first initial matching set; and a first removing unit, configured to remove the wrong matches in the first initial matching set to obtain a first matching set, the set of all feature points in the original feature set that are retained in the first matching set being taken as the standard feature set.
Optionally, the first matching module further comprises: a second matching unit, configured to calculate the neighbor matching of the feature points in the first visual dictionary to the feature points in the original feature set to form a second initial matching set; a second removing unit, configured to remove the wrong matches in the second initial matching set to obtain a second matching set; and a third removing unit, configured to remove the asymmetric matches in the first matching set and the second matching set to obtain a correct matching set, the set of all correctly matched feature points in the original feature set being taken as the standard feature set.
According to a fourth aspect, the present invention provides an image positioning apparatus, comprising: a model training module, configured to complete the training of a model using the method of any implementation of the first aspect and obtain a standard feature set; a feature set forming module, configured to extract SURF feature points of the image to be identified and form a feature set to be identified; and a mapping calculation module, configured to calculate a mapping matrix from the feature set to be identified to the standard feature set and map the feature set to be identified to the standard feature set, the region to be positioned in the standard image giving the region to be positioned in the image to be identified.
Optionally, the feature set forming module comprises: a feature extraction unit, configured to extract SURF feature points of the image to be identified; a feature acquisition unit, configured to respectively acquire the SURF feature points in a plurality of image regions of different positions or different sizes in the image to be identified; a ratio calculation unit, configured to calculate the ratio of the number of valid feature points to the number of invalid feature points in each image region, where valid feature points are feature points matching feature points in the first visual dictionary and invalid feature points are feature points matching feature points in the second visual dictionary; and a feature set forming unit, configured to acquire the feature points in the image region with the largest ratio and form the feature set to be identified.
Optionally, the mapping calculation module comprises: a third matching unit, configured to calculate the neighbor matching of the feature points in the feature set to be identified to the feature points in the standard feature set to form a third initial matching set; a fourth removing unit, configured to remove the wrong matches in the third initial matching set to obtain a third matching set; and a mapping calculation unit, configured to calculate a mapping matrix from the pruned feature set to be identified to the pruned standard feature set and map the pruned feature set to be identified to the pruned standard feature set.
Optionally, the mapping calculation module further comprises: a fourth matching unit, configured to calculate the neighbor matching of the feature points in the standard feature set to the feature points in the feature set to be identified to form a fourth initial matching set; a fifth removing unit, configured to remove the wrong matches in the fourth initial matching set to obtain a fourth matching set; and a sixth removing unit, configured to remove the asymmetric matches in the third matching set and the fourth matching set.
According to a fifth aspect, the present invention provides an image positioning system, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method according to any implementation of the first aspect or the method according to any implementation of the second aspect.
According to a sixth aspect, the present invention provides a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, carry out the steps of the method according to any implementation of the first aspect or of the method according to any implementation of the second aspect.
The technical scheme provided by the embodiment of the invention has the following advantages:
1. The invention provides a model training method for image positioning, comprising the following steps: extracting SURF feature points of the region to be positioned and the region not to be positioned in a plurality of pre-collected images to respectively form a first visual dictionary and a second visual dictionary; extracting SURF feature points of the region to be positioned in a standard image to form an original feature set; matching the feature points in the original feature set against the feature points in the first visual dictionary, and removing the wrongly matched feature points from the original feature set to obtain a standard feature set; and finishing the training of the model.
A first visual dictionary is formed by extracting the SURF feature points of the region to be positioned in a plurality of images; according to how the first visual dictionary matches the SURF feature points of the region to be positioned in the standard image, the feature points in the original feature set that are wrongly matched with the first visual dictionary are removed; and a standard feature set composed of feature points that correctly reflect the features of the region to be positioned is finally obtained, completing model training. In practical application, when the feature set of the region to be positioned in an image to be identified is mapped to the standard feature set, i.e., when the image to be identified is mapped to the model, the region to be positioned in the image to be identified can be obtained from the region to be positioned in the model. Positioning can therefore be completed as long as the equipment in the region to be identified belongs to the same category and the regions can be mapped onto each other; the model can be used for positioning different equipment of the same category, each device of the category need not be modeled separately, and the application range of the model is expanded.
2. In the model training method for image positioning provided by the invention, extracting the SURF feature points of the region to be positioned and the region not to be positioned in a plurality of pre-collected images to respectively form a first visual dictionary and a second visual dictionary includes the following steps: extracting SURF feature points of the region to be positioned and the region not to be positioned in a plurality of pre-collected images to respectively form a first feature set and a second feature set; performing clustering calculation on the first feature set and the second feature set to respectively obtain a first cluster set and a second cluster set; and using the first cluster set as the first visual dictionary and the second cluster set as the second visual dictionary. By clustering the first feature set and the second feature set and taking the resulting sets of cluster centers (the first cluster set and the second cluster set) as the first and second visual dictionaries, the number of feature points in the two dictionaries can be greatly reduced, which reduces the amount of calculation during subsequent model training and positioning of the image to be identified.
3. In the model training method for image positioning provided by the invention, after the wrong matches in the first initial matching set are removed and the first matching set is obtained, the method further includes the following steps: calculating the neighbor matching of the feature points in the first visual dictionary to the feature points in the original feature set to form a second initial matching set; removing the wrong matches in the second initial matching set to obtain a second matching set; removing the asymmetric matches in the first matching set and the second matching set to obtain a correct matching set; and taking the set of all correctly matched feature points in the original feature set as the standard feature set. After the first removal of mismatches yields the first matching set, the second and third removals are carried out, so the feature points in the original feature set that are wrongly matched with the first visual dictionary, i.e., the potentially erroneous feature points, can be removed more thoroughly, further optimizing the model.
4. The image positioning method provided by the invention comprises the following steps: completing model training using all or part of the method of the first aspect to obtain a standard feature set; extracting SURF feature points of an image to be identified to form a feature set to be identified; and calculating a mapping matrix from the feature set to be identified to the standard feature set and mapping the feature set to be identified to the standard feature set, the region to be positioned in the standard image giving the region to be positioned in the image to be identified. A model that can position different devices of the same category is used to position the region to be positioned in the image to be identified, so the image to be identified can be an image of any device of the same category as the device in the model; this expands the application range of the image positioning method and improves the efficiency of positioning images of different devices of the same category.
5. In the image positioning method provided by the invention, extracting the SURF feature points of the image to be identified to form a feature set to be identified comprises the following steps: extracting SURF feature points of the image to be identified; respectively acquiring the SURF feature points in a plurality of image regions of different positions or different sizes in the image to be identified; calculating the ratio of the number of valid feature points to the number of invalid feature points in each image region, where valid feature points are feature points matching feature points in the first visual dictionary and invalid feature points are feature points matching feature points in the second visual dictionary; and acquiring the feature points in the image region with the largest ratio to form the feature set to be identified. Forming the feature set to be identified from the image region with the largest proportion of valid feature points reduces the number of feature points in the set, i.e., the amount of calculation of the image positioning method, while still guaranteeing enough feature points in the region to be positioned, so the positioning accuracy of the method is also guaranteed.
6. In the image positioning method provided by the invention, calculating a mapping matrix from the feature set to be identified to the standard feature set and mapping the feature set to be identified to the standard feature set comprises the following steps: calculating the neighbor matching of the feature points in the feature set to be identified to the feature points in the standard feature set to form a third initial matching set; removing the wrong matches in the third initial matching set to obtain a third matching set; and calculating a mapping matrix from the pruned feature set to be identified to the pruned standard feature set, and mapping the pruned feature set to be identified to the pruned standard feature set. By calculating the neighbor matching of the feature points in the feature set to be identified to the feature points in the standard feature set and removing the wrong matches, the potentially erroneous feature points in the feature set to be identified can be removed; this reduces the possibility that erroneous feature points corrupt the mapping result and cause positioning errors, and lessens the impact on positioning accuracy of noise in the image or occlusion of the region to be positioned.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of the model training method for image positioning according to embodiment 1;
FIG. 2 is a flowchart illustrating a specific method of step S30 in FIG. 1;
FIG. 3 is a flowchart of the image positioning method according to embodiment 2;
FIG. 4 is a flowchart illustrating a specific method of step S200 in FIG. 3;
FIG. 5 is a flowchart illustrating a specific method of step S300 in FIG. 3;
FIG. 6 is a schematic structural diagram of the model training apparatus for image positioning provided in embodiment 3;
FIG. 7 is a schematic structural diagram of the image positioning apparatus provided in embodiment 4;
FIG. 8 is a schematic diagram of the hardware structure of the image positioning system provided in embodiment 5.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Example 1
This embodiment provides a model training method for image positioning, as shown in FIG. 1. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that presented herein. The process comprises the following steps:
step S10, extracting SURF characteristic points of the to-be-positioned region and the non-to-be-positioned region in a plurality of pre-collected images, and respectively forming a first visual dictionary and a second visual dictionary. In this embodiment, the SURF feature is a feature with a constant scale, a position and a scale are defined for each detected feature, a scale value can be used to define a window and a size around a feature point, and the window will contain the same visual information regardless of the scale of an object. In this embodiment, the to-be-positioned region refers to a region where the to-be-positioned device is located, and the non-to-be-positioned region refers to any region that does not include the to-be-positioned region in a plurality of pre-collected images.
Step S20, extracting SURF feature points of the region to be positioned in a standard image to form an original feature set. In this embodiment, the standard image refers to a high-quality image in which the region to be positioned is clear, complete, and unoccluded. In a specific embodiment, the region to be positioned and the region not to be positioned are framed manually before SURF feature point extraction; specifically, a region can be framed with the mouse, by entering its coordinates, on a touch screen, or in other ways.
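For illustration only (the following sketch is not part of the original disclosure), the region-restricted SURF extraction of steps S10 and S20 might look as below in Python. It assumes an opencv-contrib build of OpenCV, where SURF is exposed as cv2.xfeatures2d.SURF_create; the (x, y, w, h) region tuple and the Hessian threshold of 400 are illustrative assumptions:

    # Illustrative sketch, not the patent's code: SURF extraction limited to a framed region.
    import cv2
    import numpy as np

    def extract_surf_descriptors(image, region):
        x, y, w, h = region  # hypothetical box framed with the mouse or a touch screen
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        mask = np.zeros(gray.shape, dtype=np.uint8)  # detect only inside the box
        mask[y:y + h, x:x + w] = 255
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # assumed threshold
        keypoints, descriptors = surf.detectAndCompute(gray, mask)
        # One 64-dimensional descriptor row per detected feature point.
        return descriptors if descriptors is not None else np.empty((0, 64), np.float32)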
Step S30, matching the feature points in the original feature set against the feature points in the first visual dictionary, and removing the wrongly matched feature points from the original feature set to obtain a standard feature set. In this embodiment, a nearest-neighbor algorithm (K-NN) is used to match the feature points in the original feature set to the feature points in the first visual dictionary. Specifically, the similarity of feature points is measured by their Euclidean distance: for one current feature point in the original feature set, the Euclidean distances to all feature points in the first visual dictionary are calculated, and the nearest-neighbor and next-nearest-neighbor feature points are selected. If the ratio of the Euclidean distance from the current feature point to the nearest neighbor over the Euclidean distance from the current feature point to the next nearest neighbor is less than or equal to a preset threshold, the current feature point and the nearest neighbor are considered correctly matched; if the ratio is greater than the preset threshold, they are considered wrongly matched. Of course, it is also possible to calculate the Euclidean distances from one feature point in the first visual dictionary to all feature points in the original feature set and judge whether the feature points match correctly by the same criterion.
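The nearest-neighbor matching and ratio test just described can be sketched as follows; this is an illustration under assumptions rather than the patent's code, and the threshold of 0.7 is an assumed stand-in for the "preset threshold":

    # Sketch of K-NN matching with the nearest/next-nearest distance ratio test.
    import numpy as np

    def ratio_test_matches(query_desc, train_desc, threshold=0.7):
        # Returns (query index, train index) pairs judged correctly matched.
        matches = []
        for i, q in enumerate(query_desc):
            dists = np.linalg.norm(train_desc - q, axis=1)  # Euclidean distances
            order = np.argsort(dists)
            nearest, second = order[0], order[1]  # needs at least two train descriptors
            if dists[nearest] <= threshold * dists[second]:
                matches.append((i, int(nearest)))
            # otherwise the pair counts as a wrong match and is discarded
        return matches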
Step S40, finishing the training of the model.
In the model training method for image positioning provided by this embodiment, a first visual dictionary is formed by extracting the SURF feature points of the region to be positioned in a plurality of images; the feature points in the original feature set that are wrongly matched with the first visual dictionary are removed according to the matching between the first visual dictionary and the SURF feature points of the region to be positioned in the standard image; and a standard feature set composed of feature points that correctly reflect the features of the region to be positioned is finally obtained, completing model training. In practical application, when the feature set of the region to be positioned in an image to be identified is mapped to the standard feature set, i.e., when the image to be identified is mapped to the model, the region to be positioned in the image to be identified can be obtained from the region to be positioned in the model. Positioning can therefore be completed as long as the equipment in the region to be identified is of the same category and can be mapped onto the model; the model can be used for positioning different devices of the same category without modeling each device separately, which expands the application range of the model.
In an alternative embodiment, step S10 includes the steps of:
Step S11, extracting SURF feature points of the region to be positioned and the region not to be positioned in a plurality of pre-collected images to respectively form a first feature set and a second feature set.
Step S12, performing clustering calculation on the first feature set and the second feature set to obtain a first cluster set and a second cluster set, respectively. In this embodiment, the first cluster set is used as the first visual dictionary and the second cluster set as the second visual dictionary. First, a number of cluster centers K is set; then the feature sets are clustered with the K-means algorithm to generate a cluster set composed of K cluster centers. Specifically, the numbers of cluster centers for the first feature set and the second feature set are set according to the numbers of image blocks in the region to be positioned and in the region not to be positioned, each cluster center corresponding to an image block. An image block of the region to be positioned refers to an identification area or a mark area in that region, where the identification area may be a reading area, a reading pointer area, a reading knob area, or the like, and the mark area may be an adjustment key area, an equipment name area, an equipment trademark area, or another image area with an obvious boundary to its surroundings; an image block of the region not to be positioned refers to an image area in that region with a clear boundary to its surroundings. In a specific embodiment, it should be noted that one image block may correspond to one or more cluster centers, and the number K of cluster centers may be determined and adjusted by experience in actual use. By clustering the first feature set and the second feature set and taking the resulting sets of cluster centers (the first cluster set and the second cluster set) as the first and second visual dictionaries, the number of feature points in the dictionaries can be greatly reduced, which reduces the amount of calculation during subsequent model training and positioning of the image to be identified.
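A minimal sketch of this dictionary construction, using scikit-learn's KMeans as one concrete K-means implementation (the embodiment does not prescribe a library); the cluster count k is assumed to have been chosen from the number of image blocks as described above:

    # Sketch: a visual dictionary is the set of K cluster centers of a descriptor set.
    import numpy as np
    from sklearn.cluster import KMeans

    def build_visual_dictionary(descriptors, k):
        kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)
        return kmeans.cluster_centers_.astype(np.float32)

    # first_dictionary = build_visual_dictionary(first_feature_set, k_region)
    # second_dictionary = build_visual_dictionary(second_feature_set, k_background)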
In an alternative embodiment, as shown in FIG. 2, step S30 includes the following steps:
step S31, calculating neighboring matches of the feature points in the original feature set to the feature points in the first visual dictionary to form a first initial matching set. In this embodiment, a feature point matching from a feature point in an original feature set to a feature point in a first visual dictionary is calculated by using a neighbor algorithm (K-NN algorithm), specifically, the similarity of the feature points is measured according to the euclidean distances between the feature points, that is, the euclidean distances are calculated for one current feature point in the original feature set and all feature points in the first visual dictionary, and a nearest neighbor feature point and a next nearest neighbor feature point are selected from the current feature point and the nearest neighbor feature point, and the current feature point is matched with the nearest neighbor feature point.
Step S32, removing the wrong matches in the first initial matching set to obtain a first matching set. In this embodiment, the set of all feature points in the original feature set that are retained in the first matching set is taken as the standard feature set. If the ratio of the Euclidean distance from the current feature point to its nearest neighbor over the Euclidean distance from the current feature point to its next nearest neighbor is less than or equal to a preset threshold, the current feature point and the nearest neighbor are considered correctly matched; if the ratio is greater than the preset threshold, they are considered wrongly matched.
In an alternative embodiment, as shown in FIG. 2, the following steps are further performed after step S32:
step S33, calculating neighboring matches of the feature points in the first visual dictionary to the feature points in the original feature set, forming a second initial matching set. In this embodiment, a feature point matching from a feature point in the first visual dictionary to a feature point in the original feature set is calculated using a neighbor algorithm (K-NN algorithm), specifically, the similarity of the feature points is measured according to the euclidean distances between the feature points, that is, the euclidean distances are calculated for one current feature point in the first visual dictionary and all feature points in the original feature set, and a nearest neighbor feature point and a next nearest neighbor feature point are selected from the current feature point and the nearest neighbor feature point, and the current feature point is matched with the nearest neighbor feature point.
Step S34, removing the wrong matches in the second initial matching set to obtain a second matching set. As in step S32, if the ratio of the Euclidean distance from the current feature point to its nearest neighbor over the Euclidean distance to its next nearest neighbor is less than or equal to the preset threshold, the current feature point and the nearest neighbor are considered correctly matched; if the ratio is greater than the preset threshold, they are considered wrongly matched.
Step S35, removing the asymmetric matches in the first matching set and the second matching set to obtain a correct matching set. In this embodiment, the set of all correctly matched feature points in the original feature set is taken as the standard feature set. Since noise points may exist in the pre-collected images and the standard image, and SURF feature point extraction itself has a certain accuracy error, some erroneous feature points may exist among the extracted feature points. Some asymmetric matches may therefore remain in the first and second matching sets even after the wrong matches are removed, and the feature points involved in these asymmetric matches are very likely erroneous. To optimize the model, all asymmetric matches are removed to obtain a correct matching set; the set of all correctly matched feature points in the original feature set is taken as the standard feature set, model training is completed, and the model is further optimized.
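The symmetry check of step S35 can be sketched as below, reusing the (index, index) match pairs produced by the ratio-test sketch above; forward matches run from the original feature set to the first visual dictionary and backward matches run the other way:

    # Sketch: keep only matches that are confirmed in both matching directions.
    def symmetric_matches(forward, backward):
        # forward: (orig_idx, dict_idx) pairs; backward: (dict_idx, orig_idx) pairs.
        reversed_backward = {(o, d) for (d, o) in backward}
        return [pair for pair in forward if pair in reversed_backward]

    # correct_matching_set = symmetric_matches(first_matching_set, second_matching_set)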
Example 2
The present embodiment provides an image positioning method, as shown in FIG. 3. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that presented herein. The process comprises the following steps:
and step S100, finishing model training by using all or part of the method of the embodiment 1 to obtain a standard feature set.
Step S200, extracting SURF feature points of the image to be identified to form a feature set to be identified. In this embodiment, the feature set to be identified includes all SURF feature points in the whole image to be identified.
Step S300, calculating a mapping matrix from the feature set to be identified to the standard feature set, and mapping the feature set to be identified to the standard feature set. In this embodiment, the region to be positioned in the standard image gives the region to be positioned in the image to be identified. Since noise may exist in the image to be identified and SURF feature point extraction has a certain accuracy error, some erroneous feature points that cannot be mapped into the standard feature set inevitably appear among the automatically extracted feature points of the feature set to be identified; therefore, the RANSAC algorithm, which has strong fault tolerance, is used to calculate the mapping matrix. Because mapping the feature set to be identified to the standard feature set maps the image to be identified into the standard image space, the region where the equipment is located in the image to be identified can be obtained from the region where the equipment is located in the standard image.
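As one concrete way to realize this step (the embodiment specifies RANSAC but no particular implementation), the following sketch uses OpenCV's RANSAC-based homography estimation; the matched point arrays, the 3.0 reprojection threshold, and the helper name are assumptions:

    # Sketch: estimate the mapping from the image to be identified to the standard
    # image with RANSAC, then carry the known region corners back the other way.
    import cv2
    import numpy as np

    def locate_region(query_pts, standard_pts, region_corners):
        H, inlier_mask = cv2.findHomography(np.float32(query_pts),
                                            np.float32(standard_pts),
                                            cv2.RANSAC, ransacReprojThreshold=3.0)
        corners = np.float32(region_corners).reshape(-1, 1, 2)
        # The region is known in standard-image space, so apply the inverse mapping.
        return cv2.perspectiveTransform(corners, np.linalg.inv(H))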
In a specific embodiment, after the feature set to be identified is mapped to the standard feature set, that is, after the image to be identified is mapped into the standard image space, the modeling information of the standard image can be used to position the recognition area in the image to be identified still more accurately, so as to read the output value of the region to be positioned; the recognition area can be a reading area, a reading pointer area, a reading knob area, or the like.
In the image positioning method provided by this embodiment, a model that can position different devices of the same category is used to position the region to be positioned in the image to be identified, so the image to be identified can be an image of any device of the same category as the device in the model; this expands the application range of the image positioning method and improves the efficiency of positioning images of different devices of the same category.
In an alternative embodiment, as shown in FIG. 4, step S200 includes the following steps:
step S210, extracting SURF feature points of the image to be recognized. In the present embodiment, SURF feature points are extracted for the entire image to be recognized.
Step S220, respectively acquiring the SURF feature points in a plurality of image regions of different positions or different sizes in the image to be identified. In this embodiment, several sliding windows of different scales slide over the image to be identified at fixed intervals; each window forms an image region at each position, the SURF feature points in each image region are obtained, and a corresponding feature set is formed. Specifically, if N image regions are formed, N feature sets G1, G2, G3, …, GN are formed correspondingly.
In step S230, the ratio of the number of valid feature points to the number of invalid feature points in each image region is calculated. In this embodiment, the valid feature points refer to feature points that match feature points in the first visual dictionary, and the invalid feature points refer to feature points that match feature points in the second visual dictionary.
In a specific embodiment, step S230 includes the following steps. Step S231, calculating the neighbor matching between the feature points in a current feature set and the feature points in the first visual dictionary, and recording the number of successfully matched valid feature points, where the current feature set is any one of G1, G2, G3, …, GN. Step S232, calculating the neighbor matching between the feature points in the current feature set and the feature points in the second visual dictionary, and recording the number of successfully matched invalid feature points. Step S233, calculating the ratio of the number of valid feature points to the number of invalid feature points in the current feature set. Steps S231 to S233 are repeated until this ratio has been calculated for all feature sets G1, G2, G3, …, GN.
Step S240, acquiring the feature points in the image region with the largest ratio to form the feature set to be identified. Forming the feature set to be identified from the image region with the largest proportion of valid feature points reduces the number of feature points in the set, i.e., the amount of calculation of the image positioning method of this embodiment, while still guaranteeing the number of feature points in the region to be positioned, so the positioning accuracy of the method is also guaranteed.
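Steps S220 through S240 can be sketched as the window-scoring loop below; count_matches is a hypothetical helper (for example, the ratio-test sketch above, returning how many of a window's descriptors match a dictionary):

    # Sketch: keep the sliding window with the largest valid/invalid feature-point ratio.
    def select_best_window(window_descriptor_sets, first_dict, second_dict, count_matches):
        best_ratio, best_set = -1.0, None
        for desc in window_descriptor_sets:              # one descriptor set G_i per window
            valid = count_matches(desc, first_dict)      # matches to the first dictionary
            invalid = count_matches(desc, second_dict)   # matches to the second dictionary
            ratio = valid / max(invalid, 1)              # guard against division by zero
            if ratio > best_ratio:
                best_ratio, best_set = ratio, desc
        return best_set  # its feature points form the feature set to be identified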
In an alternative embodiment, as shown in FIG. 5, step S300 includes the following steps:
step S310, calculating the neighbor matching of the feature points in the feature set to be identified to the feature points in the standard feature set to form a third initial matching set. In this embodiment, a neighbor algorithm (K-NN algorithm) is used to calculate the matching between the feature points in the feature set to be identified and the feature points in the standard feature set, and a specific calculation method is the same as the neighbor matching calculation method provided in step S31 and step S33 in embodiment 1, and is not described herein again.
Step S320, removing the wrong matches in the third initial matching set to obtain a third matching set. In this embodiment, a wrong match is judged in the same way as in steps S32 and S34 of embodiment 1, which is not repeated here.
Step S330, calculating a mapping matrix from the pruned feature set to be identified to the pruned standard feature set, and mapping the pruned feature set to be identified to the pruned standard feature set. By calculating the neighbor matching of the feature points in the feature set to be identified to the feature points in the standard feature set and removing the wrong matches, the potentially erroneous feature points in the feature set to be identified can be removed; this reduces the possibility that erroneous feature points corrupt the mapping result and cause positioning errors, and lessens the impact on positioning accuracy of noise in the image or occlusion of the region to be positioned.
In an alternative embodiment, as shown in FIG. 5, the following steps are further performed after step S320:
step S340, calculating the neighbor matching from the feature points in the standard feature set to the feature points in the feature set to be identified, and forming a fourth initial matching set.
Step S350, removing the wrong matches in the fourth initial matching set to obtain a fourth matching set.
Step S360, removing the asymmetric matches in the third matching set and the fourth matching set. By removing the wrong matches from the fourth initial matching set and the asymmetric matches between the third and fourth matching sets, the erroneous feature points in the feature set to be identified are further removed; this reduces their influence on positioning accuracy and further improves the positioning accuracy of the image positioning method of this embodiment.
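Putting the sketches above together, the localization flow of this embodiment might be composed roughly as follows; this composition is an assumption for illustration, with query_pts and standard_pts holding the (x, y) coordinates that correspond row-for-row to the descriptor arrays:

    # Assumed end-to-end composition of the earlier sketches, not the patent's code.
    def locate(query_desc, query_pts, standard_desc, standard_pts, region_corners):
        forward = ratio_test_matches(query_desc, standard_desc)   # steps S310/S320
        backward = ratio_test_matches(standard_desc, query_desc)  # steps S340/S350
        kept = symmetric_matches(forward, backward)               # step S360
        src = [query_pts[i] for (i, j) in kept]
        dst = [standard_pts[j] for (i, j) in kept]
        return locate_region(src, dst, region_corners)            # step S330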
Example 3
In this embodiment, a model training apparatus for image positioning is provided; the apparatus is used to implement the foregoing embodiment 1 and its preferred embodiments, and what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the embodiments below is preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The present embodiment provides a model training apparatus for image positioning, as shown in FIG. 6, comprising: a first extraction module 10, a second extraction module 20 and a first matching module 30. Wherein:
The first extraction module 10 is configured to extract SURF feature points of the region to be positioned and the region not to be positioned in a plurality of pre-collected images to respectively form a first visual dictionary and a second visual dictionary; the second extraction module 20 is configured to extract SURF feature points of the region to be positioned in a standard image to form an original feature set; and the first matching module 30 is configured to match the feature points in the original feature set against the feature points in the first visual dictionary and remove the wrongly matched feature points from the original feature set to obtain a standard feature set.
In an alternative embodiment, the first extraction module 10 comprises a first extraction unit and a cluster calculation unit. The first extraction unit is used for extracting SURF feature points of the region to be positioned and the region not to be positioned in a plurality of pre-collected images to respectively form a first feature set and a second feature set; the cluster calculation unit is used for performing clustering calculation on the first feature set and the second feature set to obtain a first cluster set and a second cluster set; the first cluster set is used as the first visual dictionary and the second cluster set as the second visual dictionary.
In an alternative embodiment, the first matching module 30 comprises a first matching unit and a first removing unit. The first matching unit is used for calculating the neighbor matching of the feature points in the original feature set to the feature points in the first visual dictionary to form a first initial matching set; the first removing unit is used for removing the wrong matches in the first initial matching set to obtain a first matching set; and the set of all feature points in the original feature set that are retained in the first matching set is taken as the standard feature set.
In an alternative embodiment, the first matching module 30 further comprises a second matching unit, a second removing unit and a third removing unit. The second matching unit is used for calculating the neighbor matching of the feature points in the first visual dictionary to the feature points in the original feature set to form a second initial matching set; the second removing unit is used for removing the wrong matches in the second initial matching set to obtain a second matching set; the third removing unit is used for removing the asymmetric matches in the first matching set and the second matching set to obtain a correct matching set; and the set of all correctly matched feature points in the original feature set is taken as the standard feature set.
Example 4
In this embodiment, an image positioning apparatus is provided, which is used to implement the above embodiment 2 and the preferred embodiment thereof, and the description thereof is omitted for brevity. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
The present embodiment provides an image positioning apparatus. As shown in FIG. 7, the apparatus includes a model training module 100, a feature set formation module 200 and a mapping calculation module 300.
The model training module 100 is configured to complete the training of the model using the method of embodiment 1, obtaining a standard feature set. The feature set formation module 200 is configured to extract SURF feature points of an image to be identified, forming a feature set to be identified. The mapping calculation module 300 is configured to calculate a mapping matrix from the feature set to be identified to the standard feature set and to map the feature set to be identified onto the standard feature set; the region to be positioned in the standard image is thereby the region to be positioned in the image to be identified.
In an alternative embodiment, the feature set formation module 200 includes a feature extraction unit, a feature acquisition unit, a ratio calculation unit and a feature set forming unit.
The feature extraction unit is configured to extract SURF feature points of the image to be identified. The feature acquisition unit is configured to collect the SURF feature points falling in each of a plurality of image regions of different positions or different sizes within the image to be identified. The ratio calculation unit is configured to calculate, for each image region, the ratio of the number of valid feature points to the number of invalid feature points; valid feature points are those matching feature points in the first visual dictionary, and invalid feature points are those matching feature points in the second visual dictionary. The feature set forming unit is configured to take the feature points in the image region with the largest ratio, forming the feature set to be identified.
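A minimal sketch of this region-selection step, reusing the hypothetical one_way_matches helper from above: for each candidate window it counts feature points matching the first dictionary (valid) and the second dictionary (invalid) and scores the window by their ratio. The division-by-zero guard is an assumption the patent does not spell out.

def region_score(keypoints, descriptors, region, first_dict, second_dict):
    # region is an (x, y, w, h) window within the image to be identified.
    x, y, w, h = region
    inside = [i for i, kp in enumerate(keypoints)
              if x <= kp.pt[0] < x + w and y <= kp.pt[1] < y + h]
    sub_desc = descriptors[inside]
    valid = len(one_way_matches(sub_desc, first_dict))     # match first dictionary
    invalid = len(one_way_matches(sub_desc, second_dict))  # match second dictionary
    return valid / max(invalid, 1)  # guard: patent does not define the zero case

# The feature set to be identified comes from the best-scoring window:
# best = max(candidate_regions,
#            key=lambda r: region_score(kps, desc, r, first_dict, second_dict))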
In an alternative embodiment, the mapping calculation module 300 includes a third matching unit, a fourth removing unit and a mapping calculation unit. The third matching unit is configured to compute nearest-neighbor matches from feature points in the feature set to be identified to feature points in the standard feature set, forming a third initial matching set. The fourth removing unit is configured to remove erroneous matches from the third initial matching set, obtaining a third matching set. The mapping calculation unit is configured to calculate a mapping matrix from the feature set to be identified after removal to the standard feature set after removal, and to map the former onto the latter.
In an alternative embodiment, the mapping calculation module 300 further includes a fourth matching unit, a fifth removing unit and a sixth removing unit. The fourth matching unit is configured to compute nearest-neighbor matches from feature points in the standard feature set to feature points in the feature set to be identified, forming a fourth initial matching set. The fifth removing unit is configured to remove erroneous matches from the fourth initial matching set, obtaining a fourth matching set. The sixth removing unit is configured to remove asymmetric matches between the third matching set and the fourth matching set. A sketch of the mapping-matrix computation follows.
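The patent does not name the form of the mapping matrix; for a planar region, a homography estimated with RANSAC is one natural reading. Under that assumption, the sketch below estimates the matrix from the symmetric matches produced by symmetric_matches and transfers points with cv2.perspectiveTransform.

import numpy as np
import cv2

def map_to_standard(query_kps, standard_kps, matches):
    # matches: {query_index: standard_index}; at least four pairs needed.
    src = np.float32([query_kps[q].pt for q in matches]).reshape(-1, 1, 2)
    dst = np.float32([standard_kps[s].pt for s in matches.values()]).reshape(-1, 1, 2)
    # RANSAC discards any remaining outlier matches while fitting.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inlier_mask

# H maps coordinates of the image to be identified onto the standard image;
# the region outline transfers back via the inverse homography:
# mapped = cv2.perspectiveTransform(corners.reshape(-1, 1, 2), np.linalg.inv(H))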
Example 5
An embodiment of the present invention further provides an image positioning system. As shown in FIG. 8, the image positioning system may include at least one processor 801, such as a CPU (Central Processing Unit), at least one communication interface 803, a memory 804 and at least one communication bus 802, where the communication bus 802 is used to enable connection and communication between these components. The communication interface 803 may include a display and a keyboard, and may optionally also include a standard wired interface and a standard wireless interface. The memory 804 may be a high-speed RAM (Random Access Memory) or a non-volatile memory, such as at least one disk memory. The memory 804 may optionally also be at least one storage device located remotely from the aforementioned processor 801. The memory 804 stores an application program, and the processor 801 calls the program code stored in the memory 804 to execute any of the method steps in embodiment 1 or any of the method steps in embodiment 2, i.e., to perform the following operations:
extracting SURF feature points of regions to be positioned and regions not to be positioned in a plurality of pre-collected images, forming a first visual dictionary and a second visual dictionary respectively; extracting SURF feature points of the region to be positioned in a standard image, forming an original feature set; matching feature points in the original feature set against feature points in the first visual dictionary and removing wrongly matched feature points from the original feature set, obtaining a standard feature set; and completing the training of the model.
In the embodiment of the present invention, the processor 801 invokes the program code in the memory 804 and is further configured to perform the following operations: extracting SURF feature points of regions to be positioned and regions not to be positioned in a plurality of pre-collected images, forming a first feature set and a second feature set respectively; performing clustering on the first feature set and the second feature set, obtaining a first cluster set and a second cluster set respectively; the first cluster set is used as the first visual dictionary and the second cluster set is used as the second visual dictionary.

In the embodiment of the present invention, the processor 801 invokes the program code in the memory 804 and is further configured to perform the following operations: computing nearest-neighbor matches from feature points in the original feature set to feature points in the first visual dictionary, forming a first initial matching set; removing erroneous matches from the first initial matching set, obtaining a first matching set; and taking the set of all feature points of the original feature set retained in the first matching set as the standard feature set.

In the embodiment of the present invention, the processor 801 invokes the program code in the memory 804 and is further configured to perform the following operations: computing nearest-neighbor matches from feature points in the first visual dictionary to feature points in the original feature set, forming a second initial matching set; removing erroneous matches from the second initial matching set, obtaining a second matching set; removing asymmetric matches between the first matching set and the second matching set, obtaining a correct matching set; and taking the set of all correctly matched feature points of the original feature set as the standard feature set.
In the embodiment of the present invention, the processor 801 invokes the program code in the memory 804 and is further configured to perform the following operations: completing model training using all or part of the method described in embodiment 1, obtaining a standard feature set; extracting SURF feature points of an image to be identified, forming a feature set to be identified; calculating a mapping matrix from the feature set to be identified to the standard feature set, and mapping the feature set to be identified onto the standard feature set; the region to be positioned in the standard image is the region to be positioned in the image to be identified.

In the embodiment of the present invention, the processor 801 invokes the program code in the memory 804 and is further configured to perform the following operations: extracting SURF feature points of the image to be identified; collecting the SURF feature points falling in each of a plurality of image regions of different positions or different sizes within the image to be identified; calculating, for each image region, the ratio of the number of valid feature points to the number of invalid feature points, where valid feature points are those matching feature points in the first visual dictionary and invalid feature points are those matching feature points in the second visual dictionary; and taking the feature points in the image region with the largest ratio, forming the feature set to be identified.

In the embodiment of the present invention, the processor 801 invokes the program code in the memory 804 and is further configured to perform the following operations: computing nearest-neighbor matches from feature points in the feature set to be identified to feature points in the standard feature set, forming a third initial matching set; removing erroneous matches from the third initial matching set, obtaining a third matching set; and calculating a mapping matrix from the feature set to be identified after removal to the standard feature set after removal, and mapping the former onto the latter.

In the embodiment of the present invention, the processor 801 invokes the program code in the memory 804 and is further configured to perform the following operations: computing nearest-neighbor matches from feature points in the standard feature set to feature points in the feature set to be identified, forming a fourth initial matching set; removing erroneous matches from the fourth initial matching set, obtaining a fourth matching set; and removing asymmetric matches between the third matching set and the fourth matching set.
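Read together, the operations above compose into the following end-to-end flow. This is a sketch under the same assumptions as the earlier snippets and reuses their hypothetical helpers (build_visual_dictionary, symmetric_matches, region_score, map_to_standard); none of the variable names come from the patent.

# Training phase: build both dictionaries, then prune the standard image's
# feature points by symmetric matching against the first dictionary.
first_dict = build_visual_dictionary(target_region_descriptors)
second_dict = build_visual_dictionary(non_target_region_descriptors)
kept = symmetric_matches(original_desc, first_dict)  # original set -> dictionary
standard_kps = [original_kps[i] for i in kept]
standard_desc = original_desc[list(kept)]

# Positioning phase: choose the window with the best valid/invalid ratio,
# then estimate the mapping matrix onto the standard feature set.
x, y, w, h = max(candidate_regions,
                 key=lambda r: region_score(query_kps, query_desc, r,
                                            first_dict, second_dict))
inside = [i for i, kp in enumerate(query_kps)
          if x <= kp.pt[0] < x + w and y <= kp.pt[1] < y + h]
matches = symmetric_matches(query_desc[inside], standard_desc)
H, _ = map_to_standard([query_kps[i] for i in inside], standard_kps, matches)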
The communication bus 802 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, and may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one line is shown in FIG. 8, but this does not mean that there is only one bus or one type of bus.
The memory 804 may include a volatile memory, such as a random-access memory (RAM); it may also include a non-volatile memory, such as a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the memory 804 may also comprise a combination of the above types of memory.
The processor 801 may be a Central Processing Unit (CPU), a Network Processor (NP), or a combination of a CPU and an NP.
The processor 801 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
Example 6
An embodiment of the present invention further provides a non-transitory computer storage medium storing computer-executable instructions that can execute any of the method steps in embodiment 1 or any of the method steps in embodiment 2. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), a solid-state drive (SSD), or the like; the storage medium may also comprise a combination of the above types of memory.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (10)

1. A model training method for image positioning, comprising the steps of:
extracting SURF feature points of regions to be positioned and regions not to be positioned in a plurality of pre-collected images, to form a first visual dictionary and a second visual dictionary respectively;
extracting SURF feature points of a region to be positioned in a standard image, to form an original feature set;
matching the feature points in the original feature set against the feature points in the first visual dictionary, and removing wrongly matched feature points from the original feature set, to obtain a standard feature set;
wherein matching the feature points in the original feature set against the feature points in the first visual dictionary and removing wrongly matched feature points from the original feature set to obtain the standard feature set comprises the steps of:
computing nearest-neighbor matches from the feature points in the original feature set to the feature points in the first visual dictionary, to form a first initial matching set;
removing erroneous matches from the first initial matching set, to obtain a first matching set; taking the set of all feature points of the original feature set retained in the first matching set as the standard feature set;
computing nearest-neighbor matches from the feature points in the first visual dictionary to the feature points in the original feature set, to form a second initial matching set;
removing erroneous matches from the second initial matching set, to obtain a second matching set;
removing asymmetric matches between the first matching set and the second matching set, to obtain a correct matching set; taking the set of all correctly matched feature points of the original feature set as the standard feature set;
and completing the training of the model.
2. The model training method for image positioning according to claim 1, wherein extracting the SURF feature points of the regions to be positioned and the regions not to be positioned in the plurality of pre-collected images to form the first visual dictionary and the second visual dictionary respectively comprises the steps of:
extracting SURF feature points of the regions to be positioned and the regions not to be positioned in the plurality of pre-collected images, to form a first feature set and a second feature set respectively;
performing clustering on the first feature set and the second feature set, to obtain a first cluster set and a second cluster set respectively; the first cluster set being used as the first visual dictionary and the second cluster set being used as the second visual dictionary.
3. An image positioning method, comprising the steps of:
completing model training using the method of any one of claims 1-2, to obtain a standard feature set;
extracting SURF feature points of an image to be identified, to form a feature set to be identified;
calculating a mapping matrix from the feature set to be identified to the standard feature set, and mapping the feature set to be identified onto the standard feature set; the region to be positioned in the standard image being the region to be positioned in the image to be identified;
wherein calculating the mapping matrix from the feature set to be identified to the standard feature set and mapping the feature set to be identified onto the standard feature set comprises the steps of:
computing nearest-neighbor matches from the feature points in the feature set to be identified to the feature points in the standard feature set, to form a third initial matching set;
removing erroneous matches from the third initial matching set, to obtain a third matching set;
computing nearest-neighbor matches from the feature points in the standard feature set to the feature points in the feature set to be identified, to form a fourth initial matching set;
removing erroneous matches from the fourth initial matching set, to obtain a fourth matching set;
removing asymmetric matches between the third matching set and the fourth matching set;
and calculating a mapping matrix from the feature set to be identified after removal to the standard feature set after removal, and mapping the former onto the latter.
4. The image positioning method according to claim 3, wherein extracting the SURF feature points of the image to be identified to form the feature set to be identified comprises the steps of:
extracting SURF feature points of the image to be identified;
collecting the SURF feature points falling in each of a plurality of image regions of different positions or different sizes within the image to be identified;
calculating, for each image region, the ratio of the number of valid feature points to the number of invalid feature points; the valid feature points being feature points that match feature points in the first visual dictionary, and the invalid feature points being feature points that match feature points in the second visual dictionary;
and taking the feature points in the image region with the largest ratio, to form the feature set to be identified.
5. A model training apparatus for image positioning, comprising:
a first extraction module, configured to extract SURF feature points of regions to be positioned and regions not to be positioned in a plurality of pre-collected images, to form a first visual dictionary and a second visual dictionary respectively;
a second extraction module, configured to extract SURF feature points of a region to be positioned in a standard image, to form an original feature set;
a first matching module, configured to match the feature points in the original feature set against the feature points in the first visual dictionary, and to remove wrongly matched feature points from the original feature set, to obtain a standard feature set;
wherein the first matching module includes:
a first matching unit, configured to compute nearest-neighbor matches from the feature points in the original feature set to the feature points in the first visual dictionary, to form a first initial matching set;
a first removing unit, configured to remove erroneous matches from the first initial matching set, to obtain a first matching set; the set of feature points of the original feature set retained in the first matching set being taken as the standard feature set;
a second matching unit, configured to compute nearest-neighbor matches from the feature points in the first visual dictionary to the feature points in the original feature set, to form a second initial matching set;
a second removing unit, configured to remove erroneous matches from the second initial matching set, to obtain a second matching set;
a third removing unit, configured to remove asymmetric matches between the first matching set and the second matching set, to obtain a correct matching set; the set of all correctly matched feature points of the original feature set being taken as the standard feature set.
6. The model training apparatus for image positioning according to claim 5, wherein the first extraction module comprises:
a first extraction unit, configured to extract SURF feature points of the regions to be positioned and the regions not to be positioned in the plurality of pre-collected images, to form a first feature set and a second feature set respectively;
a cluster calculation unit, configured to perform clustering on the first feature set and the second feature set, to obtain a first cluster set and a second cluster set; the first cluster set being used as the first visual dictionary and the second cluster set being used as the second visual dictionary.
7. An image positioning apparatus, comprising:
a model training module, configured to complete the training of the model using the method of any one of claims 1-2, to obtain a standard feature set;
a feature set formation module, configured to extract SURF feature points of an image to be identified, to form a feature set to be identified;
a mapping calculation module, configured to calculate a mapping matrix from the feature set to be identified to the standard feature set, and to map the feature set to be identified onto the standard feature set; the region to be positioned in the standard image being the region to be positioned in the image to be identified;
wherein the mapping calculation module includes:
a third matching unit, configured to compute nearest-neighbor matches from the feature points in the feature set to be identified to the feature points in the standard feature set, to form a third initial matching set;
a fourth removing unit, configured to remove erroneous matches from the third initial matching set, to obtain a third matching set;
a mapping calculation unit, configured to calculate a mapping matrix from the feature set to be identified after removal to the standard feature set after removal, and to map the former onto the latter;
a fourth matching unit, configured to compute nearest-neighbor matches from the feature points in the standard feature set to the feature points in the feature set to be identified, to form a fourth initial matching set;
a fifth removing unit, configured to remove erroneous matches from the fourth initial matching set, to obtain a fourth matching set;
a sixth removing unit, configured to remove asymmetric matches between the third matching set and the fourth matching set.
8. The image positioning apparatus according to claim 7, wherein the feature set formation module comprises:
a feature extraction unit, configured to extract SURF feature points of the image to be identified;
a feature acquisition unit, configured to collect the SURF feature points falling in each of a plurality of image regions of different positions or different sizes within the image to be identified;
a ratio calculation unit, configured to calculate, for each image region, the ratio of the number of valid feature points to the number of invalid feature points; the valid feature points being feature points that match feature points in the first visual dictionary, and the invalid feature points being feature points that match feature points in the second visual dictionary;
a feature set forming unit, configured to take the feature points in the image region with the largest ratio, to form the feature set to be identified.
9. An image positioning system, comprising a control center and a plurality of terminals connected to the control center, characterized by further comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of any one of claims 1-2 or the method of any one of claims 3-4.
10. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, perform the steps of the method of any one of claims 1-2 or the steps of the method of any one of claims 3-4.
CN201810854417.2A 2018-07-30 2018-07-30 Model training method for image positioning, image positioning method and device Active CN109299652B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810854417.2A CN109299652B (en) 2018-07-30 2018-07-30 Model training method for image positioning, image positioning method and device

Publications (2)

Publication Number Publication Date
CN109299652A CN109299652A (en) 2019-02-01
CN109299652B true CN109299652B (en) 2020-11-13

Family

ID=65172712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810854417.2A Active CN109299652B (en) 2018-07-30 2018-07-30 Model training method for image positioning, image positioning method and device

Country Status (1)

Country Link
CN (1) CN109299652B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113984072B (en) * 2021-10-28 2024-05-17 阿波罗智能技术(北京)有限公司 Vehicle positioning method, device, equipment, storage medium and automatic driving vehicle

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577409A (en) * 2012-07-19 2014-02-12 阿里巴巴集团控股有限公司 Method and device for establishing image indexes in image searches
CN104156413A (en) * 2014-07-30 2014-11-19 中国科学院自动化研究所 Trademark density based personalized trademark matching recognition method
US10007863B1 (en) * 2015-06-05 2018-06-26 Gracenote, Inc. Logo recognition in images and videos
CN105654122A (en) * 2015-12-28 2016-06-08 江南大学 Spatial pyramid object identification method based on kernel function matching
CN105956074A (en) * 2016-04-28 2016-09-21 北京航空航天大学 Single image scene six-degree-of-freedom positioning method of adjacent pose fusion guidance
CN108255858A (en) * 2016-12-29 2018-07-06 北京优朋普乐科技有限公司 A kind of image search method and system
CN107180422A (en) * 2017-04-02 2017-09-19 南京汇川图像视觉技术有限公司 A kind of labeling damage testing method based on bag of words feature

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Logo Image Matching Technology in Images; Wang Zhuoyu; China Master's Theses Full-text Database, Information Science and Technology; 2016-03-15; Chapters 2-5 *
Research on Object Recognition Methods Based on the Bag-of-Words Model; Qi Mei; China Master's Theses Full-text Database, Information Science and Technology; 2016-06-15; Chapters 2-3 *

Also Published As

Publication number Publication date
CN109299652A (en) 2019-02-01

Similar Documents

Publication Publication Date Title
CN108229509B (en) Method and device for identifying object class and electronic equipment
CN110020592B (en) Object detection model training method, device, computer equipment and storage medium
US11455805B2 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN111435446A (en) License plate identification method and device based on L eNet
CN113657202B (en) Component identification method, training set construction method, device, equipment and storage medium
CN116168351B (en) Inspection method and device for power equipment
CN112613471B (en) Face living body detection method, device and computer readable storage medium
CN111444911B (en) Training method and device of license plate recognition model and license plate recognition method and device
CN114820679B (en) Image labeling method and device electronic device and storage medium
CN113240623A (en) Pavement disease detection method and device
CN110222704B (en) Weak supervision target detection method and device
CN111783561A (en) Picture examination result correction method, electronic equipment and related products
CN113505261B (en) Data labeling method and device and data labeling model training method and device
CN109299652B (en) Model training method for image positioning, image positioning method and device
CN112966687B (en) Image segmentation model training method and device and communication equipment
CN112132892A (en) Target position marking method, device and equipment
CN112818946A (en) Training of age identification model, age identification method and device and electronic equipment
CN116721396A (en) Lane line detection method, device and storage medium
CN115063739B (en) Abnormal behavior detection method, device, equipment and computer storage medium
CN116246161A (en) Method and device for identifying target fine type of remote sensing image under guidance of domain knowledge
CN111753723B (en) Fingerprint identification method and device based on density calibration
CN113269678A (en) Fault point positioning method for contact network transmission line
CN110837757A (en) Face proportion calculation method, system, equipment and storage medium
CN113516161B (en) Risk early warning method for tunnel constructors
CN112784632B (en) Method and device for detecting potential safety hazards of power transmission line

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant