CN112991448B - Loop detection method, device and storage medium based on color histogram

Loop detection method, device and storage medium based on color histogram

Info

Publication number: CN112991448B (application CN202110300183.9A)
Authority: CN (China)
Prior art keywords: color, image, loop, color histogram, HSV
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN112991448A
Inventors: 周方华, 魏武, 韩进, 林光杰
Current Assignee: South China University of Technology (SCUT)
Original Assignee: South China University of Technology (SCUT)
Priority date / Filing date: 2021-03-22
Application filed by South China University of Technology (SCUT); priority to CN202110300183.9A
Publication of CN112991448A (application), then grant and publication of CN112991448B

Classifications

    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06F 18/23213: Non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions, with a fixed number of clusters, e.g. K-means clustering
    • G06T 7/13: Edge detection
    • G06T 7/90: Determination of colour characteristics
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/10024: Color image
    • G06T 2207/20112: Image segmentation details
    • G06T 2207/20164: Salient point detection; corner detection


Abstract

The application discloses a loop detection method, device and storage medium based on a color histogram. The method comprises the following steps: converting the RGB image of a key frame into a gray image and an HSV color image; extracting ORB feature points and LSD line features from the images; non-uniformly quantizing the HSV color space and calculating a color histogram; selecting color-similar images according to the main color vector of the color histogram; normalizing the color histogram and selecting a candidate image set C according to the Bhattacharyya coefficient; detecting candidate loops according to the bag-of-words model; and performing temporal-consistency and spatial-consistency verification on the candidate loops, selecting the real loops, performing loop correction and eliminating the accumulated error. The application applies the color histogram, a global color feature of the image, to loop detection, which provides richer image information for the loop detection algorithm, improves the loop detection accuracy and the running efficiency of the algorithm, and can be widely applied in the field of mobile robot visual positioning and navigation.

Description

Loop detection method, device and storage medium based on color histogram
Technical Field
The application relates to the field of mobile robot vision positioning and navigation, in particular to a loop detection method, a loop detection device and a storage medium based on a color histogram.
Background
Thanks to progress in computer and sensor technology, mobile robot technology has developed at an unprecedented pace, and its application scenarios have gradually expanded from military to commercial and civilian use; the huge market potential has prompted many technology companies to invest in mobile robot research. In the field of mobile robots, positioning and navigation are among the most important basic technologies. Autonomous navigation robots that use vision or laser sensors estimate the robot pose incrementally, i.e. the pose at a given moment is estimated from the pose at the previous moment, so errors inevitably accumulate and cannot be eliminated at the source, which makes the positioning information inaccurate over long periods of operation.
Loop closure detection largely solves this problem. Its main aim is to detect, from the information acquired by the sensors, positions that the robot passes through repeatedly (i.e. to detect closed routes in the robot's trajectory) and to eliminate the accumulated error by comparing the positioning information at the repeated positions. Because loop detection can remove a large part of the accumulated error, it greatly improves the positioning accuracy of the robot; it is therefore very widely used and is almost a mandatory module of a robot positioning and navigation system.
The current mainstream loop detection algorithm is the Bag-of-Words model (BoW), which clusters the descriptors of feature points (SIFT, SURF, ORB, etc.) in an image into words, describes a frame with a bag-of-words vector composed of a number of words, and then measures the similarity between images by the difference between their bag-of-words vectors in order to find possible loops. The bag-of-words model uses only the feature points in the image and discards other information, so it does not fully exploit rich cues such as the colors in the image; moreover, it must compute a word vector for every key frame, which increases the computational load and lowers the running efficiency.
Disclosure of Invention
In order to solve at least one of the technical problems existing in the prior art to a certain extent, the application aims to provide a loop detection method, a loop detection device and a storage medium based on a color histogram.
The technical scheme adopted by the application is as follows:
a loop detection method based on a color histogram comprises the following steps:
acquiring a color image of a key frame, and performing color processing on the acquired color image;
the color processing includes:
converting the color image into a gray scale image and an HSV image;
ORB characteristic points and LSD line characteristics are extracted according to the gray level diagram and the HSV image;
carrying out non-uniform quantization on an HSV color space of the HSV image, and obtaining a quantized color histogram according to the HSV color image;
constructing an image main color vector according to the color histogram, and acquiring an image belonging to the same class as the current key frame according to the image main color vector;
calculating a Bhattacharyya coefficient between the color histogram of the current key frame and the color histograms of the images of the same class as the current key frame, and constructing a loop candidate image set according to the Bhattacharyya coefficient;
if the loop candidate image set is empty, returning to acquire a color image of the next key frame, and performing color processing;
if the loop candidate image set is non-empty, calculating the bag-of-words vectors of all color images in the loop candidate image set, calculating the similarity between the bag-of-words vector of the current key frame and the bag-of-words vectors of the other image frames, and determining that a loop exists if the similarity exceeds a preset threshold;
and acquiring image frames according to the similarity, performing time consistency and space consistency detection verification on the acquired image frames, and if the verification is passed, judging that loop exists and performing loop correction.
Further, the non-uniformly quantizing the HSV color space of the HSV image, and obtaining the quantized color histogram according to the HSV color image, includes:
dividing an HSV color space of the HSV image into a plurality of color intervals, wherein each color interval corresponds to an interval bin in a color histogram,
and calculating the number of pixels with colors in each bin interval according to the HSV color map, and obtaining a one-dimensional color histogram.
Further, the one-dimensional color histogram is obtained by:
dividing an H component of an HSV color space into 16 sections, and dividing an S component and a V component into 4 sections respectively;
the H component, the S component and the V component are weighted and combined to form a one-dimensional color vector:
G = Q_S·Q_V·H + Q_V·S + V
wherein Q_S and Q_V are the quantization levels of the saturation S and brightness V components, respectively, and the one-dimensional color vector G takes values in [0, 1, ..., 255];
A one-dimensional color histogram of the HSV image is obtained by calculating a one-dimensional color vector G.
Further, the constructing an image main color vector according to the color histogram, and obtaining an image of the same class as the current key frame according to the image main color vector includes:
selecting the three color values with the largest proportion in the color histogram as the main colors of the image, wherein each main color is represented by an 8-bit binary vector v_i, and the main colors are sequentially combined into a 24-bit binary primary color vector P:
P = [v_1, v_2, v_3]
and acquiring the image belonging to the same class as the current key frame by adopting a K-means algorithm according to the image main color vector P.
Further, the Bhattacharyya coefficient is calculated as:
ρ(p, p′) = Σ_i √(p(i)·p′(i))
wherein p and p′ are the color histograms of the two images, and the Bhattacharyya coefficient ρ takes values in [0, 1].
Further, a similarity s(v_c, v_k) is calculated between the bag-of-words vector v_c of the current key frame and the bag-of-words vector v_k of another image frame; if s(v_c, v_k) is greater than a preset threshold, it is determined that a loop exists between the current key frame F_c and the k-th frame F_k; otherwise, no loop is formed.
Further, the ORB feature points consist of key points and BRIEF-32 descriptors;
after extracting key points, screening the extracted key points by adopting a quadtree splitting method and a non-maximum suppression algorithm to remove edge effects;
the BRIEF-32 descriptor is a 256-bit binary vector, and each bit in the binary vector is determined by the color similarity of any two pixel blocks in a circular area;
the circular area takes a key point as a center and has a radius of m pixels;
the pixel block is an area acquired in the circular area according to a preset mode.
Further, performing a temporal consistency detection verification on the obtained image frame, including:
acquiring a plurality of image frames adjacent to the current key frame, detecting whether a loop is formed according to the acquired image frames, and if the loop is formed, determining that the loop meets the time consistency;
performing spatial consistency detection verification on the obtained image frame, including:
and calculating pose transformation between the current key frame and the image frames forming the loop, and if the amplitude of the pose transformation is smaller than a threshold value, determining that the loop meets the spatial consistency.
The application adopts another technical scheme that:
a color histogram based loop detection apparatus comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
The application adopts another technical scheme that:
a storage medium having stored therein a processor executable program which when executed by a processor is for performing the method as described above.
The beneficial effects of the application are as follows: the color histogram, a global color feature of the image, is applied to loop detection, which provides richer image information for the loop detection algorithm and improves the loop detection accuracy; in addition, the HSV color space is quantized at unequal intervals before the color histogram is calculated, which reduces the computational complexity of the algorithm.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the following description is made with reference to the accompanying drawings of the embodiments of the present application or the related technical solutions in the prior art, and it should be understood that the drawings in the following description are only for convenience and clarity of describing some embodiments in the technical solutions of the present application, and other drawings may be obtained according to these drawings without the need of inventive labor for those skilled in the art.
FIG. 1 is a schematic flow chart of a loop detection method based on a color histogram in an embodiment of the application;
FIG. 2 is a flowchart of a K-means clustering algorithm in an embodiment of the application;
FIG. 3 is a schematic diagram illustrating loop time consistency and spatial consistency detection in an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
In the description of the present application, it should be understood that references to orientation descriptions such as upper, lower, front, rear, left, right, etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of description of the present application and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the present application.
In the description of the present application, "several" means one or more and "a plurality of" means two or more; greater than, less than, exceeding, etc. are understood to exclude the stated number, while above, below, within, etc. are understood to include it. The terms "first" and "second" are used only to distinguish technical features and should not be construed as indicating or implying relative importance, the number of the indicated technical features, or their precedence.
In the description of the present application, unless explicitly defined otherwise, terms such as arrangement, installation, connection, etc. should be construed broadly and the specific meaning of the terms in the present application can be reasonably determined by a person skilled in the art in combination with the specific contents of the technical scheme.
As shown in fig. 1, the present embodiment provides a loop detection method based on a color histogram, which includes the following steps:
s1, acquiring a color image of a key frame, and converting the color image into a gray level image and an HSV image. Two or more key frames may be included in a video.
And S2, extracting ORB characteristic points and LSD line characteristics according to the gray level image and the HSV image.
The extracted feature points should be distributed as uniformly as possible over the whole image. Two measures are taken in this embodiment to ensure a uniform distribution of ORB feature points: first, the image is divided into blocks and feature points are extracted per block; second, the feature points are screened with a quadtree splitting method and a non-maximum suppression algorithm. LSD features are straight line segments detected in the image; their descriptor is the LBD, a binary descriptor.
The ORB feature points are composed of FAST key points and BRIEF descriptors, and in order to enable the feature points to have direction invariance, the direction vector from the geometric center of the pixel block to the gray center of the pixel block is used for representing the direction of the feature points.
When ORB characteristic points of each frame of image are extracted, the image is converted into a gray level image and an HSV image, and an image pyramid is respectively constructed. By establishing an image pyramid, feature points are extracted in each layer of the pyramid, so that a scale space is formed, and the scale invariance of the feature points is ensured.
Extracting FAST key points from each layer of the gray image pyramid; the FAST key point is a corner point, and is judged by comparing the gray value of a pixel point with the gray value of a nearby point.
Consider a pixel p in the gray image with gray value I_p, and the 16 pixels with gray values I_i (i = 1, 2, ..., 16) lying on the circle of radius 3 pixels centered at p. Let N be the number of these circle pixels whose gray value differs from I_p by more than a threshold t. In the present embodiment the threshold is t = 0.2·I_p; if N > N_0, the point p is considered a key point, where N_0 is typically 12 or 9, and N_0 = 9 in this embodiment.
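A minimal sketch of this key-point test, assuming the standard 16-pixel Bresenham circle of radius 3 and a plain count of differing circle pixels (no contiguous-arc requirement); the function name and parameters are illustrative, and the pixel is assumed to lie at least 3 pixels from the image border.

    import numpy as np

    # Offsets of the 16 pixels on the Bresenham circle of radius 3 used by FAST.
    CIRCLE_OFFSETS = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
                      (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

    def is_fast_keypoint(gray, x, y, n0=9, t_ratio=0.2):
        """Return True if pixel (x, y) of the gray image passes the corner test described above."""
        ip = float(gray[y, x])
        t = t_ratio * ip                               # threshold t = 0.2 * I_p
        diffs = [abs(float(gray[y + dy, x + dx]) - ip) for dx, dy in CIRCLE_OFFSETS]
        n = sum(d > t for d in diffs)                  # N: circle pixels differing from I_p by more than t
        return n > n0                                  # key point if N > N_0 (N_0 = 9 in this embodiment)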
To reduce edge effects, the feature points should be distributed as uniformly as possible over the whole image, so before extracting key points the gray image is divided into a number of 30×30 cells and feature points are extracted within each cell. Let M_0 be the number of feature points extracted from the whole image and M_1 the desired number of feature points; the condition M_0 > M_1 should be satisfied.
The gray centroid method is adopted to calculate the direction for each key point, and the calculation method is as follows:
A disc-shaped area (Patch) of radius r pixels centered on the key point is selected, and the centroid of the image block weighted by gray value, i.e. the gray centroid C, is computed:
C = (m_10 / m_00, m_01 / m_00)
wherein the image moments are m_pq = Σ_{(x,y)∈Patch} x^p · y^q · I(x,y), p, q ∈ {0, 1}.
Assuming the gray centroid C does not coincide with the geometric center O of the disc, the direction of the key point is defined by the direction angle θ of the vector from O to C:
θ = atan2(m_01, m_10).
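A minimal sketch of the gray-centroid orientation computation; using a square patch as a stand-in for the disc-shaped Patch and taking coordinates relative to its geometric center are simplifications made here for illustration.

    import numpy as np

    def keypoint_orientation(patch):
        """Direction angle theta of a key point from the gray centroid of its surrounding patch."""
        h, w = patch.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(float)
        xs -= (w - 1) / 2.0                    # x relative to the geometric center O
        ys -= (h - 1) / 2.0                    # y relative to the geometric center O
        m10 = np.sum(xs * patch)               # first-order moments weighted by gray value
        m01 = np.sum(ys * patch)
        return np.arctan2(m01, m10)            # theta = atan2(m01, m10)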
This embodiment uses the BRIEF-32 descriptor, i.e. each feature point is described by a 256-bit binary vector. In a 31×31-pixel image block centered on the key point, 256 pairs of pixel points are selected in a machine-learning manner; their coordinates (x_i, y_i), i = 1, 2, ..., 512, form the matrix D:
D = [x_1, x_2, ..., x_512; y_1, y_2, ..., y_512]
To ensure rotation invariance of the feature point descriptor, the matrix D is rotated according to the direction angle θ of the feature point:
D_θ = R_θ·D
wherein R_θ is the rotation matrix for the direction angle θ of the feature point:
R_θ = [cos θ, -sin θ; sin θ, cos θ]
D_θ is the matrix of rotated pixel coordinates; the i-th rotated point pair (x′_i1, y′_i1), (x′_i2, y′_i2) corresponds to the i-th bit Des_i of the descriptor. In each of the three HSV single-channel images, the mean pixel value of the disc-shaped pixel block (Patch) of radius 2 pixels centered at (x′_i1, y′_i1) and at (x′_i2, y′_i2) is calculated.
The color similarity of the two pixel blocks is then calculated, where Cdist_i is the color difference and Bdist_i is the brightness difference between the two pixel blocks.
The i-th bit Des_i of the descriptor is then defined as follows: Des_i = 1 if Cdist_i < ε_C and Bdist_i < ε_B, and Des_i = 0 otherwise; that is, ε_C and ε_B are the thresholds on color difference and brightness difference, and when both Cdist_i and Bdist_i are below their thresholds the two pixel blocks are considered similar in color and the corresponding descriptor bit is set to 1, otherwise the colors differ and the bit is set to 0.
Performing this operation for all 256 pixel-point pairs yields a 256-bit binary vector, namely the R-BRIEF descriptor of the feature point.
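The sketch below shows how one descriptor bit could be derived from a rotated point pair; the concrete choice of Cdist (mean H and S differences) and Bdist (mean V difference) and the threshold values are assumptions, since the exact formulas are not reproduced above.

    import numpy as np

    def brief_color_bit(hsv, p1, p2, radius=2, eps_c=10.0, eps_b=10.0):
        """Compute one R-BRIEF bit from a rotated point pair p1, p2 (x, y) in an HSV image."""
        def block_mean(p):
            x, y = p
            block = hsv[y - radius:y + radius + 1, x - radius:x + radius + 1, :].astype(float)
            return block.reshape(-1, 3).mean(axis=0)       # mean H, S, V over the pixel block

        h1, s1, v1 = block_mean(p1)
        h2, s2, v2 = block_mean(p2)
        cdist = abs(h1 - h2) + abs(s1 - s2)                # assumed color difference Cdist_i (H and S)
        bdist = abs(v1 - v2)                               # assumed brightness difference Bdist_i (V)
        return 1 if (cdist < eps_c and bdist < eps_b) else 0   # Des_i = 1 iff both are below threshold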
S3, carrying out non-uniform quantization on the HSV color space of the HSV image, and acquiring a quantized color histogram according to the HSV color image.
The HSV color space is non-uniformly quantized, and the quantized one-dimensional color histogram is computed from the HSV color image. Quantizing the HSV color space means dividing it into a number of small color intervals, each corresponding to one bin of the color histogram; the histogram is obtained by counting the pixels whose color falls in each bin. Since a color image is represented by the three channels hue H, saturation S and brightness V, the number of representable colors is very large, and computing the color histogram directly without quantizing the color space would be very expensive. The application therefore first quantizes the three color components H, S and V at unequal intervals according to the human eye's perception of color, and then computes the histogram, so as to reduce the computational complexity of the algorithm. The specific quantization is as follows:
According to the ranges of the different colors in the HSV color space and the subjective perception of human vision, the H component is divided into 16 intervals, and the S component and the V component are each divided into 4 intervals.
the H, S, V components are weighted and combined according to the quantization levels to form a one-dimensional color vector:
G=Q S Q V H+Q V S+V
wherein Q is S 、Q V Quantization levels for saturation S and brightness V components, respectively, with G ranging from 0,1, 255]。
Further, a one-dimensional face of the HSV image can be obtained by calculating GA color histogram, wherein the abscissa of the color histogram is the value of G, namely the color value of the whole HSV image after non-uniform quantization, and the ordinate is the number n of pixels with the color value falling in the corresponding interval i
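A sketch of this quantization and histogram step, assuming OpenCV HSV ranges (H in [0, 180), S and V in [0, 256)); the bin boundaries below are illustrative placeholders, since the patent's actual non-uniform interval table is not reproduced here.

    import cv2
    import numpy as np

    # Placeholder boundaries: 16 levels for H, 4 levels each for S and V (Q_S = Q_V = 4).
    H_EDGES = np.linspace(0, 180, 17)[1:-1]       # 15 interior edges -> H level in 0..15
    S_EDGES = np.array([64, 128, 192])            # 3 interior edges  -> S level in 0..3
    V_EDGES = np.array([64, 128, 192])            # 3 interior edges  -> V level in 0..3

    def quantized_color_histogram(bgr_image, normalize=True):
        """256-bin one-dimensional color histogram of G = Qs*Qv*H + Qv*S + V for a key frame."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        h = np.digitize(hsv[:, :, 0], H_EDGES)    # quantized hue
        s = np.digitize(hsv[:, :, 1], S_EDGES)    # quantized saturation
        v = np.digitize(hsv[:, :, 2], V_EDGES)    # quantized brightness
        g = 4 * 4 * h + 4 * s + v                 # G = Q_S * Q_V * H + Q_V * S + V, values in 0..255
        hist = np.bincount(g.ravel(), minlength=256).astype(float)
        return hist / hist.sum() if normalize else hist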
S4, constructing an image main color vector according to the color histogram, and acquiring an image which belongs to the same class as the current key frame according to the image main color vector.
The three color values with the largest proportion in the color histogram are taken as the main colors of the image. Each of these color values is represented by an 8-bit binary vector v_i (i = 1, 2, 3), and the three vectors are sequentially combined into a 24-bit binary primary color vector P:
P = [v_1, v_2, v_3]
Images of the same category as the current frame are then found according to P.
further, as shown in fig. 2, the K-means algorithm is used to classify the primary color vectors P of all key frames, and a K-ary tree is built to improve the retrieval efficiency.
S5, calculating the Bhattacharyya coefficient between the color histogram of the current key frame and the color histograms of its same-class images, and constructing a loop candidate image set according to the Bhattacharyya coefficient.
The color histograms are normalized, the Bhattacharyya coefficient between the color histogram of the current frame and that of each image in the same class is calculated, and all images whose Bhattacharyya coefficient is greater than a threshold form the loop candidate image set C. The Bhattacharyya coefficient is a measure of the similarity between two statistical samples and is calculated as:
ρ(p, p′) = Σ_{i=0}^{n} √(p(i)·p′(i))
where p and p′ are the color histograms of the two images and n = 255. The Bhattacharyya coefficient ρ ranges over [0, 1]; the closer it is to 1, the more similar the two color histograms.
And selecting an image with higher similarity with the color histogram of the current key frame from the image frames obtained by clustering according to a preset threshold value to form a candidate image set C.
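A sketch of the candidate-set construction from normalized histograms; the threshold value is an arbitrary placeholder.

    import numpy as np

    def bhattacharyya(p, q):
        """Bhattacharyya coefficient of two normalized color histograms, in [0, 1]."""
        return float(np.sum(np.sqrt(p * q)))

    def candidate_set(current_hist, class_hists, threshold=0.8):
        """Indices of same-class key frames whose color histogram is close enough to the current frame."""
        return [k for k, h in enumerate(class_hists)
                if bhattacharyya(current_hist, h) > threshold]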
And S6, if the loop candidate image set is empty, returning to the step S1. If the loop candidate image set is non-empty, calculating the bag-of-words vectors of all color images in the loop candidate image set, calculating the similarity between the bag-of-words vector of the current key frame and the bag-of-words vectors of the other image frames, and determining that a loop exists if the similarity exceeds a preset threshold. The bag-of-words vector is calculated from the ORB feature points and the LSD line features.
It is first judged whether the candidate image set C is empty. If C is empty, no historical key frame has a color histogram similar to that of the current frame, i.e. the current key frame does not form a loop, and the method returns to step S1 to process the next key frame. If C is not empty, the current key frame may form a loop, and the bag-of-words model is used for further judgment.
According to the bag-of-words model, the point-feature and line-feature bag-of-words vectors of the current frame and of all images in the set C are computed, and the similarity s(v_c, v_k) between the bag-of-words vector v_c of the current frame and the bag-of-words vector v_k of each candidate frame is calculated.
If s(v_c, v_k) is greater than a preset threshold, a loop is considered to exist between the current frame F_c and the k-th frame F_k; otherwise, the current frame does not form a loop.
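The exact similarity formula is not reproduced above; as an assumed stand-in, the sketch below uses the L1-based score common in ORB-SLAM-style bag-of-words matching, applied to the (already weighted) bag-of-words vectors, with an arbitrary threshold.

    import numpy as np

    def bow_similarity(vc, vk):
        """Assumed L1-based score between two bag-of-words vectors; higher means more similar."""
        vc = vc / np.linalg.norm(vc, ord=1)
        vk = vk / np.linalg.norm(vk, ord=1)
        return 1.0 - 0.5 * float(np.sum(np.abs(vc - vk)))

    def is_loop_candidate(vc, vk, threshold=0.3):
        """Accept a loop between the current frame and candidate frame k when s(vc, vk) exceeds the threshold."""
        return bow_similarity(vc, vk) > threshold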
S7, acquiring image frames according to the similarity, performing time consistency and space consistency detection verification on the acquired image frames, and if the verification is passed, judging that loop-back exists and performing loop-back correction.
As shown in fig. 3, temporal-consistency and spatial-consistency verification is performed on the image frames where a loop may exist. Temporal-consistency verification checks whether the same loop can be detected continuously over the frames around the loop frame: if the frames following the current frame still detect the same loop, the loop is considered to satisfy temporal consistency. Spatial-consistency verification computes the pose transformation between the current frame and the loop frame: if the magnitude of this transformation is small enough, the loop is considered to satisfy spatial consistency.
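A minimal sketch of the two checks; representing poses as 4x4 homogeneous matrices, measuring the transform magnitude by translation norm and rotation angle, and all thresholds are assumptions made here for illustration.

    import numpy as np

    def temporally_consistent(loop_hits, min_consecutive=3):
        """loop_hits[i] is True when the i-th frame after the current one also detects the same loop."""
        return len(loop_hits) >= min_consecutive and all(loop_hits[:min_consecutive])

    def spatially_consistent(T_current, T_loop, max_trans=1.0, max_rot=0.3):
        """Accept the loop when the relative pose between the two frames is small enough."""
        T_rel = np.linalg.inv(T_loop) @ T_current                     # relative transform
        trans = np.linalg.norm(T_rel[:3, 3])                          # translation magnitude
        cos_angle = (np.trace(T_rel[:3, :3]) - 1.0) / 2.0
        angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))              # rotation angle (radians)
        return trans < max_trans and angle < max_rot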
Further, if the detected loop can meet the time consistency and the space consistency, the detected loop is regarded as a real loop, loop correction is started, global optimization is performed on all key frames on the loop, and accumulated errors are eliminated.
In summary, compared with the prior art, the method of the embodiment has the following beneficial effects:
(1) The color histogram, a global color feature of the image, is applied to loop detection, providing richer image information for the loop detection algorithm and improving the loop detection accuracy.
(2) The HSV color space is quantized at unequal intervals according to the human eye's perception of color before the color histogram is calculated, which reduces the computational complexity of the algorithm.
(3) The two-stage detection strategy based on the color histogram and the bag-of-words model reduces the calculation and matching of the bag-of-words vector to a certain extent, and improves the operation efficiency of the algorithm.
The embodiment also provides a loop detection device based on a color histogram, which comprises:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
The color histogram-based loop detection device can execute any combination implementation steps of the color histogram-based loop detection method provided by the embodiment of the method, and has corresponding functions and beneficial effects.
Embodiments of the present application also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the method shown in fig. 1.
The embodiment also provides a storage medium which stores instructions or programs for executing the loop detection method based on the color histogram, and when the instructions or programs are run, the instructions or programs can execute any combination implementation steps of the method embodiment, and the method has corresponding functions and beneficial effects.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present application are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the application is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the described functions and/or features may be integrated in a single physical device and/or software module or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present application. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the application as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the application, which is to be defined in the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
In the foregoing description of the present specification, reference to the terms "one embodiment/example", "another embodiment/example", "certain embodiments/examples", and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the application, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present application has been described in detail, the present application is not limited to the above embodiments, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present application, and these equivalent modifications and substitutions are intended to be included in the scope of the present application as defined in the appended claims.

Claims (9)

1. The loop detection method based on the color histogram is characterized by comprising the following steps of:
acquiring a color image of a key frame, and performing color processing on the acquired color image;
the color processing includes:
converting the color image into a gray scale image and an HSV image;
ORB characteristic points and LSD line characteristics are extracted according to the gray level diagram and the HSV image;
carrying out non-uniform quantization on an HSV color space of the HSV image, and obtaining a quantized color histogram according to the HSV color image;
constructing an image main color vector according to the color histogram, and acquiring an image belonging to the same class as the current key frame according to the image main color vector;
calculating a Bhattacharyya coefficient between the color histogram of the current key frame and the color histograms of the images of the same class as the current key frame, and constructing a loop candidate image set according to the Bhattacharyya coefficient;
if the loop candidate image set is empty, returning to acquire a color image of the next key frame, and performing color processing;
if the loop candidate image set is non-empty, calculating the bag-of-words vectors of all color images in the loop candidate image set, calculating the similarity between the bag-of-words vector of the current key frame and the bag-of-words vectors of the other image frames, and determining that a loop exists if the similarity exceeds a preset threshold;
acquiring an image frame according to the similarity, performing time consistency and space consistency detection verification on the acquired image frame, and if the verification is passed, judging that a loop exists and performing loop correction;
the constructing an image main color vector according to the color histogram, and obtaining an image of the same category as the current key frame according to the image main color vector comprises the following steps:
selecting the three color values with the largest proportion in the color histogram as the main colors of the image, wherein each main color is represented by an 8-bit binary vector v_i, and the main colors are sequentially combined into a 24-bit binary primary color vector P:
P = [v_1, v_2, v_3]
and acquiring the image belonging to the same class as the current key frame by adopting a K-means algorithm according to the image main color vector P.
2. The color histogram-based loop detection method according to claim 1, wherein the non-uniformly quantizing the HSV color space of the HSV image, and obtaining the quantized color histogram according to the HSV color image, includes:
dividing an HSV color space of the HSV image into a plurality of color intervals, wherein each color interval corresponds to an interval bin in a color histogram,
and calculating the number of pixels with colors in each bin interval according to the HSV color map, and obtaining a one-dimensional color histogram.
3. The color histogram-based loop detection method according to claim 2, wherein the one-dimensional color histogram is obtained by:
dividing an H component of an HSV color space into 16 sections, and dividing an S component and a V component into 4 sections respectively;
the H component, the S component and the V component are weighted and combined into a one-dimensional color vector:
G = Q_S·Q_V·H + Q_V·S + V
wherein Q_S and Q_V are the quantization levels of the saturation S and brightness V components, respectively, and the one-dimensional color vector G takes values in [0, 1, ..., 255];
A one-dimensional color histogram of the HSV image is obtained by calculating a one-dimensional color vector G.
4. The color histogram-based loop detection method according to claim 1, wherein the Bhattacharyya coefficient is calculated as:
ρ(p, p′) = Σ_i √(p(i)·p′(i))
wherein p and p′ are the color histograms of the two images, and the Bhattacharyya coefficient ρ takes values in [0, 1].
5. The color histogram-based loop detection method of claim 1, wherein a similarity s(v_c, v_k) is calculated between the bag-of-words vector v_c of the current key frame and the bag-of-words vector v_k of another image frame; if s(v_c, v_k) is greater than a preset threshold, it is determined that a loop exists between the current key frame F_c and the k-th frame F_k; otherwise, no loop is formed.
6. The method for color histogram based loop detection according to claim 1, wherein,
the ORB feature points consist of key points and BRIEF-32 descriptors;
after extracting key points, screening the extracted key points by adopting a quadtree splitting method and a non-maximum suppression algorithm to remove edge effects;
the BRIEF-32 descriptor is a 256-bit binary vector, and each bit in the binary vector is determined by the color similarity of any two pixel blocks in a circular area;
the circular area takes a key point as a center and has a radius of m pixels;
the pixel block is an area acquired in the circular area according to a preset mode.
7. The method for color histogram based loop detection according to claim 1, wherein,
performing time consistency detection verification on the obtained image frames, including:
acquiring a plurality of image frames adjacent to the current key frame, detecting whether a loop is formed according to the acquired image frames,
if the loop is formed, determining that the loop meets the time consistency;
performing spatial consistency detection verification on the obtained image frame, including:
and calculating pose transformation between the current key frame and the image frames forming the loop, and if the amplitude of the pose transformation is smaller than a threshold value, determining that the loop meets the spatial consistency.
8. A color histogram-based loop detection apparatus, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a color histogram based loop detection method as claimed in any one of claims 1-7.
9. A storage medium having stored therein a processor executable program, wherein the processor executable program when executed by a processor is for performing the method of any of claims 1-7.
Application CN202110300183.9A (priority date 2021-03-22, filing date 2021-03-22): Loop detection method, device and storage medium based on color histogram; status: Active; publication: CN112991448B (en)

Priority Applications (1)

Application Number: CN202110300183.9A; Priority Date: 2021-03-22; Filing Date: 2021-03-22; Title: Loop detection method, device and storage medium based on color histogram


Publications (2)

Publication Number | Publication Date
CN112991448A (en) | 2021-06-18
CN112991448B (en) | 2023-09-26

Family

ID=76334255


Country Status (1)

Country Link
CN (1) CN112991448B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118038103B (en) * 2024-04-11 2024-06-14 南京师范大学 Visual loop detection method based on improved dynamic expansion model self-adaptive algorithm


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8326028B2 (en) * 2007-12-26 2012-12-04 Hitachi Computer Peripherals Co., Ltd. Dropout color processing method and processing apparatus using same
JP5217886B2 (en) * 2008-10-14 2013-06-19 富士通株式会社 Loopback device and mirroring method
US10636114B2 (en) * 2018-08-04 2020-04-28 Beijing Jingdong Shangke Information Technology Co., Ltd. System and method for scan-matching oriented visual slam

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003244469A (en) * 2002-02-19 2003-08-29 Nisca Corp Image deciding apparatus, image reader, image deciding method, program and storage medium stored with program
CN103392185A (en) * 2010-12-30 2013-11-13 派尔高公司 Color similarity sorting for video forensics search
CN107025668A (en) * 2017-03-30 2017-08-08 华南理工大学 A kind of design method of the visual odometry based on depth camera
CN110047142A (en) * 2019-03-19 2019-07-23 中国科学院深圳先进技术研究院 No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium
CN110188809A (en) * 2019-05-22 2019-08-30 浙江大学 A kind of winding detection method based on image block
CN112507778A (en) * 2020-10-16 2021-03-16 天津大学 Loop detection method of improved bag-of-words model based on line characteristics

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on key technologies of graph-optimization-based monocular visual SLAM; 杜鹏飞; China Master's Theses Full-text Database, Information Science and Technology; 2018-12-15 (No. 12); Chapter 4 *
Research on an indoor self-localization method based on digital landmark maps; 方榕; China Master's Theses Full-text Database, Information Science and Technology; 2021-01-15 (No. 01); Chapter 3 *
Research and implementation of a vision-based indoor autonomous navigation system for a quadrotor UAV; 张鹏; China Master's Theses Full-text Database, Engineering Science and Technology II; 2020-01-15 (No. 01); Chapters 3-4 *

Also Published As

Publication number Publication date
CN112991448A (en) 2021-06-18


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant