CN110688936A - Method, machine and storage medium for representing characteristics of environment image - Google Patents

Method, machine and storage medium for representing characteristics of environment image

Info

Publication number
CN110688936A
Authority
CN
China
Prior art keywords
line segment
sub
image
environment
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910904694.4A
Other languages
Chinese (zh)
Other versions
CN110688936B (en)
Inventor
罗丹平
闫瑞君
叶力荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Silver Star Intelligent Group Co Ltd
Original Assignee
Shenzhen Silver Star Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Silver Star Intelligent Technology Co Ltd filed Critical Shenzhen Silver Star Intelligent Technology Co Ltd
Priority to CN201910904694.4A priority Critical patent/CN110688936B/en
Publication of CN110688936A publication Critical patent/CN110688936A/en
Application granted granted Critical
Publication of CN110688936B publication Critical patent/CN110688936B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention relate to a method, a machine and a storage medium for representing the features of an environment image, the method comprising: acquiring an environment image; performing image recognition on the environment image to obtain the line segments in it; obtaining a local area within a preset range around each line segment; acquiring the pixel features of the local area of the line segment and obtaining a descriptor of the line segment from the pixel features; matching the descriptor of the line segment with a pre-acquired environment dictionary to obtain the category of the line segment, the environment dictionary comprising at least two line segment samples together with the descriptors and categories corresponding to the line segment samples; and obtaining the line feature of the line segment from the coordinate information and category of the line segment, and performing image matching based on the line features. When this feature representation method is applied to image matching in a structurally simple scene containing many lines, the matching can be performed on line features with high accuracy.

Description

Method, machine and storage medium for representing characteristics of environment image
Technical Field
Embodiments of the invention relate to the field of machine vision, and in particular to a method, a machine and a storage medium for representing the features of an environment image.
Background
Robots are popular because they can relieve people of heavy household work. A robot needs to move through an unknown environment while completing the user's tasks. To achieve autonomous positioning and navigation during movement, it must build an incremental map while estimating its own position within that map.
During positioning and mapping, the robot needs to match positions across different environment images. The prior art typically uses feature-point matching, which achieves good results in complex outdoor environments.
In implementing the invention, the inventors found at least the following problem in the related art: a home environment is relatively simple and often contains large areas of uniform color, so feature-point matching yields poor matching accuracy there.
Disclosure of Invention
Embodiments of the invention aim to provide a feature representation method, a machine and a storage medium for an environment image, in which the environment image is represented by line features; when environment images are matched with this method, the matching accuracy is high.
In a first aspect, an embodiment of the present invention provides a method for representing features of an environment image, where the method includes:
acquiring an environment image;
performing image recognition on the environment image to obtain line segments in the environment image;
obtaining a local area in a preset range around the line segment based on the line segment;
acquiring pixel characteristics of a local area of the line segment, and acquiring a descriptor of the line segment according to the pixel characteristics;
matching the descriptors of the line segments with a pre-acquired environment dictionary to obtain the categories of the line segments, wherein the environment dictionary comprises at least two line segment samples and the descriptors and the categories corresponding to the line segment samples;
and obtaining the line characteristics of the line segment according to the coordinate information and the category of the line segment.
In some of these embodiments, the method further comprises obtaining an environment dictionary:
acquiring at least two environmental image samples;
performing image recognition on the at least two environmental image samples to obtain line segment samples in the at least two environmental image samples;
obtaining a local area in a preset range around the line segment sample based on the line segment sample;
acquiring pixel characteristics of a local area of the line segment sample, and acquiring a descriptor of the line segment sample according to the pixel characteristics;
and clustering the descriptors of the line segment samples to obtain at least one category so as to obtain the descriptors and the categories corresponding to the line segment samples.
In some embodiments, the obtaining pixel characteristics of a local region of the line segment and obtaining a descriptor of the line segment according to the pixel characteristics includes:
dividing the local area of the line segment into at least three sub-areas, and obtaining the pixel characteristics corresponding to each sub-area;
and obtaining a descriptor of the line segment according to the pixel characteristics of each sub-area in the local area of the line segment.
In some embodiments, the line characteristics of the line segment further include a second category;
the method further comprises the following steps:
and comparing the pixel characteristics of each subarea in the local area of the line segment, and obtaining a second category of the line segment based on the change of the pixel characteristics of each subarea.
In some embodiments, dividing the local region into at least three sub-regions includes:
the local area is divided into at least three sub-areas along a vertical direction of the line segment.
In some embodiments, the pixel characteristic comprises a first pixel value, a second pixel value, and a third pixel value;
the comparing the pixel characteristics of each sub-region in the local region of the line segment, and obtaining a second category of the line segment based on the change of the pixel characteristics of each sub-region includes:
comparing the first pixel value, the second pixel value and the third pixel value of each sub-area in the local area in a fixed order, and assigning a first value if the pixel value of the adjacent sub-area is larger, or a second value otherwise;
and composing the obtained first values and second values into data in a fixed order, the data serving as the second category.
In some embodiments, the first pixel value corresponding to the sub-region is an R average value of the sub-region, the second pixel value corresponding to the sub-region is a G average value of the sub-region, and the third pixel value corresponding to the sub-region is a B average value of the sub-region.
In some embodiments, after acquiring the local region, the method further comprises:
and if the local area exceeds the image range of the environment image, discarding the local area.
In some embodiments, the matching the descriptor of the line segment to a pre-acquired environment dictionary to obtain the category of the line segment includes:
obtaining the adjacent range space of the descriptor of the line segment according to the descriptor of the line segment and a preset distance;
and obtaining descriptors of the line segment samples in the environment dictionary, wherein the line segment samples are located in the adjacent range space, and taking the category which contains the largest number of descriptors in the descriptors of the line segment samples in the adjacent range space as the category of the line segment.
In a second aspect, an embodiment of the present invention further provides a machine, including:
a machine main body;
a controller built in the machine body;
the controller includes:
at least one processor, and
a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
In a third aspect, the present invention also provides a non-transitory computer-readable storage medium storing computer-executable instructions, which, when executed by a machine, cause the machine to perform the above-mentioned method.
Compared with the prior art, the application has at least the following beneficial effects. According to the feature representation method, machine and computer-readable storage medium of the environment image, a line segment is extracted from the environment image, the local area within a preset range around it is obtained, and the descriptor of the line segment is derived from the pixel features of that local area. The descriptor is matched against an environment dictionary to obtain the category of the line segment, and the line feature of the line segment is then obtained from its coordinate information and category. When this feature representation method is applied to image matching in a structurally simple scene with many lines, the matching can be performed on line features with higher accuracy than feature-point matching. Representing the line feature by the segment's coordinate information together with its category describes the segment more precisely, which further improves matching precision.
Drawings
One or more embodiments are illustrated by way of example with reference to the corresponding figures in the accompanying drawings; like reference numerals denote similar elements, and unless otherwise specified the figures are not drawn to scale.
FIG. 1 is a schematic diagram of an application scenario of a method and an apparatus for representing features of an environment image according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of one embodiment of the robot of the present invention;
FIG. 3a is a flow chart illustrating an embodiment of a method for characterizing an environmental image according to the present invention;
FIG. 3b is a flowchart illustrating an environment dictionary obtaining process according to an embodiment of the method for representing features of an environment image according to the present invention;
FIG. 4 is a schematic diagram of a local region for acquiring line segments in an embodiment of a method for characterizing an environmental image according to the present invention;
FIG. 5 is a schematic diagram of a local region divided into sub-regions in an embodiment of the method for characterizing an environmental image according to the present invention;
FIG. 6 is a schematic diagram of the structure of one embodiment of the apparatus for characterizing an environmental image according to the present invention;
FIG. 7 is a schematic structural diagram of another embodiment of the apparatus for characterizing an environmental image according to the present invention;
fig. 8 is a schematic diagram of the hardware structure of the controller in one embodiment of the robot of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the field of machine vision, two or more scene images often need to be matched, and at present this is mostly done by matching feature points. That method achieves high matching accuracy in relatively complex environments (such as outdoor environments). A household environment, by contrast, has a simpler structure and color composition: for aesthetic reasons a single article usually has a uniform color, while a large number of lines are formed where two articles meet. In such an environment, describing a scene image with line features and matching images based on those features yields higher accuracy than feature-point matching.
The line features of an environment image may be obtained by performing line recognition on the image to extract its line segments, and then representing the line feature of each segment by its coordinates and category. The category of a line segment can be obtained from a color-feature descriptor of a certain area around the segment. Specifically, an environment dictionary is obtained through training; it contains a large number of line segment samples, each with a corresponding color-feature descriptor and category. Comparing the color-feature descriptor of a line segment with the color-feature descriptors of the samples in the environment dictionary yields the category of the segment.
Fig. 1 shows an application scenario of the feature representation method and apparatus for an environment image according to an embodiment of the present invention. The application scenario includes a robot 10, wherein the robot 10 may be a mobile robot, such as a sweeping robot, an inspection robot, an unmanned sampling robot, an unmanned forklift, and so on.
The robot 10 may need to move in an unknown environment to accomplish a user's task or for other reasons. To achieve autonomous positioning and navigation while moving, it needs to build an incremental map and localize itself at the same time, i.e. estimate its own position in the map.
The robot 10 localizes itself in the map according to the observed distances to surrounding objects. These distances may be obtained with binocular vision, which requires matching the two images captured by a binocular camera. When the robot works in a structurally simple environment with many lines, representing the scene images with line features as in the embodiments of the invention and matching the images on those features gives higher matching accuracy, hence more accurate observed distances and, in turn, better positioning accuracy.
In some embodiments, referring to fig. 2, the robot 10 includes a robot main body 11, a camera 12, a controller 13, and a traveling mechanism 14. The robot main body 11 is a main body structure of the robot, and may be made of a corresponding shape structure and manufacturing material (such as hard plastic or metal such as aluminum or iron) according to actual needs of the robot 10, for example, the shape structure is set to be a flat cylinder shape common to sweeping robots.
The traveling mechanism 14 is a structural device provided on the robot main body 11 that gives the robot 10 the ability to move; it may be implemented with any type of moving means, such as rollers or tracks. The camera device 12, for example an RGBD camera, is used to capture images of the surroundings.
The controller 13 is an electronic computing core built in the robot main body 11 for executing logical operation steps to realize intelligent control of the robot 10. The controller 13 is connected to the camera device 12, and is configured to execute a preset algorithm to perform self-positioning and map composition according to the surrounding environment image acquired by the camera device 12.
In this embodiment, a feature representation method of an environment image according to an embodiment of the present invention may be executed by the controller 13, and the controller 13 executes the feature representation method to represent the environment image obtained by the image pickup device with line features and perform image matching based on the line features.
It should be noted that the feature representation method of the environment image according to the embodiments of the present invention is particularly suitable for a structurally simple scene with many lines, where it achieves higher matching accuracy than the feature-point matching method. The method of the embodiments is equally applicable to other environments, such as complex outdoor environments.
The feature representation method of the environment image according to the embodiment of the present invention can be applied to various suitable machines, and can be applied to a non-movable robot as well as a movable robot. In addition to being applied to robots, the method of the embodiment of the present invention may also be applied to other machines, such as unmanned aerial vehicles, computers, mobile terminals, and the like.
Fig. 3a is a flowchart of an embodiment of the method for representing the features of an environment image according to the present invention. The method can be applied to any machine, such as the robot 10 shown in fig. 1 or fig. 2; when applied to the robot 10 it is executed by the controller 13, and when applied to another machine it is executed by that machine's controller. As shown in fig. 3a, the method includes:
101: an environmental image is acquired.
The environment image may be acquired by a camera device carried by the machine, or may be acquired by other equipment and then transmitted to the machine through a communication device.
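A minimal sketch of step 101, assuming the machine's camera is exposed to OpenCV as device index 0; both the index and the error handling are illustrative assumptions, not part of the application.

```python
import cv2

def acquire_environment_image(device_index=0):
    cap = cv2.VideoCapture(device_index)   # machine-mounted camera (assumed index)
    ok, frame = cap.read()                 # one BGR frame of the surroundings
    cap.release()
    if not ok:
        raise RuntimeError("failed to capture an environment image")
    return frame
```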
102: and carrying out image recognition on the environment image to obtain line segments in the environment image.
The line segments in the environment image may be identified with any suitable line detection algorithm in the prior art, such as the Hough line detection algorithm or the LSD (Line Segment Detector) line extraction algorithm.
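As a concrete illustration of step 102, the sketch below detects line segments with OpenCV's probabilistic Hough transform; the Canny and Hough parameters are illustrative assumptions, since the application does not fix a particular detector or its settings.

```python
import cv2
import numpy as np

def detect_line_segments(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                  # edge map fed to Hough
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                               minLineLength=30, maxLineGap=5)
    # Each detected segment is returned as (start_x, start_y, end_x, end_y).
    return [] if segments is None else [tuple(int(v) for v in s[0]) for s in segments]
```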
103: and obtaining a local area in a preset range around the line segment based on the line segment.
The local area may be an area within a certain range around the line segment, for example an area whose length covers the line segment and whose width extends somewhat beyond the width of the segment; it may take any suitable shape. In the embodiment shown in fig. 4, the local region is a rectangle whose length equals the length of the corresponding line segment.
In some embodiments, to further improve the accuracy of the line segment feature description, after a local region is obtained it is checked against the image range: if the local region exceeds the image range it is discarded, otherwise it is used in the subsequent calculation.
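The sketch below combines step 103 with this boundary check. The rectangle construction and the half-width `margin` of 8 pixels are assumptions made for illustration; the application only requires a local area within a preset range whose length covers the segment.

```python
import numpy as np

def local_region(segment, image_shape, margin=8):        # margin is an assumed preset range
    x1, y1, x2, y2 = segment
    h, w = image_shape[:2]
    d = np.array([x2 - x1, y2 - y1], dtype=float)        # segment direction
    n = np.array([-d[1], d[0]]) / (np.hypot(*d) + 1e-9)  # unit normal to the segment
    corners = np.array([[x1, y1] + margin * n, [x2, y2] + margin * n,
                        [x2, y2] - margin * n, [x1, y1] - margin * n])
    # Discard the region if any corner falls outside the environment image.
    if (corners < 0).any() or (corners[:, 0] >= w).any() or (corners[:, 1] >= h).any():
        return None
    return corners
```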
104: acquiring the pixel characteristics of the local area of the line segment, and acquiring the descriptor of the line segment according to the pixel characteristics.
The pixel feature of the local area may be its RGB value. To reduce noise, the RGB mean over the local area may be used: the values of the red (R), green (G) and blue (B) channels in the local area are averaged separately to obtain r, g and b, which are then concatenated in order, so that the descriptor of the line segment is (r, g, b).
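A minimal sketch of this simplest descriptor, assuming the local region is given as a boolean mask over the image; the mask representation is an implementation choice, not something the application prescribes.

```python
import numpy as np

def rgb_mean_descriptor(image_rgb, mask):
    pixels = image_rgb[mask.astype(bool)]   # N x 3 array of the region's pixels
    r, g, b = pixels.mean(axis=0)           # per-channel means suppress noise
    return np.array([r, g, b])              # descriptor (r, g, b)
```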
In other embodiments, to make the features of each line segment more distinctive and thus easier to distinguish, the local region of the line segment is divided into at least two sub-regions and the pixel features of each sub-region are obtained; the descriptor of the line segment is then derived from the pixel features of all sub-regions in the local region. The number of sub-regions may be any suitable number, such as 3, 4 or 5, and the division may be performed along any suitable direction, for example along the line segment or perpendicular to it.
In some embodiments, since the color generally does not change along the direction of the line segment, the local area is divided into a plurality of sub-areas along the direction perpendicular to the line segment, so that the line segment can be characterized more accurately. In the embodiment shown in fig. 5, the local area is equally divided into three sub-areas band1, band2 and band3 along the direction perpendicular to the line segment, where D_L denotes the direction of the line segment and D_⊥ denotes the direction perpendicular to it.
Taking RGB means as the pixel features and three sub-regions as an example, the descriptor of the line segment corresponding to the local region is obtained by first computing the RGB means r1, g1, b1, r2, g2, b2, r3, g3, b3 of the respective sub-regions; the descriptor of the line segment is then (r1, g1, b1, r2, g2, b2, r3, g3, b3).
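The following sketch computes this nine-dimensional descriptor by warping the rotated rectangle (the `corners` returned by the `local_region` sketch above) into an upright patch and splitting it into three bands across the segment. Warping with a perspective transform is my assumption; the application does not fix how the sub-regions are sampled.

```python
import cv2
import numpy as np

def band_descriptor(image_rgb, corners, n_bands=3):
    length = max(int(np.linalg.norm(corners[1] - corners[0])), 1)
    width = max(int(np.linalg.norm(corners[3] - corners[0])), 1)
    dst = np.array([[0, 0], [length - 1, 0],
                    [length - 1, width - 1], [0, width - 1]], dtype=np.float32)
    M = cv2.getPerspectiveTransform(corners.astype(np.float32), dst)
    patch = cv2.warpPerspective(image_rgb, M, (length, width))
    # Rows of the patch run perpendicular to the segment, so splitting along
    # axis 0 yields band1, band2, band3 as in Fig. 5.
    bands = np.array_split(patch, n_bands, axis=0)
    return np.concatenate([b.reshape(-1, 3).mean(axis=0) for b in bands])
```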
105: and matching the descriptors of the line segments with a pre-acquired environment dictionary to obtain the categories of the line segments, wherein the environment dictionary comprises at least two line segment samples and the descriptors and the categories corresponding to the line segment samples.
In another embodiment, the environment dictionary may be trained by the machine itself. In this embodiment, referring to fig. 3b, the method further includes the following steps for training the environment dictionary:
101 a: acquiring at least two environmental image samples;
102 a: performing image recognition on the at least two environmental image samples to obtain line segment samples in the at least two environmental image samples;
103 a: obtaining a local area in a preset range around the line segment sample based on the line segment sample;
104 a: acquiring pixel characteristics of a local area of the line segment sample, and acquiring a descriptor of the line segment sample according to the pixel characteristics;
in steps 101a-104a, a large number of environment image samples are exchanged, then line segment samples are identified based on the environment image samples, local areas of the line segment samples are obtained, and descriptors of the line segment samples are obtained according to pixel characteristics of the local areas. The steps of identifying the line segment samples, obtaining the local area, and obtaining the descriptor of the line segment samples may refer to steps 102, 103, and 104, which are not described herein again.
105 a: and clustering the descriptors of the line segment samples to obtain at least one category so as to obtain the descriptors and the categories corresponding to the line segment samples.
The descriptors of the line segment samples may be clustered with the K-means clustering algorithm to obtain at least one category: a value of K is set, and the pixel-feature descriptors of the line segment samples are clustered into K categories, which may be labeled with distinct characters such as 1, 2, …, K.
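A minimal sketch of step 105a, assuming scikit-learn is available; K=50 is an illustrative value, as the application only states that a K value is set.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_environment_dictionary(sample_descriptors, k=50):
    X = np.asarray(sample_descriptors)   # one row per line-segment sample
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    # The dictionary pairs each sample descriptor with its cluster category.
    return X, km.labels_
```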
When obtaining the category of a line segment, its descriptor is compared with the descriptors in the environment dictionary. Specifically, the descriptor of the line segment is matched against the line segment samples in the environment dictionary with a K-nearest-neighbour (KNN) style search. First, the adjacent range space of the descriptor is obtained from the descriptor and a preset distance; the dimension of this space equals the dimension of the descriptor. For example, when the local region is divided into three sub-regions and the pixel features comprise an R value, a G value and a B value, the descriptor is nine-dimensional, and so is the adjacent range space. Then the descriptors in the environment dictionary that lie inside the adjacent range space are collected, and the category containing the largest number of these descriptors is taken as the category of the line segment.
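A hedged sketch of this matching step: collect the dictionary descriptors within a preset distance of the query descriptor and vote by category. The Euclidean metric and the radius value are assumptions; the application only speaks of an adjacent range space defined by a preset distance.

```python
from collections import Counter
import numpy as np

def match_category(descriptor, dict_descriptors, dict_labels, radius=20.0):
    dists = np.linalg.norm(dict_descriptors - descriptor, axis=1)
    hits = np.asarray(dict_labels)[dists <= radius]   # samples in the adjacent range space
    if hits.size == 0:
        return None                                   # no sample close enough
    return Counter(hits.tolist()).most_common(1)[0][0]  # majority category
```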
106: and obtaining the line characteristics of the line segment according to the coordinate information and the category of the line segment.
The coordinate information and the category of the line segment are combined in order to obtain the line feature of the line segment, which can be expressed as: line_i = (start_x, start_y, end_x, end_y, group_id_rgb), where (start_x, start_y) denotes the start point coordinates and (end_x, end_y) denotes the end point coordinates.
In other embodiments, to further improve the accuracy of the line segment feature description, a second category of the line segment is also obtained based on the variation of the pixel features across the sub-regions. First, the color values of the sub-regions are compared in a fixed order, the same for every local region: the second sub-region with the first, the third with the second, and so on, up to the last sub-region with the second-to-last, and finally the last sub-region with the first. If the color value of the later sub-region in a comparison is larger, a first value is assigned; otherwise a second value is assigned. The first value and the second value may be any suitable characters, such as 1 and 0. The obtained first and second values are then arranged in order to form the data that constitutes the second category; this order may be the same as the comparison order. Specifically, the data may be binary, with the obtained first and second values taken in the fixed order as its bits.
When the pixel features are RGB means, the R, G and B values are compared separately: if r_{i+1} > r_i the bit is 1, otherwise 0; likewise for g and b. In the embodiment shown in fig. 5, if r2 > r1, b2 > b1, g2 > g1, r3 < r2, b3 < b2, g3 < g2, r3 > r1, b3 > b1 and g3 > g1, the second category is 111000111. Denoting the second category by group_id_fast, the line feature can be expressed as: line_i = (start_x, start_y, end_x, end_y, group_id_fast, group_id_rgb).
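A sketch of the second category and the assembled line feature. The bit order (r, g, b per adjacent-band comparison, ending with the wrap-around last-versus-first comparison) is one fixed choice consistent with the Fig. 5 example; representing the code as a bit string is an assumption for readability.

```python
import numpy as np

def second_category(band_desc, n_bands=3):
    bands = np.asarray(band_desc).reshape(n_bands, 3)       # rows: (r, g, b) per band
    pairs = [(i + 1, i) for i in range(n_bands - 1)] + [(n_bands - 1, 0)]
    return "".join("1" if bands[a, c] > bands[b, c] else "0"
                   for a, b in pairs for c in range(3))     # e.g. "111000111"

def line_feature(segment, band_desc, group_id_rgb):
    start_x, start_y, end_x, end_y = segment
    return (start_x, start_y, end_x, end_y,
            second_category(band_desc), group_id_rgb)       # line_i
```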
According to the feature representation method, machine and computer-readable storage medium of the environment image, a line segment is extracted from the environment image, the local area within a preset range around it is obtained, and the descriptor of the line segment is derived from the pixel features of that local area. The descriptor is matched against an environment dictionary to obtain the category of the line segment, and the line feature of the line segment is then obtained from its coordinate information and category. When this feature representation method is applied to image matching in a structurally simple scene with many lines, the matching can be performed on line features with higher accuracy than feature-point matching. Representing the line feature by the segment's coordinate information together with its category describes the segment more precisely, which further improves matching precision. Moreover, this sparse line-feature description greatly reduces the required storage space.
Accordingly, as shown in fig. 6, an embodiment of the present invention further provides an apparatus for characterizing an environment image, which can be applied to any machine, such as the robot 10 shown in fig. 1 or fig. 2, where the apparatus 600 for characterizing an environment image includes:
an image obtaining module 601, configured to obtain an environment image;
an identifying module 602, configured to perform image identification on the environment image to obtain a line segment in the environment image;
a local area obtaining module 603, configured to obtain, based on the line segment, a local area in a preset range around the line segment;
a descriptor obtaining module 604, configured to obtain pixel characteristics of a local area of the line segment, and obtain a descriptor of the line segment according to the pixel characteristics;
a category obtaining module 605, configured to match the descriptor of the line segment with a pre-obtained environment dictionary to obtain a category of the line segment, where the environment dictionary includes at least two line segment samples and the descriptor and the category corresponding to the line segment samples;
a line feature obtaining module 606, configured to obtain a line feature of the line segment according to the coordinate information and the category of the line segment.
The embodiment of the invention extracts the line segments in the environment image and derives their line features from them. When applied to image matching in a structurally simple scene with many lines, the matching can be performed on line features with high accuracy, and the sparse line-feature description greatly reduces the required storage space.
In some embodiments, the apparatus 600 for representing features of an environment image further comprises an environment dictionary obtaining module 607 configured to:
acquiring at least two environmental image samples;
performing image recognition on the at least two environmental image samples to obtain line segment samples in the at least two environmental image samples;
obtaining a local area in a preset range around the line segment sample based on the line segment sample;
acquiring pixel characteristics of a local area of the line segment sample, and acquiring a descriptor of the line segment sample according to the pixel characteristics;
and clustering the descriptors of the line segment samples to obtain at least one category so as to obtain the descriptors and the categories corresponding to the line segment samples.
In some embodiments, the descriptor obtaining module 604 is specifically configured to:
dividing the local area of the line segment into at least three sub-areas, and obtaining the pixel characteristics corresponding to each sub-area;
and obtaining a descriptor of the line segment according to the pixel characteristics of each sub-area in the local area of the line segment.
In some embodiments, the line characteristics of the line segment further include a second category;
the category acquisition module 605 is further configured to:
and comparing the pixel characteristics of each subarea in the local area of the line segment, and obtaining a second category of the line segment based on the change of the pixel characteristics of each subarea.
In some embodiments, the descriptor obtaining module 604 is specifically configured to:
the local area is divided into at least three sub-areas along a vertical direction of the line segment.
In some embodiments, the pixel characteristic comprises a first pixel value, a second pixel value, and a third pixel value;
the category acquisition module 605 is further configured to:
comparing the first pixel value, the second pixel value and the third pixel value of each sub-area in the local area in a fixed order, and assigning a first value if the pixel value of the adjacent sub-area is larger, or a second value otherwise;
and composing the obtained first values and second values into data in a fixed order, the data serving as the second category.
In some embodiments, the first pixel value corresponding to the sub-region is an R average value of the sub-region, the second pixel value corresponding to the sub-region is a G average value of the sub-region, and the third pixel value corresponding to the sub-region is a B average value of the sub-region.
In some embodiments, the local area acquisition module 603 is further configured to:
and if the local area exceeds the image range of the environment image, discarding the local area.
It should be noted that the above-mentioned apparatus can execute the method provided by the embodiments of the present application, and has corresponding functional modules and beneficial effects for executing the method. For technical details which are not described in detail in the device embodiments, reference is made to the methods provided in the embodiments of the present application.
Fig. 8 is a schematic diagram of the hardware structure of the controller in a machine, taking a robot as the example machine. As shown in fig. 8, the controller 13 includes:
one or more processors 131 and a memory 132; one processor 131 is taken as an example in fig. 8.
The processor 131 and the memory 132 may be connected by a bus or by other means; connection by a bus is taken as an example in fig. 8.
The memory 132, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the feature representation method of the environment image in the embodiments of the present application (for example, the image acquisition module 601 shown in fig. 6). By running the non-volatile software programs, instructions and modules stored in the memory 132, the processor 131 executes the various functional applications and data processing of the controller, i.e. implements the feature representation method of the environment image of the above method embodiments.
The memory 132 may include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required for at least one function, and the data storage area may store data created through use of the feature representation device of the environment image, and the like. Further, the memory 132 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 132 may optionally include memory located remotely from the processor 131, which may be connected to the robot over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 132 and, when executed by the one or more processors 131, perform the feature representation method of the environment image of any of the above method embodiments, for example performing method steps 101 to 106 in fig. 3a and method steps 101a to 105a in fig. 3b, and realizing the functions of modules 601-606 in fig. 6 and modules 601-607 in fig. 7.
The product can execute the method provided by the embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the methods provided in the embodiments of the present application.
Embodiments of the present application provide a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors (for example one of the processors 131 in fig. 8), cause the one or more processors to perform the feature representation method of the environment image of any of the above method embodiments, for example performing method steps 101 to 106 in fig. 3a and method steps 101a to 105a in fig. 3b, and realizing the functions of modules 601-606 in fig. 6 and modules 601-607 in fig. 7.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that the embodiments may be implemented by software plus a general hardware platform, or by hardware alone. Those skilled in the art will also understand that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; within the idea of the invention, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (11)

1. A method for characterizing an environmental image, the method comprising:
acquiring an environment image;
performing image recognition on the environment image to obtain line segments in the environment image;
obtaining a local area in a preset range around the line segment based on the line segment;
acquiring pixel characteristics of a local area of the line segment, and acquiring a descriptor of the line segment according to the pixel characteristics;
matching the descriptors of the line segments with a pre-acquired environment dictionary to obtain the categories of the line segments, wherein the environment dictionary comprises at least two line segment samples and the descriptors and the categories corresponding to the line segment samples;
and obtaining the line characteristics of the line segment according to the coordinate information and the category of the line segment.
2. The method of characterizing an environmental image according to claim 1, further comprising obtaining an environmental dictionary:
acquiring at least two environmental image samples;
performing image recognition on the at least two environmental image samples to obtain line segment samples in the at least two environmental image samples;
obtaining a local area in a preset range around the line segment sample based on the line segment sample;
acquiring pixel characteristics of a local area of the line segment sample, and acquiring a descriptor of the line segment sample according to the pixel characteristics;
and clustering the descriptors of the line segment samples to obtain at least one category so as to obtain the descriptors and the categories corresponding to the line segment samples.
3. The method for representing the features of the environment image according to claim 1 or 2, wherein the obtaining the pixel features of the local area of the line segment and obtaining the descriptor of the line segment according to the pixel features comprises:
dividing the local area of the line segment into at least three sub-areas, and obtaining the pixel characteristics corresponding to each sub-area;
and obtaining a descriptor of the line segment according to the pixel characteristics of each sub-area in the local area of the line segment.
4. The method according to claim 3, wherein the line features of the line segment further include a second category;
the method further comprises the following steps:
and comparing the pixel characteristics of each subarea in the local area of the line segment, and obtaining a second category of the line segment based on the change of the pixel characteristics of each subarea.
5. The method of characterizing an image of an environment according to claim 4, wherein dividing the local region into at least three sub-regions comprises:
the local area is divided into at least three sub-areas along a vertical direction of the line segment.
6. The method according to claim 4, wherein the pixel feature includes a first pixel value, a second pixel value, and a third pixel value;
then, the comparing the pixel characteristics of each sub-region in the local region of the line segment, and obtaining the second category of the line segment based on the change of the pixel characteristics of each sub-region includes:
comparing a first pixel value, a second pixel value and a third pixel value corresponding to each sub-area in the local area according to a fixed sequence, if the pixel value of the adjacent sub-area is larger, giving a first value, otherwise, giving a second value;
and forming data by the obtained first value and the second value according to a fixed sequence, and taking the data as the second category.
7. The method according to claim 6, wherein the first pixel value corresponding to the sub-region is an R-average value of the sub-region, the second pixel value corresponding to the sub-region is a G-average value of the sub-region, and the third pixel value corresponding to the sub-region is a B-average value of the sub-region.
8. The method of characterizing an image of an environment according to claim 1 or 2, wherein after acquiring the local region, the method further comprises:
and if the local area exceeds the image range of the environment image, discarding the local area.
9. The method for representing features of an environment image according to claim 1 or 2, wherein the matching the descriptors of the line segments with a pre-acquired environment dictionary to obtain the categories of the line segments comprises:
obtaining the adjacent range space of the descriptor of the line segment according to the descriptor of the line segment and a preset distance;
and obtaining descriptors of the line segment samples in the environment dictionary, wherein the line segment samples are located in the adjacent range space, and taking the category which contains the largest number of descriptors in the descriptors of the line segment samples in the adjacent range space as the category of the line segment.
10. A machine, characterized in that it comprises:
a machine main body;
a controller built in the machine body;
the controller includes:
at least one processor, and
a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform the method of any of claims 1-9.
11. A non-transitory computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a machine, cause the machine to perform the method of any one of claims 1-9.
CN201910904694.4A 2019-09-24 2019-09-24 Method, machine and storage medium for representing characteristics of environment image Active CN110688936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910904694.4A CN110688936B (en) 2019-09-24 2019-09-24 Method, machine and storage medium for representing characteristics of environment image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910904694.4A CN110688936B (en) 2019-09-24 2019-09-24 Method, machine and storage medium for representing characteristics of environment image

Publications (2)

Publication Number Publication Date
CN110688936A true CN110688936A (en) 2020-01-14
CN110688936B CN110688936B (en) 2021-03-02

Family

ID=69110316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910904694.4A Active CN110688936B (en) 2019-09-24 2019-09-24 Method, machine and storage medium for representing characteristics of environment image

Country Status (1)

Country Link
CN (1) CN110688936B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112034837A (en) * 2020-07-16 2020-12-04 珊口(深圳)智能科技有限公司 Method for determining working environment of mobile robot, control system and storage medium
CN113673276A (en) * 2020-05-13 2021-11-19 广东博智林机器人有限公司 Target object identification docking method and device, electronic equipment and storage medium
CN114764242A (en) * 2020-12-31 2022-07-19 清华大学 Sampling robot, robot system for sampling and detecting cargos and detection method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592277A (en) * 2011-12-12 2012-07-18 河南理工大学 Curve automatic matching method based on gray subset division
CN104123533A (en) * 2013-04-26 2014-10-29 株式会社电装 Object detection apparatus
CN103870847A (en) * 2014-03-03 2014-06-18 中国人民解放军国防科学技术大学 Detecting method for moving object of over-the-ground monitoring under low-luminance environment
US20150279048A1 (en) * 2014-03-26 2015-10-01 Postech Academy - Industry Foundation Method for generating a hierarchical structured pattern based descriptor and method and device for recognizing object using the same
CN105118062A (en) * 2015-09-02 2015-12-02 大连理工大学 Single end-point characteristic description based line segment matching method
US20190018423A1 (en) * 2017-07-12 2019-01-17 Mitsubishi Electric Research Laboratories, Inc. Barcode: Global Binary Patterns for Fast Visual Inference
CN109426793A (en) * 2017-09-01 2019-03-05 中兴通讯股份有限公司 A kind of image behavior recognition methods, equipment and computer readable storage medium
CN108256531A (en) * 2018-01-05 2018-07-06 上海交通大学 A kind of sub- building method of local feature description based on image color information and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
余东平 et al.: "Dictionary adaptation method for multi-target passive localization with compressed sensing", Journal of Electronics & Information Technology *
舒凯翔: "Research and design of 3D map construction and navigation *** for mobile robots based on RGB-D images", China Masters' Theses Full-text Database (Information Science and Technology) *
谢晓佳: "Binocular vision SLAM method based on combined point and line features", China Masters' Theses Full-text Database (Information Science and Technology) *

Also Published As

Publication number Publication date
CN110688936B (en) 2021-03-02

Similar Documents

Publication Publication Date Title
CN110688936B (en) Method, machine and storage medium for representing characteristics of environment image
US10217221B2 (en) Place recognition algorithm
CN109658454B (en) Pose information determination method, related device and storage medium
CN110084243B (en) File identification and positioning method based on two-dimensional code and monocular camera
CN110176032B (en) Three-dimensional reconstruction method and device
CN109410316B (en) Method for three-dimensional reconstruction of object, tracking method, related device and storage medium
EP3427186A1 (en) Systems and methods for normalizing an image
CN107170011B (en) robot vision tracking method and system
CN108171715B (en) Image segmentation method and device
US11922658B2 (en) Pose tracking method, pose tracking device and electronic device
CN110021029B (en) Real-time dynamic registration method and storage medium suitable for RGBD-SLAM
CN109543634B (en) Data processing method and device in positioning process, electronic equipment and storage medium
CN109753945B (en) Target subject identification method and device, storage medium and electronic equipment
CN111476894A (en) Three-dimensional semantic map construction method and device, storage medium and electronic equipment
CN114029946A (en) Method, device and equipment for guiding robot to position and grab based on 3D grating
CN111738033A (en) Vehicle driving information determination method and device based on plane segmentation and vehicle-mounted terminal
WO2022228391A1 (en) Terminal device positioning method and related device therefor
CN112265463A (en) Control method and device of self-moving equipment, self-moving equipment and medium
Ji et al. An evaluation of conventional and deep learning‐based image‐matching methods on diverse datasets
CN110673607A (en) Feature point extraction method and device in dynamic scene and terminal equipment
CN111914890B (en) Image block matching method between images, image registration method and product
EP3598388B1 (en) Method and apparatus for visual odometry, and non-transitory computer-readable recording medium
CN111656404B (en) Image processing method, system and movable platform
JP2014102805A (en) Information processing device, information processing method and program
CN116295354A (en) Unmanned vehicle active global positioning method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 518000 1701, building 2, Yinxing Zhijie, No. 1301-72, sightseeing Road, Xinlan community, Guanlan street, Longhua District, Shenzhen, Guangdong Province

Patentee after: Shenzhen Yinxing Intelligent Group Co.,Ltd.

Address before: 518000 building A1, Yinxing hi tech Industrial Park, Guanlan street, Longhua District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen Silver Star Intelligent Technology Co.,Ltd.
