CN112434659B - Reflection characteristic point eliminating method, device, robot and readable storage medium - Google Patents

Reflection characteristic point eliminating method, device, robot and readable storage medium

Info

Publication number
CN112434659B
CN112434659B (application CN202011440515.5A)
Authority
CN
China
Prior art keywords
image
feature points
black
gray
gray level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011440515.5A
Other languages
Chinese (zh)
Other versions
CN112434659A (en)
Inventor
全王飞
赖有仿
刘志超
赵勇胜
王涛
黄明强
何婉君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ubtech Technology Co ltd
Original Assignee
Shenzhen Ubtech Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ubtech Technology Co ltd
Priority to CN202011440515.5A
Publication of CN112434659A
Application granted
Publication of CN112434659B
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)
  • Image Processing (AREA)

Abstract

The application provides a reflective feature point elimination method, a device, a robot and a readable storage medium. The method is applied to visual pose calculation of a robot and comprises the following steps: converting the original image to grayscale to obtain a grayscale image; preprocessing the grayscale image to obtain a smoothed preprocessed image with a reduced level of detail; comparing the preprocessed image with the original image using a preset algorithm to generate a black-and-white mask image, wherein the black mask area of the black-and-white mask image is the reflective area; extracting feature points from the grayscale image, removing the feature points on the black mask area, and judging whether the number of remaining feature points is smaller than a preset number; and, when the number of feature points is determined to be smaller than the preset number, continuing to extract feature points from the grayscale image and removing those on the black mask area until the number of feature points equals the preset number. The method can rapidly locate the reflective area on the original image and remove the reflective feature points, effectively improving pose calculation accuracy.

Description

Reflection characteristic point eliminating method, device, robot and readable storage medium
Technical Field
The application relates to the technical field of robots, in particular to a method and a device for eliminating reflection characteristic points, a robot and a readable storage medium.
Background
In the prior art, robot pose calculation is performed by acquiring images through the robot's vision, i.e. through a camera; this approach has the advantages of low cost, rich information and high robustness. However, strong reflections occur easily in the environment, for example from a smooth floor or glass, causing reflection points or large light spots to appear in the image acquired by the camera, which seriously degrades the accuracy of pose calculation based on the subsequently extracted feature points.
Disclosure of Invention
In view of the above problems, the present application provides a method, an apparatus, a robot and a readable storage medium for removing reflective feature points, so as to quickly locate a reflective region on an original image, remove reflective feature points, and effectively improve pose calculation accuracy.
In order to achieve the above purpose, the present application adopts the following technical scheme:
a reflection characteristic point eliminating method is applied to the vision pose calculation of a robot and comprises the following steps:
carrying out gray processing on the original image to obtain a gray image;
preprocessing the gray level image to obtain a preprocessed image with reduced detail level and smoothed;
comparing the preprocessed image with the original image by using a preset algorithm to generate a black-and-white mask image, wherein a black mask area of the black-and-white mask image is a light reflecting area;
extracting characteristic points from the gray level image, removing the characteristic points from the black mask area, and judging whether the number of the residual characteristic points after removing is smaller than a preset number;
and when the number of the feature points is determined to be smaller than the preset number, continuing to extract the feature points from the gray level image and eliminating the feature points on the black mask area until the number of the feature points is equal to the preset number.
Preferably, in the reflective feature point elimination method, preprocessing the grayscale image to obtain a smoothed preprocessed image with a reduced level of detail comprises:
performing denoising with a preset filter on the grayscale image to obtain a denoised image;
and processing the denoised image with a preset blurring algorithm to obtain the preprocessed image.
Preferably, in the reflective feature point elimination method, the preset filter comprises at least one of mean filtering, Gaussian filtering, median filtering and bilateral filtering; and the preset blurring algorithm comprises at least one of Gaussian blur and the filter2D function.
Preferably, in the reflective feature point elimination method, comparing the preprocessed image with the original image using a preset algorithm comprises:
performing an image subtraction operation on the preprocessed image and the original image to obtain an initial mask image;
and performing erosion and dilation with a preset value on the black mask area of the initial mask image to obtain the black-and-white mask image.
Preferably, the reflective feature point elimination method further comprises, before extracting feature points from the grayscale image:
when the original image is determined not to be the initial frame image, locating the corresponding feature points on the current grayscale image by optical flow tracking, using the feature points extracted from the previous frame's grayscale image.
Preferably, in the reflective feature point elimination method, when the robot is a binocular robot, the method further comprises:
locating the corresponding feature points on the other grayscale image by optical flow tracking, based on the feature points extracted from the white mask area of the current grayscale image.
Preferably, the reflective feature point elimination method further comprises:
inputting the preset number of feature points into a tightly coupled visual-inertial framework to obtain the corresponding robot pose output.
The application also provides a reflective feature point elimination device, applied to visual pose calculation of a robot, comprising:
an image grayscale processing module, used for converting the original image to grayscale to obtain a grayscale image;
an image preprocessing module, used for preprocessing the grayscale image to obtain a smoothed preprocessed image with a reduced level of detail;
an image mask covering module, used for comparing the preprocessed image with the original image using a preset algorithm to generate a black-and-white mask image, wherein the black mask area of the black-and-white mask image is the reflective area;
and a mask feature point elimination module, used for extracting feature points from the grayscale image, removing the feature points on the black mask area, and judging whether the number of feature points remaining after removal is smaller than a preset number; and, when the number of feature points is determined to be smaller than the preset number, continuing to extract feature points from the grayscale image and removing those on the black mask area until the number of feature points equals the preset number.
The application also provides a robot, comprising a memory and a processor, wherein the memory stores a computer program and the processor runs the computer program to cause the robot to perform the above reflective feature point elimination method.
The application also provides a readable storage medium storing a computer program which, when run on a processor, performs the above reflective feature point elimination method.
The application thus provides a reflective feature point elimination method, applied to visual pose calculation of a robot, comprising: converting the original image to grayscale to obtain a grayscale image; preprocessing the grayscale image to obtain a smoothed preprocessed image with a reduced level of detail; comparing the preprocessed image with the original image using a preset algorithm to generate a black-and-white mask image, wherein the black mask area of the black-and-white mask image is the reflective area; extracting feature points from the grayscale image, removing the feature points on the black mask area, and judging whether the number of feature points remaining after removal is smaller than a preset number; and, when the number of feature points is determined to be smaller than the preset number, continuing to extract feature points from the grayscale image and removing those on the black mask area until the number of feature points equals the preset number. By comparing the smoothed, detail-reduced preprocessed image with the original image, the reflective area on the original image can be rapidly located; generating a mask over that area marks the reflective feature points so that they can be removed, effectively improving pose calculation accuracy.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings required for the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope. Like elements are numbered alike in the various figures.
Fig. 1 is a flowchart of a method for eliminating reflective feature points provided in embodiment 1 of the present application;
fig. 2 is a flowchart of a method for eliminating reflective feature points provided in embodiment 1 of the present application;
FIG. 3 is a flow chart of preprocessing an image provided in embodiment 2 of the present application;
FIG. 4 is a flowchart of generating a black-and-white mask image provided in embodiment 2 of the present application;
fig. 5 is a flowchart of a method for eliminating reflective feature points provided in embodiment 3 of the present application;
fig. 6 is a flowchart of a method for eliminating reflective feature points provided in embodiment 4 of the present application;
fig. 7 is a schematic structural diagram of a reflective feature point removing device according to embodiment 5 of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments.
The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application.
The terms "comprises," "comprising," "including," or any other variation thereof, are intended to cover a specific feature, number, step, operation, element, component, or combination of the foregoing, which may be used in various embodiments of the present application, and are not intended to first exclude the presence of or increase the likelihood of one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
Furthermore, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of the application belong. Terms such as those defined in commonly used dictionaries will be interpreted as having a meaning consistent with their meaning in the context of the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in connection with the various embodiments of the application.
Example 1
Fig. 1 is a flowchart of a method for eliminating reflective feature points provided in embodiment 1 of the present application, where the method is applied to the calculation of the visual pose of a robot, and includes the following steps:
step S11: and carrying out gray processing on the original image to obtain a gray image.
In the embodiment of the application, the robot pose calculation is performed by acquiring the image through the vision of the robot, namely the camera, and the method has the characteristics of low cost, abundant information and high robustness. However, the strong reflection phenomenon is easy to occur in the environment, for example, the strong reflection phenomenon is caused by smooth ground or glass, or the reflection point or larger light spot appears in the image acquired by the camera, so that the accuracy of pose calculation is seriously affected by the follow-up extraction of the feature points, and the pose calculation is required to be performed after the feature points are extracted and the reflection feature points in the reflection feature points are removed.
In the embodiment of the application, the gray level processing is firstly carried out on the collected original image for pose calculation, so that a gray level image is obtained, the color detail level in the original image is reduced, the efficiency of subsequent image processing is improved, the overall reflection characteristic point elimination is accelerated, and the pose calculation efficiency is improved. An algorithm or an application program for gray scale processing may be preset in the robot, for example, a gray scale processing application program may be set in the robot, and the obtained original image for pose calculation of each frame may be input to the application program first, so as to obtain a corresponding gray scale image.
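A minimal OpenCV sketch of step S11 follows; the library choice, the file name and the variable names are illustrative assumptions, since the patent only specifies the grayscale operation itself.

```python
import cv2

# Step S11 sketch: convert one captured camera frame to grayscale.
# "frame.png" stands in for an original image used for pose calculation.
frame = cv2.imread("frame.png")                 # original colour image (BGR)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # grayscale image for the later steps
```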
Step S12: preprocessing the grayscale image to obtain a smoothed preprocessed image with a reduced level of detail.
In the embodiment of the application, the principle of searching for reflection points in the image is mainly a comparison of brightness, so preprocessing can further reduce the level of detail of the grayscale image, reduce its noise, and smooth it. For example, Gaussian blur can reduce the detail level and noise of the grayscale image; the blurred image resembles the scene observed through frosted glass, which facilitates the subsequent comparison or search for reflection points or regions.
In the embodiment of the present application, this preprocessing of the grayscale image can likewise be performed by a preset algorithm or application program. For example, an application based on the Gaussian blur algorithm can be set in the robot; once the grayscale image is obtained, it is input to this application, which outputs the preprocessed image.
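A minimal sketch of the step S12 preprocessing, assuming median filtering for denoising and Gaussian blur for smoothing (two of the options named in embodiment 2 below); the kernel sizes are illustrative presets, not values from the patent.

```python
import cv2

# Step S12 sketch: denoise, then blur, so that only coarse brightness
# structure survives (the "frosted glass" effect described above).
denoised = cv2.medianBlur(gray, 5)              # suppress pixel-level noise
pre = cv2.GaussianBlur(denoised, (21, 21), 0)   # smoothed, detail-reduced image
```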
Step S13: comparing the preprocessed image with the original image using a preset algorithm to generate a black-and-white mask image, wherein the black mask area of the black-and-white mask image is the reflective area.
In the embodiment of the application, the preprocessed image is compared with the original image by a preset algorithm to find the areas where the preprocessed image is darker than the original image, i.e. the reflection points or reflective areas, and a corresponding black-and-white mask image is generated. A black mask is produced at the corresponding position of each reflection point or reflective area, and the reflections are marked by this mask.
Step S14: extracting feature points from the grayscale image, removing the feature points on the black mask area, and judging whether the number of feature points remaining after removal is smaller than a preset number.
Step S15: when the number of feature points is determined to be smaller than the preset number, continuing to extract feature points from the grayscale image and removing those on the black mask area until the number of feature points equals the preset number.
In the embodiment of the application, when extracting the feature points used for pose calculation from the grayscale image, the preset number of feature points can be extracted first, and the feature points falling on the black mask area are then removed according to the position of the mask. Additional feature points are extracted to replace the removed ones and are culled in the same way, until the number of feature points reaches the preset number, i.e. the number required for pose calculation.
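A minimal sketch of this extract-cull-replenish loop, assuming OpenCV's goodFeaturesToTrack as the feature extractor (the patent does not name one), a mask convention of 0 for the reflective area and 255 elsewhere, and an illustrative target count and detector parameters.

```python
import cv2

TARGET = 150  # preset number of feature points (illustrative value)

def cull_black(points, mask):
    """Drop feature points that fall on the black (0) mask area."""
    return [(x, y) for (x, y) in points if mask[int(y), int(x)] != 0]

# First extraction of the preset number of feature points, then culling.
found = cv2.goodFeaturesToTrack(gray, TARGET, 0.01, 10)
pts = cull_black([tuple(p) for p in found.reshape(-1, 2)], mask) if found is not None else []

# Replenish until the preset number is reached, restricting new candidates
# to the white mask area and away from points already kept.
while len(pts) < TARGET:
    avoid = mask.copy()
    for (x, y) in pts:
        cv2.circle(avoid, (int(x), int(y)), 10, 0, -1)  # exclude kept points
    extra = cv2.goodFeaturesToTrack(gray, TARGET - len(pts), 0.01, 10, mask=avoid)
    if extra is None:
        break  # no candidates left outside the reflective areas
    pts += [tuple(p) for p in extra.reshape(-1, 2)]
```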
In the embodiment of the application, the reflective area on the original image can be rapidly located by comparing the smoothed, detail-reduced preprocessed image with the original image, and the reflective feature points can be marked by generating a mask over the reflective area, so that they are removed and pose calculation accuracy is effectively improved.
Fig. 2 is a flowchart of a method for eliminating reflective feature points provided in embodiment 1 of the present application, and further includes the following steps:
step S16: and inputting the characteristic points with the number equal to the preset number into the visual inertial navigation tight coupling frame to obtain corresponding robot pose output.
In the embodiment of the application, a visual inertial navigation tight coupling frame is deployed in the robot, the eliminated and processed preset number of characteristic points and inertial sensor data are input into the visual inertial navigation tight coupling frame, and the pose result of the current robot can be obtained after joint solution is carried out.
Example 2
Fig. 3 is a flowchart of preprocessing an image provided in embodiment 2 of the present application, including the following steps:
step S31: and carrying out denoising processing of preset filtering on the gray level image to obtain a denoised image.
Step S32: and carrying out preset fuzzy algorithm processing on the denoised image to obtain the preprocessed image.
In the embodiment of the application, the preset filtering comprises at least one filtering mode of mean filtering, gaussian filtering, median filtering and bilateral filtering; the preset blurring algorithm comprises at least one algorithm of Gaussian blurring and a filter2D function. The filter2D function is a filter2D function of opencv (computer vision and machine learning software library), namely a linear filter, the size of a convolution kernel is defined, the image after denoising is input, and the preprocessed image after convolution blurring is output.
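A minimal sketch of the filter2D variant of the blurring step, assuming a normalized box kernel as the linear filter; the 15x15 kernel size is an illustrative preset.

```python
import cv2
import numpy as np

# Blur by convolving the denoised image with a mean (box) kernel.
k = 15
kernel = np.ones((k, k), np.float32) / (k * k)
pre = cv2.filter2D(denoised, -1, kernel)  # ddepth=-1 keeps the input image depth
```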
Fig. 4 is a flowchart for generating a black-and-white mask image according to embodiment 2 of the present application, which includes the following steps:
step S41: and performing image subtraction operation on the preprocessed image and the original image to obtain an initial mask image.
In the embodiment of the application, through directly carrying out image subtraction operation on the preprocessed image and the original image, black spots are generated in the reflective dot area, the gray value is 0, the rest areas are blank, and the gray value is 255.
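A minimal sketch of step S41; since a plain pixel subtraction alone would not yield the 0/255 image described, this sketch adds an inverse threshold, which is an assumption about the preset algorithm, and the threshold value is illustrative.

```python
import cv2

# Reflective spots are pixels where the original grayscale image is
# noticeably brighter than the smoothed, preprocessed image.
diff = cv2.subtract(gray, pre)  # bright spots survive, the rest clips toward 0
_, initial_mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY_INV)
# initial_mask: gray value 0 (black) on reflective spots, 255 (white) elsewhere
```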
Step S42: performing erosion and dilation with a preset value on the black mask area of the initial mask image to obtain the black-and-white mask image.
In the embodiment of the application, since feature points are generally extracted at the various edges in an image, the black mask region of the initial mask must also undergo erosion and dilation by a preset value so that the black mask region is enlarged. This prevents feature points from being selected at the edge of the reflective region, which would otherwise degrade pose calculation accuracy.
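A minimal sketch of step S42; because the mask is black-on-white, the black region is grown here by eroding the mask, with an illustrative kernel size standing in for the preset value.

```python
import cv2
import numpy as np

# Enlarge the black (0) mask area so features near the highlight edges are
# also rejected; eroding a white-background mask grows its black regions.
kernel = np.ones((9, 9), np.uint8)
mask = cv2.erode(initial_mask, kernel, iterations=1)  # final black-and-white mask
```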
Example 3
Fig. 5 is a flowchart of a method for eliminating reflective feature points provided in embodiment 3 of the present application, including the following steps:
step S51: and carrying out gray processing on the original image to obtain a gray image.
This step corresponds to the above step S11, and will not be described here again.
Step S52: and preprocessing the gray level image to obtain a preprocessed image with reduced detail level and smooth.
This step corresponds to the above step S12, and will not be described here again.
Step S53: and comparing the preprocessed image with the original image by using a preset algorithm to generate a black-and-white mask image, wherein a black mask area of the black-and-white mask image is a light reflecting area.
This step corresponds to the above step S13 and will not be described here again.
Step S54: when the original image is determined not to be the initial frame image, locating the corresponding feature points on the current grayscale image by optical flow tracking, using the feature points extracted from the previous frame's grayscale image.
Step S55: removing the feature points on the black mask area, and judging whether the number of remaining feature points is smaller than the preset number.
Step S56: when the number of feature points is determined to be smaller than the preset number, continuing to extract feature points from the grayscale image and removing those on the black mask area until the number of feature points equals the preset number.
In the embodiment of the application, since the brightness of the robot's visual environment is essentially unchanged and the displacement between two consecutively acquired original frames is small, the feature points of the next frame can be quickly found by optical flow tracking starting from the feature point positions in the previous frame, which reduces the amount of computation and increases the speed of the overall pose calculation. Some of the feature points obtained by optical flow tracking may fall on the black mask area; these are removed and replacements are extracted until the preset number is reached.
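A minimal sketch of step S54 using OpenCV's pyramidal Lucas-Kanade optical flow; the variable names for the previous frame and its feature points are assumptions.

```python
import cv2
import numpy as np

# Track the previous frame's feature points into the current grayscale image.
p0 = np.float32(prev_pts).reshape(-1, 1, 2)         # previous frame's feature points
p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
tracked = p1[status.flatten() == 1].reshape(-1, 2)  # successfully tracked points
# Points in `tracked` that land on the black mask area are culled in step S55.
```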
Example 4
Fig. 6 is a flowchart of a method for eliminating reflective feature points provided in embodiment 4 of the present application, where the method is applied to the visual pose calculation of a binocular robot, and includes the following steps:
step S61: and carrying out gray processing on the original image to obtain a gray image.
This step corresponds to the above step S11, and will not be described here again.
Step S62: and preprocessing the gray level image to obtain a preprocessed image with reduced detail level and smooth.
This step corresponds to the above step S12, and will not be described here again.
Step S63: and comparing the preprocessed image with the original image by using a preset algorithm to generate a black-and-white mask image, wherein a black mask area of the black-and-white mask image is a light reflecting area.
This step corresponds to the above step S13 and will not be described here again.
Step S64: and extracting characteristic points from the gray level image, removing the characteristic points from the black mask area, and judging whether the number of the residual characteristic points after removing is smaller than a preset number.
This step corresponds to the above step S14 and will not be described here again.
Step S65: and when the number of the feature points is determined to be smaller than the preset number, continuing to extract the feature points from the gray level image and eliminating the feature points on the black mask area until the number of the feature points is equal to the preset number.
This step corresponds to the above step S15 and will not be described here again.
Step S66: locating the corresponding feature points on the other grayscale image by optical flow tracking, based on the feature points extracted from the white mask area of the current grayscale image.
In the embodiment of the application, in a binocular robot, i.e. a robot equipped with two closely spaced image acquisition devices, the corresponding feature points can also be quickly found on the original image synchronously acquired by the other acquisition device through optical flow tracking, which reduces the amount of computation and increases the speed of the overall pose calculation.
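A minimal sketch of step S66, reusing the same Lucas-Kanade tracking between the two synchronized views; the left/right variable names are assumptions.

```python
import cv2
import numpy as np

# Locate the left image's (white-mask-area) feature points in the right view.
p_left = np.float32(pts).reshape(-1, 1, 2)
p_right, status, _err = cv2.calcOpticalFlowPyrLK(left_gray, right_gray, p_left, None)
stereo_matched = p_right[status.flatten() == 1].reshape(-1, 2)
```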
Example 5
Fig. 7 is a schematic structural diagram of a reflective feature point removing device according to embodiment 5 of the present application.
The reflective feature point elimination device 700 comprises:
an image grayscale processing module 710, configured to convert an original image to grayscale to obtain a grayscale image;
an image preprocessing module 720, configured to preprocess the grayscale image to obtain a smoothed preprocessed image with a reduced level of detail;
an image mask covering module 730, configured to compare the preprocessed image with the original image using a preset algorithm and generate a black-and-white mask image, where the black mask area of the black-and-white mask image is the reflective area;
and a mask feature point elimination module 740, configured to extract feature points from the grayscale image, remove the feature points on the black mask area, and judge whether the number of remaining feature points is smaller than a preset number; and, when the number of feature points is determined to be smaller than the preset number, continue extracting feature points from the grayscale image and removing those on the black mask area until the number of feature points equals the preset number.
In the embodiment of the present application, for a more detailed functional description of each module, refer to the corresponding parts of the foregoing embodiments, which are not repeated here.
In addition, the application also provides a robot, comprising a memory and a processor. The memory can store a computer program, and by running this computer program the processor can cause the robot to perform the method described above, or the functions of each module in the reflective feature point elimination device.
The memory may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the robot (such as audio data and phonebooks). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The present embodiment also provides a readable storage medium storing a computer program for use in the robot described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may also be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems which perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules or units in various embodiments of the application may be integrated together to form a single part, or the modules may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a smartphone, a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. A reflective feature point elimination method, characterized in that it is applied to visual pose calculation of a robot and comprises the following steps:
converting the original image to grayscale to obtain a grayscale image;
preprocessing the grayscale image to obtain a smoothed preprocessed image with a reduced level of detail;
performing an image subtraction operation on the preprocessed image and the original image to obtain an initial mask image;
performing erosion and dilation with a preset value on the black mask area of the initial mask image to obtain a black-and-white mask image, wherein the black mask area of the black-and-white mask image is the reflective area;
extracting feature points from the grayscale image, removing the feature points on the black mask area, and judging whether the number of feature points remaining after removal is smaller than a preset number;
and, when the number of feature points is determined to be smaller than the preset number, continuing to extract feature points from the grayscale image and removing those on the black mask area until the number of feature points equals the preset number.
2. The reflective feature point elimination method according to claim 1, wherein preprocessing the grayscale image to obtain a smoothed preprocessed image with a reduced level of detail comprises:
performing denoising with a preset filter on the grayscale image to obtain a denoised image;
and processing the denoised image with a preset blurring algorithm to obtain the preprocessed image.
3. The reflective feature point elimination method according to claim 2, wherein the preset filter comprises at least one of mean filtering, Gaussian filtering, median filtering and bilateral filtering; and the preset blurring algorithm comprises at least one of Gaussian blur and the filter2D function.
4. The reflective feature point elimination method according to claim 1, further comprising, before extracting feature points from the grayscale image:
when the original image is determined not to be the initial frame image, locating the corresponding feature points on the current grayscale image by optical flow tracking, using the feature points extracted from the previous frame's grayscale image.
5. The reflective feature point elimination method according to claim 1, wherein when the robot is a binocular robot, the method further comprises:
locating the corresponding feature points on the other grayscale image by optical flow tracking, based on the feature points extracted from the white mask area of the current grayscale image.
6. The reflective feature point elimination method according to claim 1, further comprising:
inputting the preset number of feature points into a tightly coupled visual-inertial framework to obtain the corresponding robot pose output.
7. A reflective feature point elimination device, characterized in that it is applied to visual pose calculation of a robot and comprises:
an image grayscale processing module, used for converting the original image to grayscale to obtain a grayscale image;
an image preprocessing module, used for preprocessing the grayscale image to obtain a smoothed preprocessed image with a reduced level of detail;
an image mask covering module, used for performing an image subtraction operation on the preprocessed image and the original image to obtain an initial mask image, and for performing erosion and dilation with a preset value on the black mask area of the initial mask image to obtain a black-and-white mask image, wherein the black mask area of the black-and-white mask image is the reflective area;
and a mask feature point elimination module, used for extracting feature points from the grayscale image, removing the feature points on the black mask area, and judging whether the number of feature points remaining after removal is smaller than a preset number; and, when the number of feature points is determined to be smaller than the preset number, continuing to extract feature points from the grayscale image and removing those on the black mask area until the number of feature points equals the preset number.
8. A robot, comprising a memory and a processor, wherein the memory stores a computer program and the processor runs the computer program to cause the robot to perform the reflective feature point elimination method according to any one of claims 1 to 6.
9. A readable storage medium, characterized in that it stores a computer program which, when run on a processor, performs the reflective feature point elimination method according to any one of claims 1 to 6.
CN202011440515.5A 2020-12-07 2020-12-07 Reflection characteristic point eliminating method, device, robot and readable storage medium Active CN112434659B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011440515.5A CN112434659B (en) 2020-12-07 2020-12-07 Reflection characteristic point eliminating method, device, robot and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011440515.5A CN112434659B (en) 2020-12-07 2020-12-07 Reflection characteristic point eliminating method, device, robot and readable storage medium

Publications (2)

Publication Number Publication Date
CN112434659A CN112434659A (en) 2021-03-02
CN112434659B (en) 2023-09-05

Family

ID=74692405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011440515.5A Active CN112434659B (en) 2020-12-07 2020-12-07 Reflection characteristic point eliminating method, device, robot and readable storage medium

Country Status (1)

Country Link
CN (1) CN112434659B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628202B (en) * 2021-08-20 2024-03-19 美智纵横科技有限责任公司 Determination method, cleaning robot and computer storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106124034A (en) * 2016-09-07 2016-11-16 湖南科技大学 Thin-wall part operation mode based on machine vision test device and method of testing
WO2017036160A1 (en) * 2015-09-06 2017-03-09 广州广电运通金融电子股份有限公司 Glasses removal method for facial recognition
CN107845134A (en) * 2017-11-10 2018-03-27 浙江大学 A kind of three-dimensional rebuilding method of the single body based on color depth camera
CN109410215A (en) * 2018-08-02 2019-03-01 北京三快在线科技有限公司 Image processing method, device, electronic equipment and computer-readable medium
CN110070508A (en) * 2019-04-23 2019-07-30 西安交通大学 A kind of unsharp Enhancement Method based on threshold value and Linear Mapping
CN110930323A (en) * 2019-11-07 2020-03-27 华为技术有限公司 Method and device for removing light reflection of image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10559067B2 (en) * 2018-02-28 2020-02-11 Adobe Inc. Removal of shadows from document images while preserving fidelity of image contents

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017036160A1 (en) * 2015-09-06 2017-03-09 广州广电运通金融电子股份有限公司 Glasses removal method for facial recognition
CN106124034A (en) * 2016-09-07 2016-11-16 湖南科技大学 Thin-wall part operation mode based on machine vision test device and method of testing
CN107845134A (en) * 2017-11-10 2018-03-27 浙江大学 A kind of three-dimensional rebuilding method of the single body based on color depth camera
CN109410215A (en) * 2018-08-02 2019-03-01 北京三快在线科技有限公司 Image processing method, device, electronic equipment and computer-readable medium
CN110070508A (en) * 2019-04-23 2019-07-30 西安交通大学 A kind of unsharp Enhancement Method based on threshold value and Linear Mapping
CN110930323A (en) * 2019-11-07 2020-03-27 华为技术有限公司 Method and device for removing light reflection of image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Foreign Object Perception Methods and Detection Systems for a Pharmaceutical Visual Inspection Robot; Wu Chengzhong; China Doctoral Dissertations Full-text Database, Medicine & Health Sciences; pp. 1-141 *

Also Published As

Publication number Publication date
CN112434659A (en) 2021-03-02

Similar Documents

Publication Publication Date Title
EP3798975A1 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
JP2002259994A (en) Automatic image pattern detecting method and image processor
CN112381061B (en) Facial expression recognition method and system
CN109447117B (en) Double-layer license plate recognition method and device, computer equipment and storage medium
CN110705353A (en) Method and device for identifying face to be shielded based on attention mechanism
CN111681198A (en) Morphological attribute filtering multimode fusion imaging method, system and medium
CN111159150A (en) Data expansion method and device
CN112434659B (en) Reflection characteristic point eliminating method, device, robot and readable storage medium
CN112991374A (en) Canny algorithm-based edge enhancement method, device, equipment and storage medium
CN112435278B (en) Visual SLAM method and device based on dynamic target detection
CN112926695A (en) Image recognition method and system based on template matching
Qi et al. Motion deblurring for optical character recognition
KR20220047749A (en) How to create a slab/finger foreground mask
CN109033797B (en) Permission setting method and device
JP2002269545A (en) Face image processing method and face image processing device
CN111353954A (en) Video image processing method and device and electronic equipment
CN114529715B (en) Image identification method and system based on edge extraction
CN116030280A (en) Template matching method, device, storage medium and equipment
WO2021056531A1 (en) Face gender recognition method, face gender classifier training method and device
CN112070954A (en) Living body identification method, living body identification device, living body identification equipment and storage medium
CN108629788B (en) Image edge detection method, device and equipment and readable storage medium
CN112634298A (en) Image processing method and device, storage medium and terminal
KR20110074638A (en) Robust character segmentation system and method using machine intelligence in a degraded vehicle license plate under illumination effects and dirt
JP2009289189A (en) Apparatus and program for creating learning model, and apparatus and program for detecting object
CN113256482B (en) Photographing background blurring method, mobile terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant