CN107945233B - Visual floor sweeping robot and refilling method thereof - Google Patents

Visual floor sweeping robot and refilling method thereof

Info

Publication number
CN107945233B
CN107945233B (application CN201711260465.0A)
Authority
CN
China
Prior art keywords
recharging
sweeping robot
image
recharging seat
seat
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711260465.0A
Other languages
Chinese (zh)
Other versions
CN107945233A (en)
Inventor
张立新
周毕兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Infinite Power Development Co., Ltd.
Original Assignee
Shenzhen Water World Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Water World Co Ltd filed Critical Shenzhen Water World Co Ltd
Priority to CN201711260465.0A
Publication of CN107945233A
Application granted
Publication of CN107945233B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1408 Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/1417 2D bar codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Toxicology (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a visual sweeping robot and a recharging method thereof. The recharging method comprises the following steps: collecting environment images during cleaning; when recharging, comparing the stored environment images with a pre-stored recharging seat image; selecting the environment image with the highest similarity and returning to the position where that image was taken; and aligning with the recharging seat and docking for charging. The invention enables the sweeping robot to quickly find the position of the recharging seat when it needs to charge.

Description

Visual floor sweeping robot and refilling method thereof
Technical Field
The invention relates to the field of sweeping robots, and in particular to a visual sweeping robot and a recharging method thereof.
Background
With the continuous development of science and technology, sweeping robots have gradually entered daily life and play an active role. Power-supply technology is a key technology of the sweeping robot and the guarantee that the robot can work autonomously for long periods. Because the onboard battery has limited capacity, the machine often has to be charged manually. At present, the main approach to the power-supply problem is autonomous return charging, and the most common technique guides the sweeper back to the charging seat for docking based on an infrared signal. Although such methods can achieve autonomous recharging, the infrared emitter has a small emission angle, the coded signal only carries a short distance, and even slight occlusion blocks the infrared signal completely. If the cleaning environment is a large space, the time the sweeper spends wandering while trying to detect the infrared guide signal becomes very long, so the sweeper may fail to return to its base and may even exhaust its battery and be stranded along the way.
Disclosure of Invention
The invention mainly aims to provide a recharging method for a visual sweeping robot, so that the sweeping robot can quickly and accurately find the recharging seat for charging.
The invention provides a recharging method of a visual sweeping robot, comprising the following steps:
S1, shooting and storing environment images of the surroundings of the sweeper during cleaning;
S2, comparing the stored environment images with a pre-stored recharging seat image when recharging;
S3, when the similarity between an environment image and the recharging seat image is highest, moving to the recharging seat charging position according to the robot's current position information and the environment scene image information with the highest similarity;
and S4, repeating steps S2 and S3 until a preset condition is reached.
Further, the preset condition is that the robot can recognize the identification mark arranged on the recharging seat; the step of moving to the recharging seat charging position comprises the following steps:
S31, recognizing the identification mark on the recharging seat, and moving to the recharging seat charging position by taking the identification mark as a reference.
Further, the identification mark is a two-dimensional code.
Further, recognizing the identification mark on the recharging seat and moving to the recharging seat charging position by taking the identification mark as a reference comprises:
S311, merging pixel points with similar gradient information on the identification mark into line segments;
S312, connecting the merged line segments to form a polygon;
S313, calculating the relative position relationship between the sweeping robot and the identification mark according to the intrinsic parameters of the vision sensor of the visual sweeping robot;
and S314, moving to the recharging seat charging position according to the relative position relationship.
Further, the step of recognizing the identification mark arranged on the recharging seat and moving to the recharging seat charging position by taking the identification mark as a reference comprises:
S315, recognizing two identification marks arranged symmetrically on the recharging seat with the charging electrode as the axis of symmetry.
Further, shooting and storing an environment image of the surroundings of the sweeper during cleaning comprises:
S11, extracting and storing the feature points of the environment image.
Further, the step of comparing the stored environment images with the pre-stored recharging seat image comprises:
S21, matching the feature points of the stored environment images with the feature points of the pre-stored recharging seat image using a feature matching method;
and S22, counting the inliers and generating a matching value.
Further, acquiring the current position information of the sweeping robot comprises:
S32, acquiring and storing the pose of the sweeping robot at each moment during cleaning using a vision sensor or a laser sensor.
Further, the step in which the sweeping robot moves to the recharging seat charging position according to its current position information and the environment scene image information with the highest similarity comprises the following steps:
S33, marking the current position of the sweeping robot, and expanding unmarked positions near the current position to generate child nodes;
S34, calculating an evaluation function value for each child node, and marking the child node with the smallest evaluation function value;
S35, if the child node with the smallest evaluation function value is the target node, stopping the expansion and connecting all the marked minimum child nodes to generate a path;
and S36, moving to the recharging seat charging position along the path.
The invention also provides a visual sweeping robot, comprising:
the shooting module, used for shooting and storing environment images of the surroundings of the sweeper during cleaning;
the comparison module, used for comparing the stored environment images with a pre-stored recharging seat image when recharging;
the moving module, used for moving the robot to the recharging seat charging position according to the robot's current position information and the environment scene image information with the highest similarity, when the similarity between an environment image and the recharging seat image is highest;
and the condition module, used for calling the comparison module and the moving module until the preset condition is reached.
Further, the preset condition is that the robot can recognize the identification mark arranged on the recharging seat; the moving module comprises:
the charging unit, used for recognizing the identification mark on the recharging seat and moving to the recharging seat charging position by taking the identification mark as a reference.
Further, the identification mark is a two-dimensional code.
Further, the charging unit comprises:
the line segment subunit, used for merging pixel points with similar gradient information on the identification mark into line segments;
the polygon subunit, used for connecting the merged line segments to form a polygon;
the calculating subunit, used for calculating the relative position relationship between the sweeping robot and the identification mark according to the intrinsic parameters of the vision sensor of the visual sweeping robot;
and the recharging subunit, used for moving to the recharging seat charging position according to the relative position relationship.
Further, the charging unit comprises:
the charging subunit, used for recognizing two identification marks arranged symmetrically on the recharging seat with the charging electrode as the axis of symmetry.
Further, the shooting module comprises:
the feature point unit, used for extracting and storing the feature points of the image.
Further, the comparison module comprises:
the matching unit, used for matching the feature points of the stored environment images with the feature points of the pre-stored recharging seat image;
and the matching value unit, used for counting the inliers and generating a matching value.
Further, the moving module further comprises:
the storage unit, used for acquiring and storing the pose of the sweeping robot at each moment during cleaning using a vision sensor or a laser sensor.
Further, the moving module further comprises:
the expansion unit, used for marking the current position of the sweeping robot and expanding unmarked positions near the current position to generate child nodes;
the evaluation value unit, used for calculating an evaluation function value for each child node and marking the child node with the smallest evaluation function value;
the path unit, used for stopping the expansion if the child node with the smallest evaluation function value is the target node, and connecting all the marked minimum child nodes to generate a path;
and the moving unit, used for moving to the recharging seat charging position along the path.
Compared with the prior art, the invention has the following beneficial effects: by collecting environment images while it cleans, the sweeping robot can find the recharging seat, which increases the speed of finding the seat. And by using two-dimensional codes arranged on the recharging seat symmetrically about the charging electrode, the sweeping robot can dock with the recharging seat without relying on any electrical signal transmitted by the seat.
Drawings
Fig. 1 is a schematic step diagram of a recharging method of a vision sweeping robot according to an embodiment of the present invention;
fig. 2 is a schematic step diagram of a recharging method of the vision sweeping robot according to an embodiment of the present invention;
fig. 3 is a schematic step diagram of a recharging method of the vision sweeping robot according to an embodiment of the present invention;
fig. 4 is a schematic step diagram of a recharging method of the vision sweeping robot according to an embodiment of the present invention;
fig. 5 is a schematic step diagram of a recharging method of the vision sweeping robot according to an embodiment of the present invention;
fig. 6 is a schematic step diagram of a recharging method of the vision sweeping robot according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a vision sweeping robot according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a vision sweeping robot according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a vision sweeping robot according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a vision sweeping robot according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a vision sweeping robot according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a vision sweeping robot according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a vision sweeping robot according to an embodiment of the present invention;
fig. 14 is a schematic structural diagram of a visual sweeping robot according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, a recharging method of a visual sweeping robot according to an embodiment of the present invention is provided, comprising the steps of:
S1, shooting and storing environment images of the surroundings of the sweeper during cleaning;
S2, comparing the stored environment images with a pre-stored recharging seat image when recharging;
S3, when the similarity between an environment image and the recharging seat image is highest, moving to the recharging seat charging position according to the robot's current position information and the environment scene image information with the highest similarity;
and S4, repeating steps S2 and S3 until a preset condition is reached.
In this embodiment, when the sweeping robot starts cleaning, it does not necessarily start from the recharging seat; the user may put the robot down directly in a room, or it may start cleaning from a corner of the room. While the sweeping robot cleans along the trajectory specified by a preset logic algorithm, its vision sensor, such as a camera, simultaneously collects images of the surrounding environment; some sweeping robots carry cameras on all sides, in which case all of them can collect environment images at the same time. Whenever an environment image is collected, the position at which it was collected is also recorded. Positions are recorded with the sweeping robot's starting point as the reference: for example, the displacement and heading travelled from the starting point to the point where the environment image was collected can be recorded, and the robot records its own motion trajectory in real time while cleaning. The preset rule for acquiring environment images may be to acquire one at a fixed time interval or at a fixed travel distance. A recharging command may be generated when the battery level of the sweeping robot falls below a preset threshold, or when a command to finish cleaning is received from the user. When the visual sweeping robot starts recharging, it compares the collected environment images with a recharging seat image stored in advance by the user or shot and stored while the robot was at the charging position, finds the environment image with the highest similarity to the recharging seat image, then moves by path planning to the position corresponding to that image, and repeats these steps until, at the position corresponding to an environment image, it can recognize the identification mark arranged on the recharging seat. At that point the sweeping robot performs precise alignment using the identification mark on the recharging seat as a reference, making it easy for the robot to align with the charging electrode and charge. An illustrative capture-loop sketch is given below.
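By way of illustration only, a minimal capture-loop sketch in Python follows. The `robot` object and its methods (`is_cleaning`, `camera.capture`, `current_pose`, `step`) are hypothetical names introduced for this sketch rather than anything defined in the patent, and the five-second period merely stands in for whichever time- or distance-based preset rule is used.

```python
import time

def cleaning_capture_loop(robot, store, period_s=5.0):
    """Collect an environment image at a fixed time interval while the
    robot cleans, recording alongside each frame the pose (displacement
    and heading from the starting point) at which it was taken."""
    last_capture = 0.0
    while robot.is_cleaning():               # hypothetical robot API
        now = time.monotonic()
        if now - last_capture >= period_s:
            frame = robot.camera.capture()   # hypothetical camera call
            store.append({"image": frame, "pose": robot.current_pose()})
            last_capture = now
        robot.step()                         # advance one control cycle
```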
It should be noted that in the present invention the stored environment images are compared with a pre-stored recharging seat image, where the pre-stored recharging seat image may be an image of the recharging seat that the user stored in the visual sweeping robot in advance, or an image that the visual sweeping robot shot and stored while charging at the recharging seat. In addition, the visual sweeping robot updates the recharging seat image stored inside it when the position of the recharging seat changes.
Referring to fig. 2, further, the preset condition is that the robot can recognize the identification mark arranged on the recharging seat; the step of moving to the recharging seat charging position comprises the following steps:
S31, recognizing the identification mark on the recharging seat, and moving to the recharging seat charging position by taking the identification mark as a reference.
In this embodiment, after the recharging seat has been recognized from the identification mark arranged on it, the robot judges whether its distance to the seat is smaller than a preset value. That distance may be measured from images shot by the camera, or by a distance sensor using the identification mark as a reference; the sweeping robot then moves to the charging position of the recharging seat.
Further, the identification mark is a two-dimensional code.
In this embodiment, the two-dimensional code is a black-and-white pattern that is highly recognizable and simple to manufacture. Arranged on the recharging seat as the identification mark, it is convenient for the sweeping robot to recognize and align with.
Referring to fig. 3, further, recognizing the identification mark on the recharging seat and moving to the recharging seat charging position by taking the identification mark as a reference comprises:
S311, merging pixel points with similar gradient information on the identification mark into line segments;
S312, connecting the merged line segments to form a polygon;
S313, calculating the relative position relationship between the sweeping robot and the identification mark according to the intrinsic parameters of the vision sensor of the visual sweeping robot;
and S314, moving to the recharging seat charging position according to the relative position relationship.
In this embodiment, the gradient direction and gradient magnitude of each pixel in the acquired image are calculated, and then adjacent pixel points with similar gradient information are merged into a whole using a similarity measure on the pixel gradients. A method similar to graph cut is adopted: a node of the graph is a pixel point, and the weight of an edge is the gradient similarity of two pixel points (regions). The detected line segments are connected by a spatial adjacency criterion to form polygons; the number of polygons is limited by constraining the polygon side lengths and the number of corner points, yielding quadrangles; spatially adjacent quadrangles are merged into new quadrangles, finally giving one large quadrangle containing many 0/1 code cells (each 0 or 1 is a small quadrangle). After the quadrangle is detected, the code of the large quadrangle is compared with the preset code patterns to compute a distance, giving a more accurate detection target. Calculating the homography matrix and the extrinsic parameters: the homography matrix represents the projective transformation that maps 2D points in the two-dimensional code coordinate system into the camera coordinate system, and can be obtained by the Direct Linear Transform (DLT) algorithm. The camera intrinsic parameters are denoted by P and comprise the camera focal lengths and the center offset. The extrinsic parameters are denoted by E. The homography matrix can be written as follows:
\[
\begin{bmatrix} h_{00} & h_{01} & h_{02} \\ h_{10} & h_{11} & h_{12} \\ h_{20} & h_{21} & h_{22} \end{bmatrix}
= s\,P\,E
= s
\begin{bmatrix} f_x & 0 & c_x & 0 \\ 0 & f_y & c_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R_{00} & R_{01} & T_x \\ R_{10} & R_{11} & T_y \\ R_{20} & R_{21} & T_z \\ 0 & 0 & 1 \end{bmatrix}
\]
where R_ij (i, j = 0, 1, 2) are the rotation parameters and T_k (k = x, y, z) are the translation parameters.
Since the columns of a rotation matrix must have unit length, the magnitude and sign of s can be determined from the direction information relating the two-dimensional code and the camera (the two-dimensional code appears in front of the camera). Because the rotation matrix must be orthogonal, its third column can be recovered by computing the cross product of the two known columns. In this way the relative position relationship of the two-dimensional code with respect to the camera is obtained, and the robot then moves to the charging position of the recharging seat according to this relative position relationship to charge.
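For illustration, here is a minimal numerical sketch of the decomposition just described, assuming a pinhole camera with focal lengths fx, fy and center offset cx, cy, and a 3×3 tag-to-image homography H (e.g. from a DLT fit). The function name and NumPy-based implementation are choices made for this sketch, not part of the patent.

```python
import numpy as np

def pose_from_tag_homography(H, fx, fy, cx, cy):
    """Recover the tag's rotation R and translation t relative to the
    camera from the homography H = s * P * E described above: strip the
    intrinsics, fix the scale s from the unit length of the rotation
    columns (sign chosen so the tag lies in front of the camera), and
    recover the third rotation column by a cross product."""
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    M = np.linalg.inv(K) @ H                    # M = s * [r0 | r1 | t]
    s = np.sqrt(np.linalg.norm(M[:, 0]) * np.linalg.norm(M[:, 1]))
    if M[2, 2] < 0:                             # tag in front of camera: t_z > 0
        s = -s
    M = M / s
    r0, r1, t = M[:, 0], M[:, 1], M[:, 2]
    r2 = np.cross(r0, r1)                       # orthogonality of the rotation
    R = np.column_stack([r0, r1, r2])
    U, _, Vt = np.linalg.svd(R)                 # re-orthonormalise against noise
    return U @ Vt, t
```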
Referring to fig. 4, further, the step of recognizing the identification mark arranged on the recharging seat and moving to the recharging seat charging position by taking the identification mark as a reference comprises:
S315, recognizing two identification marks arranged symmetrically on the recharging seat with the charging electrode as the axis of symmetry.
In this embodiment, two identification marks are arranged on the recharging seat, at the same height and symmetrically distributed on the two sides of the charging electrode. The sweeping robot moves onto the symmetry axis of the two identification marks and keeps adjusting its position so that it stays on that axis, which makes the alignment with the charging electrode more accurate. After aligning with the charging electrode, the robot keeps driving straight, so it can dock smoothly and charge, or begin charging immediately once the recharging seat is powered.
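As a small sketch of this alignment rule (an assumption-laden illustration, not the patent's method): with a forward-facing camera, the robot lies on the symmetry axis when the midpoint of the two detected mark centres falls on the image's vertical centre line, so the signed offset below can drive a simple proportional steering correction; the gain value is arbitrary.

```python
def symmetry_offset(left_center, right_center, image_width):
    """Signed horizontal offset (pixels) of the midpoint of the two
    identification-mark centres from the image centre line; zero when
    the robot is on the symmetry axis of the two marks."""
    mid_x = (left_center[0] + right_center[0]) / 2.0
    return mid_x - image_width / 2.0

def steer_command(offset_px, gain=0.002):
    """Map the pixel offset to a turn rate with a proportional rule;
    drive straight once the offset is (near) zero."""
    return -gain * offset_px
```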
Referring to fig. 5, further, shooting and storing an environment image of the surroundings of the sweeper during cleaning comprises:
S11, extracting and storing the feature points of the environment image.
In this embodiment, extracting features from the environment image reduces the memory occupied by the image and saves storage space, and since only feature points are compared during matching, it also reduces the comparison workload.
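As an example, a minimal sketch using OpenCV's ORB detector follows; the patent does not name a particular feature detector, so ORB and the storage layout shown are assumptions for this sketch.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)

def extract_and_store(gray_image, pose, store):
    """Extract feature points from one environment image and store only
    the keypoint positions and compact binary descriptors, together with
    the pose at which the image was taken, instead of the full frame."""
    keypoints, descriptors = orb.detectAndCompute(gray_image, None)
    store.append({
        "pose": pose,
        "points": [kp.pt for kp in keypoints],
        "descriptors": descriptors,   # one 32-byte ORB descriptor per point
    })
```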
Referring to fig. 6, further, the step of comparing the stored environment images with the pre-stored recharging seat image comprises:
S21, matching the feature points of the stored environment images with the feature points of the pre-stored recharging seat image using a feature matching method;
and S22, counting the inliers and generating a matching value.
In this embodiment, the feature points extracted from the environment images collected by the sweeping robot while cleaning are compared with the feature points of the recharging seat: the extracted feature points of each collected environment image are compared one by one with the preset feature points of the recharging seat image, and matched using a feature matching method. Inliers are feature points that are in one-to-one correspondence between the two images; the more similar the two images are, the more inliers there are and the higher the generated matching value. The environment image with the highest similarity, i.e. the one with the highest matching value, is then selected, and the object corresponding to the feature points with the highest matching value is confirmed to be the recharging seat.
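Continuing the ORB assumption above, the following sketch shows one common way to realise this inlier count: brute-force Hamming matching followed by a RANSAC homography fit whose inlier mask yields the matching value. The 5-pixel reprojection threshold is an assumption, not taken from the patent.

```python
import cv2
import numpy as np

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def matching_value(env_points, env_desc, dock_points, dock_desc):
    """Match environment-image descriptors against the recharging-seat
    descriptors and return the number of RANSAC inliers of a homography
    fit, used as the matching value for this environment image."""
    if env_desc is None or dock_desc is None:
        return 0
    matches = bf.match(env_desc, dock_desc)
    if len(matches) < 4:                      # a homography needs 4 points
        return 0
    src = np.float32([env_points[m.queryIdx] for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([dock_points[m.trainIdx] for m in matches]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return 0 if mask is None else int(mask.sum())
```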
Further, acquiring the current position information of the sweeping robot comprises:
S32, acquiring and storing the pose of the sweeping robot at each moment during cleaning using a vision sensor or a laser sensor.
In this embodiment, the vision sensor uses the camera's intrinsic parameters to determine the robot's movement and acquires the sweeping robot's pose in real time, so that the robot can recognize its own position in the environment. The laser sensor determines the pose of the sweeping robot by measuring the distances to surrounding objects, and the pose acquired at each moment is stored in the sweeping robot.
Further, the step in which the sweeping robot moves to the recharging seat charging position according to its current position information and the environment scene image information with the highest similarity comprises the following steps:
S33, marking the current position of the sweeping robot, and expanding unmarked positions near the current position to generate child nodes;
S34, calculating an evaluation function value for each child node, and marking the child node with the smallest evaluation function value;
S35, if the child node with the smallest evaluation function value is the target node, stopping the expansion and connecting all the marked minimum child nodes to generate a path;
and S36, moving to the recharging seat charging position along the path.
In this embodiment, when the sweeping robot returns toward a position where an environment image was collected, it performs path planning and finds a shortest path using the A* (A-Star) algorithm, the most effective direct search method for computing shortest paths in a static road network. The specific calculation steps are as follows:
a) First, judge whether any of the 8 nodes around the initial node are obstacle points; if so, remove the obstacle points first, then find the node with the minimum cost among the remaining nodes and add it to the result list.
b) Then evaluate nodes with the A* evaluation function f(n) = g(n) + h(n), where g(n) is the cost already paid from the starting point to the current node n, and h(n) is the estimated cost from the current node n to the target node. When the current node and the starting node are on the same horizontal or vertical line, the cost between the nodes is 10, i.e. g(n) = 10; when the current node is on a diagonal of the starting node, the cost between the nodes is 14, i.e. g(n) = 14. h(n) = (number of grid cells between the current node and the target node along the horizontal axis + number of grid cells along the vertical axis) × 10, and h(n) is computed against the target node for each of the 8 nodes around the current node.
c) The evaluation values of the 8 surrounding nodes computed in each round are compared, and the minimum node is selected as the new initial node to continue the search. The search ends when the target node is reached, yielding an optimal path.
After the optimal path is obtained, the robot returns to the recharging seat along this path to charge. A runnable sketch of this grid search is given below.
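Below is a minimal, self-contained sketch of the grid search described above, using the stated step costs (10 straight, 14 diagonal) and the stated heuristic h(n) = (horizontal cells + vertical cells) × 10; the grid encoding (1 marks an obstacle point) is an assumption for the sketch. Note that with diagonal moves this heuristic can overestimate the true remaining cost, so the sketch reproduces the patent's description rather than a strictly admissible A*.

```python
import heapq

def a_star(grid, start, goal):
    """Grid A* with f(n) = g(n) + h(n): step cost 10 horizontally or
    vertically, 14 diagonally, h(n) = (|dx| + |dy|) * 10. `grid` is a 2D
    list where 1 marks an obstacle point; returns the path as a list of
    (row, col) cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda n: (abs(n[0] - goal[0]) + abs(n[1] - goal[1])) * 10
    open_heap = [(h(start), start)]
    g_best, parent, closed = {start: 0}, {start: None}, set()
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                       # stop expansion at the target
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]                  # connect marked nodes into a path
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nxt = (node[0] + dr, node[1] + dc)
                if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                    continue
                if grid[nxt[0]][nxt[1]] == 1:  # skip obstacle points
                    continue
                g = g_best[node] + (14 if dr and dc else 10)
                if g < g_best.get(nxt, float("inf")):
                    g_best[nxt] = g
                    parent[nxt] = node
                    heapq.heappush(open_heap, (g + h(nxt), nxt))
    return None
```

For example, a_star([[0, 0], [1, 0]], (0, 0), (1, 1)) returns [(0, 0), (1, 1)]: a single diagonal step of cost 14 around the obstacle.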
In conclusion, the recharging method of the visual sweeping robot enables the sweeping robot to find the recharging seat from the environment images collected while cleaning, which increases the speed of finding the recharging seat. And by using two-dimensional codes arranged on the recharging seat symmetrically about the charging electrode, the sweeping robot can dock with the recharging seat without relying on any electrical signal transmitted by the seat.
Referring to fig. 7, the present invention further provides a visual sweeping robot, comprising:
the shooting module 1, used for shooting and storing environment images of the surroundings of the sweeper during cleaning;
the comparison module 2, used for comparing the stored environment images with a pre-stored recharging seat image when recharging;
the moving module 3, used for moving the robot to the recharging seat charging position according to the robot's current position information and the environment scene image information with the highest similarity, when the similarity between an environment image and the recharging seat image is highest;
and the condition module 4, used for calling the comparison module and the moving module until the preset condition is reached.
In this embodiment, when the sweeping robot starts cleaning, it does not necessarily start from the recharging seat; the user may put the robot down directly in a room, or it may start cleaning from a corner of the room. While the sweeping robot cleans along the trajectory specified by a preset logic algorithm, the shooting module 1 simultaneously collects images of the surrounding environment; some sweeping robots carry cameras on all sides, in which case all of them can collect environment images at the same time. Whenever an environment image is collected, the position at which it was collected is also recorded, with the sweeping robot's starting point as the reference: for example, the displacement and heading travelled from the starting point to the point where the image was collected can be recorded, and the robot records its own motion trajectory in real time while cleaning. The preset rule for acquiring environment images may be to acquire one at a fixed time interval or at a fixed travel distance. A recharging command may be generated when the battery level of the sweeping robot falls below a preset threshold, or when a command to finish cleaning is received from the user. When the visual sweeping robot starts recharging, the comparison module 2 compares the collected environment images with the recharging seat image stored in the sweeping robot in advance by the user, finds the environment image with the highest similarity to the recharging seat, and then the moving module 3 moves by path planning to the position corresponding to that image; the comparison module 2 and the moving module 3 are invoked repeatedly until, at the position corresponding to an environment image, the identification mark arranged on the recharging seat can be recognized. At that point the sweeping robot performs precise alignment using the identification mark on the recharging seat as a reference, making it easy for the robot to align with the charging electrode and charge.
It should be noted that in the present invention the stored environment images are compared with a pre-stored recharging seat image, where the pre-stored recharging seat image may be an image of the recharging seat that the user stored in the visual sweeping robot in advance, or an image that the visual sweeping robot shot and stored while charging at the recharging seat. In addition, the visual sweeping robot updates the recharging seat image stored inside it when the position of the recharging seat changes.
Referring to fig. 8, further, the preset condition is that the robot can recognize the identification mark arranged on the recharging seat; the moving module 3 further comprises:
the charging unit 31, used for recognizing the identification mark arranged on the recharging seat and moving to the recharging seat charging position by taking the identification mark as a reference.
In this embodiment, after the recharging seat has been recognized from the identification mark on it, the robot judges whether its distance to the seat is smaller than a preset value. That distance may be measured from images shot by the camera, or by a distance sensor measuring toward the mark; the charging unit 31 confirms the specific position of the recharging seat using the identification mark as a reference, and the sweeping robot moves to the charging position of the recharging seat.
Further, the identification mark is a two-dimensional code.
In this embodiment, the two-dimensional code is a black-and-white pattern that is highly recognizable and simple to manufacture. Arranged on the recharging seat as the identification mark, it is convenient for the sweeping robot to recognize and align with.
Referring to fig. 9, further, the charging unit 31 comprises:
the line segment subunit 311, used for merging pixel points with similar gradient information on the identification mark into line segments;
the polygon subunit 312, used for connecting the merged line segments to form a polygon;
the calculating subunit 313, used for calculating the relative position relationship between the sweeping robot and the identification mark according to the intrinsic parameters of the vision sensor of the visual sweeping robot;
and the recharging subunit 314, used for moving to the recharging seat charging position according to the relative position relationship.
In this embodiment, the line segment subunit 311 calculates the gradient direction and gradient magnitude of each pixel in the acquired image, and then merges adjacent pixel points with similar gradient information into a whole using a similarity measure on the pixel gradients. A method similar to graph cut is adopted: a node of the graph is a pixel point, and the weight of an edge is the gradient similarity of two pixel points (regions). Then the polygon subunit 312 connects the detected line segments by a spatial adjacency criterion to form polygons, limits the number of polygons by constraining the polygon side lengths and the number of corner points to obtain quadrangles, and merges spatially adjacent quadrangles into new quadrangles, finally giving one large quadrangle containing many 0/1 code cells (each 0 or 1 is a small quadrangle). After the quadrangle is detected, the calculating subunit 313 compares the code of the large quadrangle with the preset code patterns to compute a distance, giving a more accurate detection target. Calculating the homography matrix and the extrinsic parameters: the homography matrix represents the projective transformation that maps 2D points in the two-dimensional code coordinate system into the camera coordinate system, and can be obtained by the Direct Linear Transform (DLT) algorithm. The camera intrinsic parameters are denoted by P and comprise the camera focal lengths and the center offset. The extrinsic parameters are denoted by E. The homography matrix can be written as follows:
\[
\begin{bmatrix} h_{00} & h_{01} & h_{02} \\ h_{10} & h_{11} & h_{12} \\ h_{20} & h_{21} & h_{22} \end{bmatrix}
= s\,P\,E
= s
\begin{bmatrix} f_x & 0 & c_x & 0 \\ 0 & f_y & c_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R_{00} & R_{01} & T_x \\ R_{10} & R_{11} & T_y \\ R_{20} & R_{21} & T_z \\ 0 & 0 & 1 \end{bmatrix}
\]
where R_ij (i, j = 0, 1, 2) are the rotation parameters and T_k (k = x, y, z) are the translation parameters.
Since the columns of a rotation matrix must have unit length, the magnitude and sign of s can be determined from the direction information relating the two-dimensional code and the camera (the two-dimensional code appears in front of the camera). Because the rotation matrix must be orthogonal, its third column can be recovered by computing the cross product of the two known columns. In this way the relative position relationship of the two-dimensional code with respect to the camera is obtained, and the recharging subunit 314 then controls the sweeping robot to move to the charging position of the recharging seat according to this relative position relationship to charge.
Referring to fig. 10, further, the charging unit 31 comprises:
the charging subunit 315, used for recognizing two identification marks arranged symmetrically on the recharging seat with the charging electrode as the axis of symmetry.
In this embodiment, two identification marks are arranged on the recharging seat, at the same height and symmetrically distributed on the two sides of the charging electrode. The sweeping robot moves onto the symmetry axis of the two identification marks; the charging subunit 315 controls the sweeping robot to keep adjusting so that it stays on that axis, which makes the alignment with the charging electrode more accurate. After aligning with the charging electrode, the robot keeps driving straight and can dock smoothly, or it charges as soon as the recharging seat is powered: if the house loses power, the recharging seat has no electricity but the charging electrode can still be docked, and once power is restored the recharging seat is live and the sweeping robot charges immediately.
Referring to fig. 11, further, the shooting module 1 comprises:
the feature point unit 11, used for extracting and storing the feature points of the image.
In this embodiment, the feature point unit 11 extracts features from the environment image, which reduces the memory occupied by the image, saves storage space, and reduces the comparison workload since only feature points are compared.
Referring to fig. 12, further, the comparison module 2 comprises:
the matching unit 21, used for matching the feature points of the stored environment images with the feature points of the pre-stored recharging seat image;
and the matching value unit 22, used for counting the inliers and generating a matching value.
In this embodiment, the feature points extracted from the environment images collected by the sweeping robot while cleaning are compared with the feature points of the recharging seat: the matching unit 21 compares the extracted feature points of each collected environment image one by one with the preset feature points of the recharging seat image and matches them using a feature matching method. Inliers are feature points that are in one-to-one correspondence between the two images; the more similar the two images are, the more inliers there are and the higher the matching value generated by the matching value unit 22. The environment image with the highest similarity, i.e. the one with the highest matching value, is then selected, and the object corresponding to the feature points with the highest matching value is confirmed to be the recharging seat.
Referring to fig. 13, further, the moving module 3 further comprises:
the storage unit 32, used for acquiring and storing the pose of the sweeping robot at each moment during cleaning using a vision sensor or a laser sensor.
In this embodiment, the vision sensor uses the camera's intrinsic parameters to determine the robot's movement and acquires the sweeping robot's pose in real time, so that the robot can recognize its own position in the environment. The laser sensor determines the pose of the sweeping robot by measuring the distances to surrounding objects, and the storage unit 32 stores the pose acquired at each moment in the sweeping robot.
Referring to fig. 14, further, the moving module 3 further comprises:
the expansion unit 33, used for marking the current position of the sweeping robot and expanding unmarked positions near the current position to generate child nodes;
the evaluation value unit 34, used for calculating an evaluation function value for each child node and marking the child node with the smallest evaluation function value;
the path unit 35, used for stopping the expansion if the child node with the smallest evaluation function value is the target node, and connecting all the marked minimum child nodes to generate a path;
and the moving unit 36, used for moving to the recharging seat charging position along the path.
In this embodiment, when the sweeping robot returns toward a position where an environment image was collected, it performs path planning using the A* (A-Star) algorithm, the most effective direct search method for computing shortest paths in a static road network. The specific calculation steps are as follows:
a) First, judge whether any of the 8 nodes around the initial node are obstacle points; if so, remove the obstacle points first, then find the node with the minimum cost among the remaining nodes and add it to the result list.
b) Then evaluate nodes with the A* evaluation function f(n) = g(n) + h(n), where g(n) is the cost already paid from the starting point to the current node n, and h(n) is the estimated cost from the current node n to the target node. When the current node and the starting node are on the same horizontal or vertical line, the cost between the nodes is 10, i.e. g(n) = 10; when the current node is on a diagonal of the starting node, the cost between the nodes is 14, i.e. g(n) = 14. h(n) = (number of grid cells between the current node and the target node along the horizontal axis + number of grid cells along the vertical axis) × 10, and h(n) is computed against the target node for each of the 8 nodes around the current node.
c) The evaluation values of the 8 surrounding nodes computed in each round are compared, and the minimum node is selected as the new initial node to continue the search. The search ends when the target node is reached, and the path unit 35 solves for the optimal path. After the optimal path is obtained, the moving unit 36 returns to the recharging seat along this path to charge.
In conclusion, the visual sweeping robot provided by the invention can find the recharging seat from the environment images collected while cleaning, which increases the speed of finding the recharging seat. And by using two-dimensional codes arranged on the recharging seat symmetrically about the charging electrode, the sweeping robot can dock with the recharging seat without relying on any electrical signal transmitted by the seat.
The above description is only a preferred embodiment of the present invention and is not intended to limit its scope; all equivalent structural or process changes made using the contents of this specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of patent protection of the present invention.

Claims (8)

1. A recharging method of a visual sweeping robot, characterized by comprising the following steps:
S1, shooting and storing an environment image of the surroundings of the visual sweeping robot during cleaning;
S2, comparing the stored environment image with a pre-stored recharging seat image when recharging;
S3, when the similarity between the environment image and the recharging seat image is highest, the visual sweeping robot moving to the recharging seat charging position according to its current position information and the environment scene image information with the highest similarity;
S4, repeating steps S2 and S3 until a preset condition is reached;
wherein the preset condition is that the visual sweeping robot can recognize the identification mark arranged on the recharging seat, and the step of moving to the recharging seat charging position comprises the following steps:
S31, recognizing the identification mark on the recharging seat, and moving to the recharging seat charging position by taking the identification mark as a reference;
and wherein recognizing the identification mark on the recharging seat and moving to the recharging seat charging position by taking the identification mark as a reference comprises the following steps:
S311, merging pixel points with similar gradient information on the identification mark into line segments;
S312, connecting the merged line segments to form a polygon; limiting the number of polygons by constraining the polygon side lengths and the number of corner points formed by the polygons to obtain quadrangles, and merging spatially adjacent quadrangles into a new quadrangle, finally obtaining a large quadrangle;
S313, projecting the large quadrangle into the coordinate system of the vision sensor through a homography matrix according to the intrinsic and extrinsic parameters of the vision sensor of the visual sweeping robot, and calculating the relative position relationship between the visual sweeping robot and the identification mark according to the direction information relating the identification mark and the vision sensor;
and S314, moving to the recharging seat charging position according to the relative position relationship.
2. The recharging method of the visual sweeping robot of claim 1, wherein the step of recognizing the identification mark arranged on the recharging seat and moving to the recharging seat charging position by taking the identification mark as a reference comprises:
S315, recognizing two identification marks arranged symmetrically on the recharging seat with the charging electrode as the axis of symmetry.
3. The method of claim 1, wherein the step of shooting and storing an environment image of the surroundings of the sweeping robot during cleaning comprises:
and S11, extracting and storing the feature points of the environment image.
4. The recharging method of the visual sweeping robot of claim 3, wherein the step of comparing the stored environment image with the pre-stored recharging seat image comprises:
S21, matching the feature points of the stored environment image with the feature points of the pre-stored recharging seat image using a feature matching method;
and S22, counting the inliers and generating a matching value.
5. A visual sweeping robot, characterized by comprising:
the shooting module, used for shooting and storing an environment image of the surroundings of the visual sweeping robot during cleaning;
the comparison module, used for comparing the stored environment image with a pre-stored recharging seat image when recharging;
the moving module, used for moving the visual sweeping robot to the recharging seat charging position according to its current position information and the environment scene image information with the highest similarity, when the similarity between the environment image and the recharging seat image is highest;
and the condition module, used for calling the comparison module and the moving module until a preset condition is reached;
wherein the preset condition is that the robot can recognize the identification mark arranged on the recharging seat, and the moving module further comprises:
the charging unit, used for recognizing the identification mark on the recharging seat and moving to the recharging seat charging position by taking the identification mark as a reference;
wherein the charging unit comprises:
the line segment subunit, used for merging pixel points with similar gradient information on the identification mark into line segments;
the polygon subunit, used for connecting the merged line segments to form a polygon; limiting the number of polygons by constraining the polygon side lengths and the number of corner points formed by the polygons to obtain quadrangles, and merging spatially adjacent quadrangles into a new quadrangle, finally obtaining a large quadrangle;
the calculating subunit, used for projecting the large quadrangle into the camera coordinate system through a homography matrix according to the intrinsic and extrinsic parameters of the vision sensor of the visual sweeping robot, and calculating the relative position relationship between the visual sweeping robot and the identification mark according to the direction information relating the identification mark and the camera;
and the recharging subunit, used for moving to the recharging seat charging position according to the relative position relationship.
6. The visual sweeping robot of claim 5, wherein the charging unit comprises:
the charging subunit, used for recognizing two identification marks arranged symmetrically on the recharging seat with the charging electrode as the axis of symmetry.
7. The visual sweeping robot of claim 5, wherein the shooting module comprises:
the feature point unit, used for extracting and storing the feature points of the image.
8. The visual sweeping robot of claim 7, wherein the comparison module comprises:
the matching unit, used for matching the feature points of the stored environment image with the feature points of the pre-stored recharging seat image;
and the matching value unit, used for counting the inliers and generating a matching value.
CN201711260465.0A 2017-12-04 2017-12-04 Visual floor sweeping robot and refilling method thereof Active CN107945233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711260465.0A CN107945233B (en) 2017-12-04 2017-12-04 Visual floor sweeping robot and refilling method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711260465.0A CN107945233B (en) 2017-12-04 2017-12-04 Visual floor sweeping robot and refilling method thereof

Publications (2)

Publication Number Publication Date
CN107945233A CN107945233A (en) 2018-04-20
CN107945233B true CN107945233B (en) 2020-11-24

Family

ID=61947555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711260465.0A Active CN107945233B (en) 2017-12-04 2017-12-04 Visual floor sweeping robot and refilling method thereof

Country Status (1)

Country Link
CN (1) CN107945233B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019109230A1 (en) * 2017-12-04 2019-06-13 深圳市沃特沃德股份有限公司 Visual sweeping robot and recharging method therefor
CN108599303A (en) * 2018-04-28 2018-09-28 广州视源电子科技股份有限公司 Mobile charging method, apparatus, system and computer readable storage medium
CN108614562B (en) * 2018-06-05 2021-05-07 北京智行者科技有限公司 Cleaning path optimization method
CN108733060A (en) * 2018-06-05 2018-11-02 北京智行者科技有限公司 A kind of processing method of operation cartographic information
CN110632915B (en) * 2018-06-21 2023-07-04 科沃斯家用机器人有限公司 Robot recharging path planning method, robot and charging system
CN109683605B (en) 2018-09-25 2020-11-24 上海肇观电子科技有限公司 Robot and automatic recharging method and system thereof, electronic equipment and storage medium
CN109512340B (en) * 2018-12-06 2021-05-25 深圳飞科机器人有限公司 Control method of cleaning robot and related equipment
CN109623816A (en) * 2018-12-19 2019-04-16 中新智擎科技有限公司 A kind of robot recharging method, device, storage medium and robot
CN109669457B (en) * 2018-12-26 2021-08-24 珠海市一微半导体有限公司 Robot recharging method and chip based on visual identification
CN110032196B (en) * 2019-05-06 2022-03-29 北京云迹科技股份有限公司 Robot recharging method and device
CN110378285A (en) * 2019-07-18 2019-10-25 北京小狗智能机器人技术有限公司 A kind of recognition methods of cradle, device, robot and storage medium
CN110301866B (en) * 2019-08-01 2023-07-21 商洛市虎之翼科技有限公司 Device for collecting refuse and method for operating the same
CN110597265A (en) * 2019-09-25 2019-12-20 深圳巴诺机器人有限公司 Recharging method and device for sweeping robot
CN113138596A (en) * 2021-03-31 2021-07-20 深圳市优必选科技股份有限公司 Robot automatic charging method, system, terminal device and storage medium
CN117137374B (en) * 2023-10-27 2024-01-26 张家港极客嘉智能科技研发有限公司 Sweeping robot recharging method based on computer vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803261A (en) * 2015-11-20 2017-06-06 沈阳新松机器人自动化股份有限公司 robot relative pose estimation method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1853552A (en) * 2005-04-20 2006-11-01 Lg电子株式会社 Cleaning robot having auto-return function to charching-stand and method using the same
KR20080017521A (en) * 2006-08-21 2008-02-27 문철홍 Method for multiple movement body tracing movement using of difference image
CN102545275A (en) * 2010-12-07 2012-07-04 上海新世纪机器人有限公司 Robot automatic charging device and robot automatic charging method
CN102866706A (en) * 2012-09-13 2013-01-09 深圳市银星智能科技股份有限公司 Cleaning robot adopting smart phone navigation and navigation cleaning method thereof
CN105242670A (en) * 2015-10-26 2016-01-13 深圳拓邦股份有限公司 Robot having function of automatic return charging, system and corresponding method
CN106826821A (en) * 2017-01-16 2017-06-13 深圳前海勇艺达机器人有限公司 The method and system that robot auto-returned based on image vision guiding charges
CN106980320A (en) * 2017-05-18 2017-07-25 上海思岚科技有限公司 Robot charging method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Jianyuan et al., "Automatic charging of a mobile robot based on image recognition," Electrical Measurement & Instrumentation, vol. 54, no. 10, 25 May 2017, pp. 103-107, sections 1-2. *

Also Published As

Publication number Publication date
CN107945233A (en) 2018-04-20

Similar Documents

Publication Publication Date Title
CN107945233B (en) Visual floor sweeping robot and refilling method thereof
CN108406731B (en) Positioning device, method and robot based on depth vision
KR101725060B1 (en) Apparatus for recognizing location mobile robot using key point based on gradient and method thereof
Pathak et al. Online three‐dimensional SLAM by registration of large planar surface segments and closed‐form pose‐graph relaxation
KR101776622B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
Stückler et al. Integrating depth and color cues for dense multi-resolution scene mapping using rgb-d cameras
Sabe et al. Obstacle avoidance and path planning for humanoid robots using stereo vision
CN103052968B (en) Article detection device and object detecting method
Luo et al. Enriched indoor map construction based on multisensor fusion approach for intelligent service robot
CN109669457B (en) Robot recharging method and chip based on visual identification
CN108481327B (en) Positioning device, positioning method and robot for enhancing vision
CN112346453A (en) Automatic robot recharging method and device, robot and storage medium
JP2004326264A (en) Obstacle detecting device and autonomous mobile robot using the same and obstacle detecting method and obstacle detecting program
CN110597265A (en) Recharging method and device for sweeping robot
US20210356293A1 (en) Robot generating map based on multi sensors and artificial intelligence and moving based on map
US11614747B2 (en) Robot generating map and configuring correlation of nodes based on multi sensors and artificial intelligence, and moving based on map, and method of generating map
Veth Navigation using images, a survey of techniques
Hertzberg et al. Experiences in building a visual SLAM system from open source components
Tamjidi et al. 6-DOF pose estimation of a portable navigation aid for the visually impaired
CN110827353A (en) Robot positioning method based on monocular camera assistance
Zhu et al. Real-time global localization with a pre-built visual landmark database
CN109313822B (en) Virtual wall construction method and device based on machine vision, map construction method and movable electronic equipment
JP2009217456A (en) Landmark device and control system for mobile robot
CN110838144A (en) Charging equipment identification method, mobile robot and charging equipment identification system
CN104952105A (en) Method and apparatus for estimating three-dimensional human body posture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190906

Address after: Room 402, 4th floor, Kanghe Sheng Building, New Energy Innovation Industrial Park, No. 1 Chuangsheng Road, Nanshan District, Shenzhen City, Guangdong Province, 518000

Applicant after: Shenzhen Infinite Power Development Co., Ltd.

Address before: 518000 B, block 1079, garden city digital garden, Nanhai Road, Shekou, Shenzhen, Guangdong, 503, Nanshan District 602, China

Applicant before: SHENZHEN WOTE WODE CO., LTD.

GR01 Patent grant