CN111857143A - Robot path planning method, system, terminal and medium based on machine vision - Google Patents


Info

Publication number
CN111857143A
CN111857143A (application CN202010716297.7A)
Authority
CN
China
Prior art keywords
image
robot
road sign
corner
connecting line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010716297.7A
Other languages
Chinese (zh)
Inventor
刘圭圭
李凡平
石柱国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Issa Data Technology Co ltd
Beijing Yisa Technology Co ltd
Qingdao Yisa Data Technology Co Ltd
Original Assignee
Anhui Issa Data Technology Co ltd
Beijing Yisa Technology Co ltd
Qingdao Yisa Data Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Issa Data Technology Co ltd, Beijing Yisa Technology Co ltd, Qingdao Yisa Data Technology Co Ltd filed Critical Anhui Issa Data Technology Co ltd
Priority application: CN202010716297.7A
Publication: CN111857143A
Legal status: Pending

Classifications

    • G: PHYSICS
        • G01: MEASURING; TESTING
            • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
                • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
                    • G01C 21/20: Instruments for performing navigational calculations
        • G05: CONTROLLING; REGULATING
            • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
                • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
                    • G05D 1/02: Control of position or course in two dimensions
                        • G05D 1/021: specially adapted to land vehicles
                            • G05D 1/0212: with means for defining a desired trajectory
                                • G05D 1/0214: in accordance with safety or protection criteria, e.g. avoiding hazardous areas
                                • G05D 1/0221: involving a learning process
                            • G05D 1/0231: using optical position detecting means
                                • G05D 1/0246: using a video camera in combination with image processing means
                                    • G05D 1/0253: extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
                            • G05D 1/0276: using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a robot path planning method based on machine vision, which comprises the following steps: acquiring an image of a manually designed road sign; processing the road sign image, judging whether the processed image contains a road sign, and, if so, calculating the position and orientation of the robot relative to the road sign; preprocessing the global map with the free space method to obtain a basic connectivity graph; and calculating the robot's shortest path with the A* algorithm. The method determines the robot's position from manually made road signs, constructs a connectivity graph on the global map with the free space method, and computes the optimal path with the A* algorithm. It requires little computation, is easy to debug, achieves accurate positioning of the robot in a small environment, and plans the optimal path automatically.

Description

Robot path planning method, system, terminal and medium based on machine vision
Technical Field
The invention relates to the technical field of computer data processing, and in particular to a robot path planning method, system, terminal and medium based on machine vision.
Background
Path planning is a basic task of mobile robot navigation: one or more optimal or near-optimal collision-free paths from an initial state to a target state must be found within a spatial structure model built by some environment modeling method. The path planning problem therefore splits into two sub-problems: spatial structure modeling (environment modeling) and the path search strategy. Current spatial structure modeling methods include the visibility graph method (V-Graph), the Voronoi diagram method, the free space approach, the regular grid method (Grids), and so on. How to accurately position a robot in a small environment and automatically plan an optimal path is a technical problem that remains to be solved in the prior art.
Disclosure of Invention
To address the defects of the prior art, the embodiments of the present invention provide a robot path planning method, system, terminal and medium based on machine vision. Manually made road signs are inexpensive, and path planning with the free space method and the A* algorithm requires little computation and is easy to debug, achieving accurate positioning in a small environment and automatic planning of an optimal path.
In a first aspect, a robot path planning method based on machine vision provided by an embodiment of the present invention includes:
acquiring a manually designed road sign image;
performing image processing on the road sign image, judging whether the processed image comprises a road sign, and if the road sign exists, calculating the position and the direction of the robot relative to the road sign image;
preprocessing the global graph by adopting a free space method to obtain a basic connected graph;
and calculating the shortest path of the robot using the A* algorithm.
In a second aspect, an embodiment of the present invention provides a robot path planning system based on machine vision, including: an acquisition module, a robot position calculation module, a connectivity graph calculation module and a shortest path calculation module,
the acquisition module is used for acquiring a manually designed road sign image;
the robot position calculating module is used for processing the road sign image, judging whether the processed image comprises a road sign or not, and calculating the position and the direction of the robot relative to the road sign image if the road sign exists;
the connected graph calculation module is used for preprocessing the global graph by adopting a free space method to obtain a basic connected graph;
the shortest path calculation module is used for calculating the shortest path of the robot using the A* algorithm.
In a third aspect, an intelligent terminal provided in an embodiment of the present invention includes a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, the memory is used to store a computer program, the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method steps described in the foregoing embodiment.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, in which a computer program is stored, the computer program comprising program instructions, which, when executed by a processor, cause the processor to perform the method steps described in the above embodiments.
The invention has the beneficial effects that:
the robot path planning method, the system, the terminal and the medium based on the machine vision provided by the embodiment of the invention are matched with a manual road marking method to determine the position of the robot, a connected graph is constructed on a global graph by adopting a free space method, an A-algorithm is adopted to calculate the optimal path, the calculated amount is small, the debugging is convenient, the accurate positioning of the robot in a small environment is realized, and the optimal path is automatically planned.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
Fig. 1 shows a flowchart of a robot path planning method based on machine vision according to a first embodiment of the present invention;
FIG. 2 shows a manually designed road sign image in the first embodiment;
FIG. 3 shows another manually designed road sign image in the first embodiment;
FIG. 4 is a diagram illustrating the determination of an optimal connection line in the first embodiment;
FIG. 5 is a diagram illustrating a first embodiment of determining a non-optimal connection line;
fig. 6 is a schematic structural diagram illustrating a robot path planning system based on machine vision according to another embodiment of the present invention;
fig. 7 shows a schematic structural diagram of an intelligent terminal according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [described condition or event]" or "in response to detecting [described condition or event]".
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
As shown in fig. 1, there is shown a flowchart of a robot path planning method based on machine vision according to a first embodiment of the present invention, the method includes the following steps:
and S1, acquiring the manually designed road sign image.
Specifically, the road signs are designed and made manually. Because the robot identifies a road sign by the pixel size of the sign image and the symmetry of its connected regions, every placed sign must be level with the robot and must be an axially symmetric figure, which guarantees recognition accuracy. Since the robot must also determine each sign's orientation and its point label, the signs are designed as square figures that are axially symmetric but not centrally symmetric, and the sizes of the symmetric connected regions are varied to distinguish the point labels. In this embodiment, two road signs modeled on the Creeper from the game Minecraft are designed, as shown in fig. 2 and fig. 3; the two signs are distinguished by different eye shapes and are coded as sign No. 1 and sign No. 2. Because each sign is an axially symmetric figure, the centroids of the two eyes and of the mouth can be computed with the regionprops function of the Matlab image processing toolbox when the direction is calculated: the midpoint of the two eye centroids is computed and connected to the mouth centroid, so the vector from the mouth centroid to that midpoint serves as the sign's direction vector for calculating the direction angle.
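The mouth-to-eyes direction vector described above can be sketched as follows. This is an illustrative stand-in, not the patent's Matlab code: the three centroids are assumed to be already extracted (e.g. by a regionprops-style step), and a y-up coordinate frame is assumed.

```python
import math

def landmark_direction(eye1, eye2, mouth):
    """Direction angle of the landmark: the vector from the mouth centroid
    to the midpoint of the two eye centroids, returned in radians measured
    counterclockwise from the positive x-axis (y-up frame assumed)."""
    mid = ((eye1[0] + eye2[0]) / 2.0, (eye1[1] + eye2[1]) / 2.0)
    dx, dy = mid[0] - mouth[0], mid[1] - mouth[1]
    return math.atan2(dy, dx)

# Example: eyes level with each other, mouth directly below their midpoint,
# so the landmark faces straight "up" (pi/2).
angle = landmark_direction((2.0, 4.0), (6.0, 4.0), (4.0, 1.0))
```

In a real image the y axis points downward, so the sign of `dy` would be flipped before calling `atan2`.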
And S2, processing the road sign image, judging whether the processed image includes a road sign, and if the road sign exists, calculating the position and the direction of the robot relative to the road sign image.
Specifically, the landmark region is detected as follows: flip the road sign image and apply median filtering, perform Sobel edge detection, and median-filter the result again to remove noise. Use the regionprops function to obtain the number of connected regions of the image and the information of each region, split the screened connected regions into separate images stored in separate matrices, and apply regionprops to each to count its sub-connected regions; an image with exactly 4 sub-connected regions (the number of sub-connected regions of the road sign designed in this embodiment) is a road sign image.
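The final screening step, counting sub-connected regions, can be sketched with a plain flood fill. This is a hypothetical stdlib stand-in for the regionprops call in the text; the input is assumed to be an already-binarized segment, represented as a list of rows of 0/1 values.

```python
from collections import deque

def count_regions(img):
    """Count 4-connected regions of nonzero pixels in a binary image
    (a minimal stand-in for Matlab's regionprops region count)."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                n += 1                      # found a new region; flood-fill it
                q = deque([(x, y)])
                seen[y][x] = True
                while q:
                    cx, cy = q.popleft()
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = cx + dx, cy + dy
                        if 0 <= nx < w and 0 <= ny < h and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((nx, ny))
    return n

def is_landmark(segment):
    # The landmark designed in this embodiment has exactly 4 sub-regions.
    return count_regions(segment) == 4
```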
The position and orientation of the robot relative to the road sign image are calculated as follows: segment the road sign image along the minimum bounding box of the connected region of the processed image; determine the symmetry axis, orientation and point label of the sign from the segmented image; taking the sign's direction as the Y axis, judge from the whole picture the relative position and orientation of the image center with respect to the sign center; compute the scale factor between pixel length in the captured image and real length in the robot's plane; convert by this scale to obtain the robot's position and orientation relative to the sign in its plane; and finally compute the robot's position and orientation in the global map.
In particular, connected regions are screened with a threshold close to the size of the road sign's connected regions, and the camera is adjusted to point vertically upward so that the ceiling is parallel to the lens. Given that the captured picture containing the road sign is flipped upside down, the robot's heading corresponds to the vertically upward direction of the image; as long as the angle between the sign's forward direction and the x axis of the flipped image is known, the robot's heading angle θ0 relative to the sign can be obtained by a coordinate-axis transformation. Let α be the counterclockwise angle from the positive x axis to the sign's direction, and β the counterclockwise angle from the positive x axis to the sign's center; then θ = π − (α − β), and defining δ = α − β gives θ = π − δ. From this the absolute position and heading of the robot relative to the sign are determined: the robot's position is given by the distance ρ from the image center to the sign center together with the angle θ, and its heading by the counterclockwise angle θ0 between the vertically upward direction of the flipped image and the sign's direction.
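The angle and scale arithmetic above can be collected into one small function. This is a sketch under the stated conventions (α, β measured counterclockwise from the image +x axis; `scale` is the metres-per-pixel factor obtained from the known sign size); the function name and signature are illustrative, not from the patent.

```python
import math

def robot_pose_from_landmark(alpha, beta, rho, scale):
    """Position angle and real-world distance of the robot relative to the
    road sign, per theta = pi - (alpha - beta):
      alpha: CCW angle from the image +x axis to the sign's facing direction
      beta:  CCW angle from the image +x axis to the sign's centre
      rho:   pixel distance from the image centre to the sign centre
      scale: metres per pixel (scale conversion from the known sign size)
    Returns (theta, distance)."""
    delta = alpha - beta          # delta = alpha - beta as defined in the text
    theta = math.pi - delta       # theta = pi - delta
    return theta, rho * scale

# Example: alpha = pi, beta = pi/2 gives theta = pi/2; 100 px at 1 cm/px is 1 m.
theta, dist = robot_pose_from_landmark(math.pi, math.pi / 2, 100.0, 0.01)
```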
And S3, preprocessing the global graph by adopting a free space method to obtain a basic connected graph.
Specifically, the corners of all obstacles are obtained from the global image by the FAST corner detection method, and the connectivity graph is computed from the corner information and the obstacle marks. The procedure comprises the following steps:
s31, corner detection;
and S32, modeling by a free space method.
The corner point detection method comprises the following steps:
s311, one point is selected to judge whether the point is a characteristic point, and the characteristic value of the point is set as Ip.
S312, a proper threshold t is set.
S313, a circle having a radius of 3 pixels with this point as the center, and 16 pixels on the boundary.
S314, if their pixel values are either all larger than Ip + t or Ip-t smaller on this circle, then it is a corner point.
S315, in the actual detection process, the problem that a plurality of corner points are detected on one corner often occurs, in order to solve the problem, a non-maximum suppression method can be used for removing redundant corner points on one corner, and the corner point with the most corner point characteristics is reserved.
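Steps S311 to S314 can be sketched directly. Note the hedge: the standard FAST detector only requires a contiguous arc of about 12 of the 16 circle pixels to pass the test; the sketch below implements the stricter all-16 criterion exactly as the text states it. The image is assumed to be a list of rows of grayscale values.

```python
# Offsets of the 16 pixels on a Bresenham circle of radius 3 around p (S313).
CIRCLE16 = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
            (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_corner(img, x, y, t):
    """Simplified FAST test per S311-S314: p = (x, y) is a corner if the 16
    circle pixels are all brighter than Ip + t or all darker than Ip - t.
    (The usual FAST criterion needs only a contiguous arc, not all 16.)"""
    ip = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE16]
    return all(v > ip + t for v in ring) or all(v < ip - t for v in ring)
```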
The free space modeling method comprises the following steps:
s321, importing the global environment map.
S322, carrying out corner detection on the environment global image to determine corner coordinates, and storing the corner coordinates into a corner array.
And S323, traversing the corner points, and connecting a connecting line from the current corner point to other corner points and a perpendicular line to the space boundary.
And S324, deleting the connecting line passing through the obstacle.
S325, detect whether the two exterior angles of each connecting line of the current corner point are both smaller than 180°.
If so (as shown in FIG. 4, θk1 < 180° and θk2 < 180°), the line is an optimal connecting line: add it to the free connecting lines, discard the corner's other connecting lines, and move on to the next corner point in the corner array.
If not (as shown in FIG. 5), add the line as a pending free connecting line, take the corner's next connecting line and repeat step S325.
If none of the corner's connecting lines is optimal after traversing them all, add the shortest pending free connecting line to the free connecting lines, discard the other pending lines, and move on to the next corner point in the corner array. After all corner points have been traversed, connect the midpoints of the free connecting lines pairwise.
And S326, deleting the connecting line passing through the obstacle.
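The "delete connecting lines that pass through an obstacle" pruning of steps S324 and S326 can be sketched as a segment-versus-rectangle test. This is an illustrative geometry helper, not from the patent: obstacles are assumed to be axis-aligned rectangles `(xmin, ymin, xmax, ymax)`, and a line is deleted if an endpoint lies strictly inside an obstacle or the segment properly crosses one of its edges.

```python
def _cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _segments_intersect(p1, p2, p3, p4):
    # Proper (non-collinear) crossing test via orientation signs.
    d1, d2 = _cross(p3, p4, p1), _cross(p3, p4, p2)
    d3, d4 = _cross(p1, p2, p3), _cross(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def crosses_obstacle(seg, rect):
    """True if the connecting line seg = (p, q) passes through the
    axis-aligned rectangular obstacle rect = (xmin, ymin, xmax, ymax)."""
    (px, py), (qx, qy) = seg
    xmin, ymin, xmax, ymax = rect
    inside = lambda x, y: xmin < x < xmax and ymin < y < ymax
    if inside(px, py) or inside(qx, qy):
        return True
    edges = [((xmin, ymin), (xmax, ymin)), ((xmax, ymin), (xmax, ymax)),
             ((xmax, ymax), (xmin, ymax)), ((xmin, ymax), (xmin, ymin))]
    return any(_segments_intersect(seg[0], seg[1], a, b) for a, b in edges)

# Pruning as in S324/S326 (free_lines and obstacles assumed to exist):
# free_lines = [l for l in free_lines
#               if not any(crosses_obstacle(l, r) for r in obstacles)]
```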
And S4, calculating the shortest path of the robot by adopting an A-star algorithm.
Specifically, the robot's pixel position in the global map is taken as the starting point, and the connectivity-graph nodes reachable from it without crossing an obstacle are determined. For each neighboring node, the actual movement cost G (the length of the connecting line) and the estimated cost H from that node to the end point are calculated; H can be estimated in several ways (for example Euclidean or Manhattan distance), and the node with the smallest total cost F = G + H is expanded next until the end point is reached.
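The search over the connectivity graph can be sketched as a standard A* with a Euclidean heuristic. The graph representation (`nodes` mapping names to pixel positions, `edges` listing obstacle-free connecting lines) is an assumption for illustration; the patent does not fix a data structure.

```python
import heapq
import math

def a_star(nodes, edges, start, goal):
    """A* over the connectivity graph. nodes: name -> (x, y) position;
    edges: name -> list of neighbour names (connecting lines that do not
    cross an obstacle). G is the path length so far, H the Euclidean
    estimate to the goal; the open node with minimal F = G + H expands next."""
    def h(n):
        (x1, y1), (x2, y2) = nodes[n], nodes[goal]
        return math.hypot(x2 - x1, y2 - y1)

    open_heap = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_heap:
        f, g, n, path = heapq.heappop(open_heap)
        if n == goal:
            return path, g
        for m in edges[n]:
            ng = g + math.hypot(nodes[m][0] - nodes[n][0],
                                nodes[m][1] - nodes[n][1])
            if ng < best_g.get(m, float("inf")):
                best_g[m] = ng
                heapq.heappush(open_heap, (ng + h(m), ng, m, [*path, m]))
    return None, float("inf")
```

With an admissible heuristic such as the Euclidean distance, the first time the goal is popped the returned path is the shortest one.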
The robot path planning method based on machine vision provided by the embodiment of the present invention determines the robot's position with the help of manually made road signs, constructs a connectivity graph on the global map with the free space method, and computes the optimal path with the A* algorithm. It requires little computation, is easy to debug, achieves accurate positioning of the robot in a small environment, and plans the optimal path automatically.
In the first embodiment, a robot path planning method based on machine vision is provided, and correspondingly, the application also provides a robot path planning system based on machine vision. Please refer to fig. 6, which is a schematic diagram of a robot path planning system based on machine vision according to a second embodiment of the present invention. Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for relevant points.
As shown in fig. 6, there is shown a block diagram of a robot path planning system based on machine vision according to a second embodiment of the present invention, where the system includes: an acquisition module, a robot position calculation module, a connectivity graph calculation module and a shortest path calculation module,
the acquisition module is used for acquiring a manually designed road sign image;
the robot position calculating module is used for processing the road sign image, judging whether the processed image comprises a road sign or not, and calculating the position and the direction of the robot relative to the road sign image if the road sign exists;
the connected graph calculation module is used for preprocessing the global graph by adopting a free space method to obtain a basic connected graph;
the shortest path calculation module is used for calculating the shortest path of the robot by adopting an A-x algorithm.
The robot position calculation module includes an image processing submodule, which is used for binarizing the acquired road sign image and extracting edges with the Sobel operator.
The robot position calculating module comprises a landmark region detecting submodule and a robot position determining submodule,
the landmark region detection submodule is used for segmenting a landmark image according to the minimum bounding box of the connected domain of the processed image and judging a symmetry axis, a direction and a point position label of the landmark image according to the segmented landmark image;
the robot position determining submodule is used for taking the direction of the road sign image as the Y axis, judging from the whole picture the relative position and direction of the picture center with respect to the center of the road sign image, calculating the scale factor between pixel length in the captured picture and real length in the robot's plane, determining by scale conversion the robot's position and direction relative to the road sign in its plane, and calculating the robot's position and direction in the global map.
The connected graph calculation module comprises a corner detection submodule and a free space method modeling submodule,
the corner detection submodule is used for obtaining the corners of all obstacles by the global image through a Fast corner detection method,
the free space method modeling submodule is used for carrying out corner detection on the global image to determine corner coordinates and storing the corner coordinates into a corner array;
traversing the angular points, and connecting the connecting lines from the current angular point to other angular points and the vertical lines of the space boundary;
deleting the connecting line passing through the barrier;
detecting whether two external angles of each connecting line of the current corner point are both smaller than 180 degrees;
if so, adding a free connecting line for the optimal connecting line, discarding other connecting lines of the corner point, and detecting the next corner point according to the corner point array;
if not, adding a undetermined free connecting line, searching the next connecting line of the angular point, and repeatedly detecting whether two external angles of each connecting line of the current angular point are less than 180 degrees;
if all the connecting lines traversing the corner point do not have the optimal connecting line, selecting the shortest undetermined free connecting line to be added into the free connecting line, abandoning other undetermined free connecting lines, and detecting the next corner point according to the corner point array;
and after all the corner points are traversed, connecting the midpoints of the free connecting lines pairwise and deleting any of these midpoint connecting lines that pass through an obstacle, so that none of the remaining lines crosses an obstacle.
The robot path planning system based on machine vision provided by the embodiment of the present invention determines the robot's position with the help of manually made road signs, constructs a connectivity graph on the global map with the free space method, and computes the optimal path with the A* algorithm. It requires little computation, is easy to debug, achieves accurate positioning of the robot in a small environment, and plans the optimal path automatically.
As shown in fig. 7, a schematic diagram of an intelligent terminal according to a third embodiment of the present invention is provided, where the terminal includes a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, the memory is used for storing a computer program, the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method described in the first embodiment.
It should be understood that in the embodiments of the present invention, the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input devices may include a touch pad, microphone, etc., and the output devices may include a display (LCD, etc.), speakers, etc.
The memory may include both read-only memory and random access memory, and provides instructions and data to the processor. The portion of memory may also include non-volatile random access memory. For example, the memory may also store device type information.
In a specific implementation, the processor, the input device, and the output device described in the embodiments of the present invention may execute the implementation described in the method embodiments provided in the embodiments of the present invention, and may also execute the implementation described in the system embodiments in the embodiments of the present invention, which is not described herein again.
The invention also provides an embodiment of a computer-readable storage medium, in which a computer program is stored, which computer program comprises program instructions that, when executed by a processor, cause the processor to carry out the method described in the above embodiment.
The computer readable storage medium may be an internal storage unit of the terminal described in the foregoing embodiment, for example, a hard disk or a memory of the terminal. The computer readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the terminal. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the terminal. The computer-readable storage medium is used for storing the computer program and other programs and data required by the terminal. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate this interchangeability of hardware and software, the foregoing description has described the components and steps of each example in general terms of their functions. Whether such functions are implemented as hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a logical functional division, and other divisions are possible in practice; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications and substitutions do not take the corresponding technical solutions outside the scope of the claims and description of the present invention.

Claims (10)

1. A robot path planning method based on machine vision, characterized by comprising the following steps:
acquiring a manually designed road sign image;
performing image processing on the road sign image, judging whether the processed image includes a road sign, and, if a road sign is present, calculating the position and direction of the robot relative to the road sign image;
preprocessing the global map by a free space method to obtain a basic connected graph;
and calculating the shortest path of the robot by using an A* algorithm.
2. The method of claim 1, wherein the method of image processing the road sign image comprises: performing binarization on the acquired road sign image and extracting edges with a Sobel operator.
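The preprocessing in claim 2 can be sketched with plain NumPy; the fixed threshold of 128 and the function names are illustrative assumptions, not part of the claimed method, and a real system might use an adaptive threshold instead:

```python
import numpy as np

def binarize(gray, thresh=128):
    # Fixed-threshold binarization; the threshold value is an assumption
    return np.where(gray >= thresh, 255, 0).astype(np.uint8)

def sobel_edges(gray):
    # 3x3 Sobel kernels for the horizontal and vertical gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    g = gray.astype(float)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    for i in range(1, g.shape[0] - 1):
        for j in range(1, g.shape[1] - 1):
            patch = g[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)  # gradient magnitude; edges are its large values
```

A vertical brightness step in the input produces a strong response one column wide in the Sobel magnitude, which is the edge map the later landmark segmentation would operate on.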
3. The method of claim 1, wherein the specific method of calculating the position and direction of the robot relative to the road sign image comprises:
segmenting the road sign image according to the minimum bounding box of the connected domain of the processed image; judging the symmetry axis, direction, and point labels of the road sign from the segmented image; taking the direction of the road sign as the Y axis, judging from the whole image the position and direction of the image center relative to the road sign center; calculating the scale ratio between the pixel length in the captured image and the corresponding length in the robot's plane; determining, through scale conversion, the position and direction of the robot relative to the road sign in the robot's plane; and calculating the position and direction of the robot in the global map.
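The scale conversion described above can be illustrated as follows. The frame convention (road sign direction taken as the Y axis), the function names, and the length-units-per-pixel scale parameter are assumptions for illustration only:

```python
import math

def relative_pose(img_center, lm_center, lm_dir_rad, units_per_px):
    # Pixel offset of the image centre (taken as the robot's position)
    # from the road sign centre
    dx = img_center[0] - lm_center[0]
    dy = img_center[1] - lm_center[1]
    # Rotate the offset into the road sign frame, whose Y axis is the
    # road sign direction
    c, s = math.cos(-lm_dir_rad), math.sin(-lm_dir_rad)
    x_lm = c * dx - s * dy
    y_lm = s * dx + c * dy
    # Scale conversion: image pixels -> robot-plane length units
    return x_lm * units_per_px, y_lm * units_per_px

def global_pose(rel_xy, lm_global_xy, lm_global_dir_rad):
    # Place the relative offset into the global map using the road
    # sign's known global position and direction
    c, s = math.cos(lm_global_dir_rad), math.sin(lm_global_dir_rad)
    gx = lm_global_xy[0] + c * rel_xy[0] - s * rel_xy[1]
    gy = lm_global_xy[1] + s * rel_xy[0] + c * rel_xy[1]
    return gx, gy
```

For example, an image-centre offset of 10 pixels at 2 units per pixel yields a 20-unit displacement from the road sign, which is then transformed into the global frame with the road sign's known pose.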
4. The method of claim 1, wherein the specific method for preprocessing the global map by the free space method to obtain the basic connected graph comprises:
detecting the corners of all obstacles in the global map with the FAST corner detection method, and processing the connected domains of the global map to obtain all obstacle corners and their corresponding labels;
performing corner detection on the global map to determine the corner coordinates, and storing them in a corner array;
traversing the corners, drawing connecting lines from the current corner to the other corners and perpendiculars to the space boundary;
deleting every connecting line that passes through an obstacle;
detecting whether the two exterior angles of each connecting line of the current corner are both smaller than 180 degrees;
if so, adding the line to the free connecting lines as an optimal connecting line, discarding the other connecting lines of that corner, and moving on to the next corner in the corner array;
if not, marking the line as a pending free connecting line, taking the next connecting line of the corner, and repeating the exterior-angle check;
if none of the corner's connecting lines is an optimal connecting line, adding the shortest pending free connecting line to the free connecting lines, discarding the other pending lines, and moving on to the next corner in the corner array;
and, after all corners have been traversed, connecting the midpoints of the free connecting lines pairwise and deleting those midpoint connections that pass through obstacles, so that all remaining connections avoid the obstacles.
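The deletion step above, removing every connecting line that passes through an obstacle, reduces to a segment-intersection test against the obstacle outlines. A minimal sketch, treating obstacles as simple polygons and ignoring the degenerate case of a line that merely grazes an edge (both the function names and the polygon representation are assumptions):

```python
def segments_intersect(p1, p2, q1, q2):
    # Segments properly intersect when each segment's endpoints lie on
    # opposite sides of the other segment (opposite cross-product signs)
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(q1, q2, p1)
    d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1)
    d4 = cross(p1, p2, q2)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def line_hits_obstacle(a, b, polygon):
    # A candidate connecting line is discarded if it crosses any edge
    # of an obstacle polygon (the deletion step of claim 4)
    n = len(polygon)
    return any(segments_intersect(a, b, polygon[i], polygon[(i + 1) % n])
               for i in range(n))
```

Running this test for every candidate line against every obstacle polygon leaves only the collision-free lines from which the free connecting lines are then selected.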
5. The method of claim 4, wherein the specific method of calculating the shortest path of the robot using the A* algorithm comprises:
determining the pixel position of the robot in the global map as the starting point; finding the connected-graph nodes that the starting point can reach directly without crossing an obstacle; for each adjacent node, calculating the actual movement cost and the estimated cost from that node to the end point, and summing them to obtain the total cost estimate; taking the node with the smallest total cost estimate as the next node; and repeating the total-cost calculation for successive nodes until the end point is reached, thereby obtaining the shortest path of the robot.
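The cost bookkeeping in claim 5 is the standard A* recurrence f(n) = g(n) + h(n), with g the actual movement cost and h the estimated cost to the end point. A minimal sketch over a hypothetical connected graph, using straight-line distance as the estimate (the node/edge dictionary representation is an assumption):

```python
import heapq
import math

def a_star(nodes, edges, start, goal):
    # nodes: {name: (x, y)}; edges: {name: [reachable neighbour names]}
    def h(n):  # estimated cost: straight-line distance to the goal
        (x1, y1), (x2, y2) = nodes[n], nodes[goal]
        return math.hypot(x2 - x1, y2 - y1)

    g = {start: 0.0}          # actual movement cost from the start
    parent = {start: None}    # back-pointers for path reconstruction
    open_heap = [(h(start), start)]
    closed = set()
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        if cur in closed:
            continue
        closed.add(cur)
        for nb in edges[cur]:
            step = math.hypot(nodes[nb][0] - nodes[cur][0],
                              nodes[nb][1] - nodes[cur][1])
            tentative = g[cur] + step
            if nb not in g or tentative < g[nb]:
                g[nb] = tentative
                parent[nb] = cur
                # total cost estimate = actual cost + estimated cost
                heapq.heappush(open_heap, (tentative + h(nb), nb))
    return None  # no obstacle-free route exists
```

Because the straight-line heuristic never overestimates the remaining distance, the first time the goal is popped from the heap the reconstructed path is the shortest one, which matches the repeated minimum-total-cost selection described in the claim.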
6. A robot path planning system based on machine vision, comprising: an acquisition module, a robot position calculation module, a connected graph calculation module, and a shortest path calculation module, wherein
the acquisition module is used for acquiring a manually designed road sign image;
the robot position calculating module is used for processing the road sign image, judging whether the processed image comprises a road sign or not, and calculating the position and the direction of the robot relative to the road sign image if the road sign exists;
the connected graph calculation module is used for preprocessing the global map by a free space method to obtain a basic connected graph;
the shortest path calculation module is used for calculating the shortest path of the robot by using an A* algorithm.
7. The system of claim 6, wherein the robot position calculation module includes a landmark region detection sub-module and a robot position determination sub-module,
the landmark region detection submodule is used for segmenting a landmark image according to the minimum bounding box of the connected domain of the processed image and judging a symmetry axis, a direction and a point position label of the landmark image according to the segmented landmark image;
the robot position determining submodule is used for, taking the direction of the road sign as the Y axis, judging from the whole image the position and direction of the image center relative to the road sign center; calculating the scale ratio between the pixel length in the captured image and the corresponding length in the robot's plane; determining, through scale conversion, the position and direction of the robot relative to the road sign in the robot's plane; and calculating the position and direction of the robot in the global map.
8. The system of claim 6, wherein the connected graph calculation module includes a corner detection submodule and a free space method modeling submodule,
the corner detection submodule is used for obtaining the corners of all obstacles in the global map with the FAST corner detection method;
the free space method modeling submodule is used for performing corner detection on the global map to determine the corner coordinates and storing them in a corner array;
traversing the corners, drawing connecting lines from the current corner to the other corners and perpendiculars to the space boundary;
deleting every connecting line that passes through an obstacle;
detecting whether the two exterior angles of each connecting line of the current corner are both smaller than 180 degrees;
if so, adding the line to the free connecting lines as an optimal connecting line, discarding the other connecting lines of that corner, and moving on to the next corner in the corner array;
if not, marking the line as a pending free connecting line, taking the next connecting line of the corner, and repeating the exterior-angle check;
if none of the corner's connecting lines is an optimal connecting line, adding the shortest pending free connecting line to the free connecting lines, discarding the other pending lines, and moving on to the next corner in the corner array;
and, after all corners have been traversed, connecting the midpoints of the free connecting lines pairwise and deleting those midpoint connections that pass through obstacles, so that all remaining connections avoid the obstacles.
9. An intelligent terminal comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, the memory being adapted to store a computer program, the computer program comprising program instructions, characterized in that the processor is configured to invoke the program instructions to perform the method steps according to any of claims 1 to 5.
10. A computer-readable storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to carry out the method steps according to any one of claims 1 to 5.
CN202010716297.7A 2020-07-23 2020-07-23 Robot path planning method, system, terminal and medium based on machine vision Pending CN111857143A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010716297.7A CN111857143A (en) 2020-07-23 2020-07-23 Robot path planning method, system, terminal and medium based on machine vision

Publications (1)

Publication Number Publication Date
CN111857143A true CN111857143A (en) 2020-10-30

Family

ID=72950309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010716297.7A Pending CN111857143A (en) 2020-07-23 2020-07-23 Robot path planning method, system, terminal and medium based on machine vision

Country Status (1)

Country Link
CN (1) CN111857143A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101660908A (en) * 2009-09-11 2010-03-03 天津理工大学 Visual locating and navigating method based on single signpost
CN102135429A (en) * 2010-12-29 2011-07-27 东南大学 Robot indoor positioning and navigating method based on vision
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Power robot based binocular vision navigation system and method
KR20150067483A (en) * 2013-12-10 2015-06-18 고려대학교 산학협력단 Method for localization of mobile robot using artificial landmark
CN105841687A (en) * 2015-01-14 2016-08-10 上海智乘网络科技有限公司 Indoor location method and indoor location system
CN106681334A (en) * 2017-03-13 2017-05-17 东莞市迪文数字技术有限公司 Automatic-guided-vehicle dispatching control method based on genetic algorithm
CN106969766A (en) * 2017-03-21 2017-07-21 北京品创智能科技有限公司 A kind of indoor autonomous navigation method based on monocular vision and Quick Response Code road sign
CN106970620A (en) * 2017-04-14 2017-07-21 安徽工程大学 A kind of robot control method based on monocular vision
US20170212521A1 (en) * 2016-01-27 2017-07-27 Hon Hai Precision Industry Co., Ltd. Computer vision positioning system and method for the same
CN107992038A (en) * 2017-11-28 2018-05-04 广州智能装备研究院有限公司 A kind of robot path planning method
CN110006430A (en) * 2019-03-26 2019-07-12 智慧航海(青岛)科技有限公司 A kind of optimization method of Path Planning
CN110763247A (en) * 2019-10-21 2020-02-07 上海海事大学 Robot path planning method based on combination of visual algorithm and greedy algorithm

Similar Documents

Publication Publication Date Title
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN108885791B (en) Ground detection method, related device and computer readable storage medium
CN112734852B (en) Robot mapping method and device and computing equipment
US10949999B2 (en) Location determination using street view images
CN110587597B (en) SLAM closed loop detection method and detection system based on laser radar
CN111210477A (en) Method and system for positioning moving target
WO2021004416A1 (en) Method and apparatus for establishing beacon map on basis of visual beacons
CN112198878B (en) Instant map construction method and device, robot and storage medium
CN109685764B (en) Product positioning method and device and terminal equipment
CN112017236A (en) Method and device for calculating position of target object based on monocular camera
CN114543819A (en) Vehicle positioning method and device, electronic equipment and storage medium
CN110097064B (en) Picture construction method and device
CN111191557A (en) Mark identification positioning method, mark identification positioning device and intelligent equipment
CN110673607A (en) Feature point extraction method and device in dynamic scene and terminal equipment
CN108615025B (en) Door identification and positioning method and system in home environment and robot
CN111857143A (en) Robot path planning method, system, terminal and medium based on machine vision
US20230030660A1 (en) Vehicle positioning method and system for fixed parking scenario
EP3605463B1 (en) Crossing point detector, camera calibration system, crossing point detection method, camera calibration method, and recording medium
CN111860084B (en) Image feature matching and positioning method and device and positioning system
CN114463717A (en) Obstacle position judgment method and system, electronic device and storage medium
CN113674358A (en) Method and device for calibrating radar vision equipment, computing equipment and storage medium
CN117576200B (en) Long-period mobile robot positioning method, system, equipment and medium
CN113899357B (en) Incremental mapping method and device for visual SLAM, robot and readable storage medium
David: Detection of building facades in urban environments
CN114155508B (en) Road change detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 266400 No. 77, Lingyan Road, LINGSHANWEI sub district office, Huangdao District, Qingdao City, Shandong Province

Applicant after: Qingdao Issa Technology Co.,Ltd.

Applicant after: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.

Applicant after: Anhui Issa Data Technology Co.,Ltd.

Address before: 100020 room 108, 1 / F, building 17, yard 6, Jingshun East Street, Chaoyang District, Beijing

Applicant before: BEIJING YISA TECHNOLOGY Co.,Ltd.

Applicant before: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.

Applicant before: Anhui Issa Data Technology Co.,Ltd.

Address after: 266400 No. 77, Lingyan Road, LINGSHANWEI sub district office, Huangdao District, Qingdao City, Shandong Province

Applicant after: Issa Technology Co.,Ltd.

Applicant after: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.

Applicant after: Anhui Issa Data Technology Co.,Ltd.

Address before: 266400 No. 77, Lingyan Road, LINGSHANWEI sub district office, Huangdao District, Qingdao City, Shandong Province

Applicant before: Qingdao Issa Technology Co.,Ltd.

Applicant before: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.

Applicant before: Anhui Issa Data Technology Co.,Ltd.
