CN117824662B - Robot path planning method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117824662B
CN117824662B (granted from application CN202410224399.5A)
Authority
CN
China
Prior art keywords
shadow
illumination
robot
path
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410224399.5A
Other languages
Chinese (zh)
Other versions
CN117824662A (en)
Inventor
周士博
黄占阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ruichi Laser Shenzhen Co ltd
Original Assignee
Ruichi Laser Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ruichi Laser Shenzhen Co ltd
Priority claimed from CN202410224399.5A
Publication of application: CN117824662A
Application granted
Publication of grant: CN117824662B
Legal status: Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/38: Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804: Creation or updating of map data
    • G01C21/3833: Creation or updating of map data characterised by the source of data
    • G01C21/3837: Data obtained from a single source

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to the technical field of intelligent robots and discloses a robot path planning method, apparatus, device, and storage medium. The method comprises the following steps: detecting, based on a preset shadow detection algorithm, the space vector from the shadow-instance center to the object-instance center in a preset image; determining the illumination direction angle between that space vector and a reference space vector; and determining the direction perpendicular to the illumination from the illumination direction angle, then planning a path for the robot along that perpendicular direction based on the already executed path trajectory. By detecting the shadow-to-object space vector with deep learning, determining the angle between the shadow vector and the reference space vector to compute the illumination direction, and planning the robot's path perpendicular to the illumination direction, the invention effectively avoids sensor failure and loss of robot function in backlit scenes and improves the robot's working efficiency.

Description

Robot path planning method, device, equipment and storage medium
Technical Field
The present invention relates to the field of intelligent robots, and in particular, to a method, an apparatus, a device, and a storage medium for planning a robot path.
Background
In scenarios such as robotic cleaning, lawn maintenance, and ground construction, the footprint along the robot's running path is usually required to fully cover the ground of an area, that is, full-coverage path planning. A common approach to full-coverage path planning is the artificial potential field method, which guides the search direction of the planner by adding some form of artificial potential field to the planned area. Two common full-coverage planning patterns are the bow-shaped (boustrophedon) pattern and the 回-shaped (concentric) pattern.
During operation, robots usually rely on image sensors for working-environment sensing, real-time localization, and map construction. However, existing image sensors such as structured light, time-of-flight (TOF), and binocular cameras cannot fully adapt to outdoor backlit environments, so the sensors fail and the robot's functions are affected. Conventional full-coverage path planning, whether bow-shaped or 回-shaped, neither senses the illumination direction nor takes it as a reference, and therefore cannot avoid the sensor failure caused by backlighting.
Disclosure of Invention
The main object of the present invention is to provide a robot path planning method, apparatus, device, and storage medium, aiming to solve the technical problem in the prior art that a robot does not consider the illumination direction during operation, so that its sensors fail in backlit environments.
In order to achieve the above object, the present invention provides a robot path planning method, the method comprising the steps of:
Inputting a preset image into a preset shadow detection algorithm to perform shadow detection, and obtaining a space vector from the center of a shadow instance to the center of an object instance in the preset image;
Determining an illumination direction angle between the space vector and a reference space vector, the illumination direction angle representing the angle between the shadow instance and the illumination direction;
And determining a direction perpendicular to illumination according to the illumination direction angle, and planning a path for the robot along the direction perpendicular to illumination based on the executed path track.
Optionally, before determining the illumination direction angle between the spatial vector and the reference spatial vector, the method further includes:
And acquiring a reference shadow image of the preset object shot at a preset time point, inputting the reference shadow image into a preset shadow detection algorithm for shadow detection, and obtaining a reference space vector from the center of a shadow instance to the center of an object instance in the reference shadow image.
Optionally, the determining a direction perpendicular to the illumination according to the illumination direction angle, and planning a path for the robot along the direction perpendicular to the illumination based on the performed path trajectory includes:
determining a direction perpendicular to illumination according to the illumination direction angle;
judging whether the current direction in the executed path track is the direction perpendicular to illumination;
If not, correcting the current direction to be the direction perpendicular to illumination, and obtaining a corrected path so that the robot works according to the corrected path.
Optionally, after the correcting of the current direction to the direction perpendicular to the illumination and obtaining of the corrected path so that the robot works according to the corrected path, the method further includes:
And recording the number of path direction correction times, wherein the number of path direction correction times is used for constructing a default working map.
Optionally, after the recording the number of times of path direction correction, the method further includes:
and periodically constructing a default working map based on the corrected path according to a preset cycle time and the corrected path direction times, so that the robot works according to the default working map.
Optionally, before the inputting the preset image into the preset shadow detection algorithm to perform shadow detection, obtaining a space vector from a shadow instance center to an object instance center in the preset image, the method further includes:
preprocessing the acquired shadow image training set to obtain a processed image training set;
Labeling the association pairs of the shadow examples and the object examples in the processed image training set to obtain a labeled image training set;
Training the shadow detection algorithm to be trained according to the noted image training set to obtain a preset shadow detection algorithm.
Optionally, training the shadow detection algorithm to be trained according to the noted image training set to obtain a preset shadow detection algorithm, including:
Training a shadow detection algorithm to be trained according to the noted image training set to obtain a trained shadow detection algorithm;
performing performance evaluation on the trained shadow detection algorithm according to a SOAP evaluation function;
and when the performance evaluation result meets the preset condition, obtaining a preset shadow detection algorithm.
In addition, in order to achieve the above object, the present invention also provides a robot path planning apparatus, including:
The space vector determining module is used for inputting a preset image into a preset shadow detection algorithm to detect shadows and obtaining a space vector from the center of a shadow instance to the center of an object instance in the preset image;
The illumination direction determining module is used for determining an illumination direction angle between the space vector and the reference space vector, and the illumination direction angle represents an included angle between the center of the shadow instance and illumination;
And the path planning module is used for determining a direction perpendicular to illumination according to the illumination direction angle and planning a path for the robot along the direction perpendicular to illumination based on the executed path track.
In addition, to achieve the above object, the present invention also proposes a robot path planning apparatus, the apparatus comprising: a memory, a processor and a robot path planning program stored on the memory and executable on the processor, the robot path planning program configured to implement the steps of the robot path planning method as described above.
In addition, in order to achieve the above object, the present invention also proposes a storage medium having stored thereon a robot path planning program which, when executed by a processor, implements the steps of the robot path planning method as described above.
The invention discloses: inputting a preset image into a preset shadow detection algorithm for shadow detection, obtaining the space vector from the shadow-instance center to the object-instance center in the preset image; determining the illumination direction angle between the space vector and the reference space vector, the illumination direction angle representing the angle between the shadow instance and the illumination; and determining the direction perpendicular to the illumination from the illumination direction angle, with a path planned for the robot along that direction based on the already executed path trajectory. By detecting the shadow-to-object space vector with deep learning, determining the angle between the shadow vector and the reference space vector to compute the illumination direction, and planning the robot's path perpendicular to that direction, the invention effectively avoids sensor failure and loss of robot function in backlit scenes and improves the robot's working efficiency.
Drawings
FIG. 1 is a schematic diagram of a robot path planning apparatus for a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flow chart of a first embodiment of a robot path planning method according to the present invention;
FIG. 3 is a schematic diagram of an arcuate full coverage path plan;
FIG. 4 is a schematic diagram of a full coverage path plan in the shape of a Chinese character 'Hui';
FIG. 5 is a schematic flow chart of the robot path planning method of the present invention for obtaining space vectors;
FIG. 6 is a schematic diagram of a planned path trajectory of the robot path planning method of the present invention;
FIG. 7 is a flow chart of a second embodiment of a robot path planning method according to the present invention;
FIG. 8 is a flow chart of a third embodiment of a robot path planning method according to the present invention;
FIG. 9 is a schematic diagram of an image training set labeled by the robot path planning method of the present invention;
Fig. 10 is a block diagram of a first embodiment of a robot path planning apparatus according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a robot path planning apparatus in a hardware running environment according to an embodiment of the present invention.
As shown in fig. 1, the robot path planning apparatus may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, a memory 1005. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display, an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a Wireless interface (e.g., a Wireless-Fidelity (WI-FI) interface). The Memory 1005 may be a high-speed random access Memory (Random Access Memory, RAM) or a stable nonvolatile Memory (NVM), such as a disk Memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the robotic path planning apparatus, and may include more or fewer components than illustrated, or may combine certain components, or may be a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a robot path planning program may be included in the memory 1005 as one type of storage medium.
In the robot path planning apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 in the robot path planning apparatus of the present invention may be disposed in the robot path planning apparatus, and the robot path planning apparatus invokes a robot path planning program stored in the memory 1005 through the processor 1001 and executes the robot path planning method provided by the embodiment of the present invention.
An embodiment of the invention provides a robot path planning method, referring to fig. 2, fig. 2 is a flow chart of a first embodiment of the robot path planning method of the invention.
In this embodiment, the robot path planning method includes the following steps:
Step S10: and inputting a preset image into a preset shadow detection algorithm to perform shadow detection, and obtaining a space vector from the center of a shadow instance to the center of an object instance in the preset image.
It should be noted that the execution body of the method in this embodiment may be a computing service device with robot path planning, network communication, and program-running capabilities, such as a robot path planning apparatus, or a robot equipped with such an apparatus. This embodiment and the following embodiments are described taking the robot path planning apparatus as an example.
It is to be understood that the robot may be an outdoor working robot applied to scenarios such as sanitation cleaning, lawn maintenance, or ground construction, for example a mower or an outdoor cleaning robot. Two common full-coverage path planning patterns are the bow-shaped pattern and the 回-shaped pattern. For bow-shaped full-coverage planning, refer to FIG. 3, a schematic diagram in which a single-sided potential field added to the left-hand bow-shaped plan produces the right-hand bow-shaped path. For 回-shaped full-coverage planning, refer to FIG. 4, a schematic diagram in which the potential field added to the left-hand 回-shaped plan produces the right-hand 回-shaped path.
It should be appreciated that, to solve sensor failure and loss of robot function in backlit environments, path planning can rely on autonomous sensing of the illumination direction over the work area (e.g., a courtyard lawn) and on navigation referenced to that direction. Shadow detection and illumination direction determination can be performed with a deep learning method. An object that casts a shadow under illumination is selected in the robot's working area and photographed along the illumination direction to obtain the preset image; shadow detection is then performed on the preset image with the preset shadow detection algorithm, and the space vector (also called the offset vector) B from the shadow-instance center to the object-instance center is computed. For the network inference process, refer to FIG. 5, a schematic flowchart of obtaining the space vector in the robot path planning method of the present invention. The preset shadow detection algorithm can be chosen as required, for example Instance Shadow Detection with a Single-Stage Detector or Instance Shadow Detection; the essential idea is to learn the association between a shadow and its corresponding object.
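As an illustrative sketch of this step (not the patent's actual implementation; the boolean-mask input format and the centroid-based definition of "center" are assumptions), the offset vector B can be computed from a predicted shadow/object mask pair:

```python
import numpy as np

def offset_vector(shadow_mask: np.ndarray, object_mask: np.ndarray) -> np.ndarray:
    """Vector from the shadow-instance center to the object-instance center.

    Masks are boolean HxW arrays; each center is taken as the centroid
    of the mask's True pixels, in (x, y) image coordinates.
    """
    def centroid(mask: np.ndarray) -> np.ndarray:
        ys, xs = np.nonzero(mask)
        return np.array([xs.mean(), ys.mean()])

    return centroid(object_mask) - centroid(shadow_mask)
```

In a full pipeline these masks would come from the instance shadow detection network's output.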
Step S20: an illumination direction angle between the spatial vector and a reference spatial vector is determined, the illumination direction angle representing an angle between the shadow instance center and illumination.
It should be appreciated that a reference space vector may be chosen as a standard; it can be selected by calibration according to the actual situation, for example the space vector corresponding to an object under direct, unshadowed illumination. Then, with the reference space vector A and the measured space vector B, the included angle θ between the shadow instance and the illumination is calculated from the standard angle-between-vectors formula: cos θ = (A · B) / (|A| |B|), i.e., θ = arccos((A · B) / (|A| |B|)).
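The angle computation can be sketched as follows (the function name and the clamping of the cosine to [-1, 1], which guards against floating-point round-off, are illustrative details, not from the patent):

```python
import numpy as np

def illumination_angle(a, b) -> float:
    """Angle theta (radians) between reference vector A and measured vector B,
    via cos(theta) = (A . B) / (|A| |B|)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # clamp to the valid arccos domain to absorb round-off error
    return float(np.arccos(np.clip(cos_t, -1.0, 1.0)))
```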
Further, to calculate the illumination direction more accurately, a reference object may be selected in advance. The reference object should be at a position where its shadow is smallest under the light, that is, aligned with the light direction; for example, if the light is sunlight, a photograph of an object taken at noon along the direction of the sunlight can be used as the reference shadow image from which the reference space vector is obtained. Thus, before the step S20, the method further includes: acquiring a reference shadow image of a preset object shot at a preset time point, and inputting the reference shadow image into the preset shadow detection algorithm for shadow detection, to obtain the reference space vector from the shadow-instance center to the object-instance center in the reference shadow image.
It will be appreciated that a preset object within a preset time period may be selected as a standard, for example, if the robot works outdoors in daytime and the light is sunlight, but the sunlight has different directions at different times, a photo of the preset object may be taken along the sunlight during a time period when the shadow of the object is minimum, for example, a telegraph pole at noon may be taken as an object, and a photo is taken along the sunlight as a reference shadow image.
For example, to calculate the afternoon illumination direction from an object, the angle between the afternoon shadow-object offset vector and the noon shadow-object offset vector can be calculated, i.e., the offset angle of the afternoon illumination relative to the noon illumination direction. The offset vector between the noon shadow and the object instance is the reference vector and can be obtained by calibration. Both the space vector and the reference space vector are obtained by taking a photograph and inputting it into the preset shadow detection algorithm for shadow detection.
Step S30: and determining a direction perpendicular to illumination according to the illumination direction angle, and planning a path for the robot along the direction perpendicular to illumination based on the executed path track.
It should be understood that the path referenced to the illumination direction may be generated as follows: obtain the illumination direction angle parameter θ output by the neural network model together with the already executed coverage path trajectory, determine the illumination direction from the angle θ, and continue the coverage path along the direction perpendicular to the illumination, i.e., the heading whose cosine with respect to the illumination direction satisfies cos θ = 0. For the finally generated path, refer to FIG. 6, a schematic diagram of the planned path trajectory of the robot path planning method of the present invention. In this way, sensor failure and loss of robot function in backlit scenes can be effectively avoided, and the method can be widely applied in products such as mowers and outdoor cleaning robots.
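There are two headings perpendicular to any illumination direction; the following sketch picks the one closest to the robot's current heading (this tie-breaking rule is an assumption, not stated in the patent):

```python
import math

def perpendicular_heading(illum_angle: float, current_heading: float) -> float:
    """Of the two headings perpendicular to the illumination direction,
    return the one closest to the robot's current heading (all in radians)."""
    candidates = (illum_angle + math.pi / 2, illum_angle - math.pi / 2)

    def ang_diff(a: float, b: float) -> float:
        # smallest absolute angular difference, wrapped to [0, pi]
        return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

    return min(candidates, key=lambda h: ang_diff(h, current_heading))
```

The returned heading satisfies the cos θ = 0 condition relative to the illumination direction.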
In this embodiment, a preset image is input into a preset shadow detection algorithm for shadow detection, obtaining the space vector from the shadow-instance center to the object-instance center in the preset image; the illumination direction angle between the space vector and the reference space vector is determined, the illumination direction angle representing the angle between the shadow instance and the illumination; and the direction perpendicular to the illumination is determined from the illumination direction angle, with a path planned for the robot along that direction based on the already executed path trajectory. By detecting the shadow-to-object space vector with deep learning, determining the angle between the shadow vector and the reference space vector to compute the illumination direction, and then planning the robot's path perpendicular to the illumination direction, this embodiment effectively avoids sensor failure and loss of robot function in backlit scenes and improves the robot's working efficiency.
Referring to fig. 7, fig. 7 is a flowchart of a second embodiment of the robot path planning method according to the present invention.
Further, to avoid the sensor failure and loss of robot function caused by the robot working against the light, whether the robot's current path direction is perpendicular to the illumination direction can be detected in real time; if not, the current direction is corrected to the direction perpendicular to the illumination, so that the robot operates along the corrected path.
Therefore, based on the first embodiment, in the present embodiment, the step S30 includes:
step S301: and determining a direction perpendicular to the illumination according to the illumination direction angle.
Step S302: and judging whether the current direction in the executed path track is the direction perpendicular to illumination.
Step S303: if not, correcting the current direction to be the direction perpendicular to illumination, and obtaining a corrected path so that the robot works according to the corrected path.
It should be understood that whether the robot's current path direction is perpendicular to the illumination direction can be detected in real time. If it is, the robot operates along the original path; if not, the current direction is corrected to the direction perpendicular to the illumination, so that the robot works according to the corrected path.
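A minimal sketch of this real-time check (the 2° tolerance and the choice of which perpendicular heading to snap to are assumptions made for illustration):

```python
import math

def correct_path_direction(current_heading: float, illum_angle: float,
                           tol: float = math.radians(2.0)):
    """Return (heading, corrected): snap the heading perpendicular to the
    illumination direction if it deviates by more than `tol` radians."""
    # fold the relative heading into [0, pi); perpendicular means rel == pi/2
    rel = (current_heading - illum_angle) % math.pi
    deviation = abs(rel - math.pi / 2)
    if deviation <= tol:
        return current_heading, False          # already perpendicular: keep path
    return illum_angle + math.pi / 2, True     # snap to a perpendicular heading
```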
Further, to plan paths better during the robot's operation, each path correction can be recorded. A larger number of corrections indicates a more complex path, so when the default map is subsequently constructed, it is rebuilt more frequently, ensuring that the path stays more accurately perpendicular to the illumination. Therefore, after the step S303, the method further includes: recording the number of path direction corrections, the number being used for constructing the default working map.
It can be understood that, in order to provide a history data basis for the subsequent path planning, the correction times can be recorded when each path is corrected, and when the default working map is subsequently constructed, the constructed frequency can be dynamically adjusted according to the magnitude of the times of the path direction correction.
Further, robust map construction can greatly reduce the number of path planning corrections and the resource consumption. The default working map can therefore be constructed periodically based on the paths corrected with reference to the illumination direction, the construction period being governed by two indicators, the cycle time and the number of path corrections made during planning, which are adjusted dynamically. Therefore, after the step S303, the method further includes: periodically constructing a default working map based on the corrected path according to a preset cycle time and the number of path direction corrections, so that the robot works according to the default working map.
It is easy to understand that, since robust map construction greatly reduces correction counts and resource consumption, the default working map can be built periodically from the illumination-referenced corrected path; the construction period can be driven by indicators such as the cycle time (for example, weekly or monthly) and the number of path corrections during planning, and can be adjusted dynamically according to these indicators.
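One possible dynamic adjustment rule is sketched below (the halving rule, the threshold, and the day-based units are illustrative assumptions; the patent only states that the period depends on cycle time and correction count):

```python
def rebuild_interval_days(base_days: int, corrections: int,
                          threshold: int = 10, min_days: int = 1) -> int:
    """Shorten the map-rebuild period when the path needed many corrections.

    The interval is halved for each multiple of `threshold` corrections,
    but never drops below `min_days`.
    """
    if corrections <= threshold:
        return base_days
    factor = corrections // threshold
    return max(min_days, base_days // (2 ** factor))
```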
This embodiment discloses: determining the direction perpendicular to the illumination from the illumination direction angle; judging whether the current direction in the executed path trajectory is perpendicular to the illumination; and, if not, correcting the current direction to the perpendicular direction to obtain a corrected path along which the robot works. Because whether the robot's current path direction is perpendicular to the illumination is detected in real time and corrected whenever it is not, the sensor failure and loss of robot function caused by working against the light can be avoided.
Referring to fig. 8, fig. 8 is a flowchart of a third embodiment of the robot path planning method according to the present invention.
Further, to perform shadow detection more accurately and improve the accuracy of the obtained space vector, training samples can be collected in advance, the shadow-instance/object-instance association pairs in the samples can be annotated, and the shadow detection algorithm can then be trained on the annotated image training set to obtain a high-precision preset shadow detection algorithm.
Therefore, based on the first embodiment, in this embodiment, before the step S10, the method further includes:
Step S01: preprocessing the obtained shadow image training set to obtain a processed image training set.
It should be appreciated that photographs of various shadow-casting objects under illumination, such as houses, trees, animals, or utility poles, may be collected as the shadow image training set. The image data is then preprocessed (data augmentation, cropping, scaling, etc.) so that the dataset meets the input requirements of the shadow detection algorithm.
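A minimal numpy-only sketch of such a preprocessing step (the center-crop/resize/flip pipeline and the target size are assumptions; a real pipeline would typically use an image library and richer augmentation):

```python
import numpy as np

def preprocess(img: np.ndarray, size=(256, 256), hflip: bool = False) -> np.ndarray:
    """Center-crop to a square, nearest-neighbour resize, optional horizontal flip."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    img = img[top:top + s, left:left + s]          # square center crop
    ys = np.arange(size[0]) * s // size[0]          # nearest-neighbour row indices
    xs = np.arange(size[1]) * s // size[1]          # nearest-neighbour column indices
    img = img[ys][:, xs]
    if hflip:
        img = img[:, ::-1]                          # horizontal-flip augmentation
    return img
```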
Step S02: and marking the association pairs of the shadow examples and the object examples in the processed image training set to obtain a marked image training set.
It can be understood that the processed image training set can be annotated to build a shadow-instance/object-instance association dataset. Specifically, each image is annotated with shadow-object association pairs, including a shadow instance mask, an object instance mask, and a shadow-object association mask. For the annotated instance pairs, refer to FIG. 9, a schematic diagram of the image training set annotated by the robot path planning method of the present invention.
Step S03: training the shadow detection algorithm to be trained according to the noted image training set to obtain a preset shadow detection algorithm.
It should be appreciated that RGB images containing annotated shadow-instance/object-instance association pairs may be input to the shadow detection algorithm to be trained, and the model trained, to obtain a preset shadow detection algorithm usable for accurate shadow detection.
Further, in order to obtain a preset shadow detection algorithm with higher accuracy and performance, the model can be evaluated during training through a SOAP evaluation function, and an algorithm whose evaluation result meets a preset condition is used as the preset shadow detection algorithm for path planning. Thus, the step S03 includes: training the shadow detection algorithm to be trained according to the labeled image training set to obtain a trained shadow detection algorithm; performing performance evaluation on the trained shadow detection algorithm according to the SOAP evaluation function; and when the performance evaluation result meets the preset condition, taking the evaluated algorithm as the preset shadow detection algorithm.
It should be appreciated that, in order to obtain a preset shadow detection algorithm with higher accuracy and performance, the model may be evaluated during training through a SOAP evaluation function. The SOAP evaluation function is a method for evaluating the performance of a machine learning model and covers four indicators: accuracy (S), recall (O), precision (A), and F1 score (P). The four indicators measure the performance of the model in different respects, and combining them yields a more comprehensive evaluation. Specifically, a plurality of trained shadow detection algorithms are obtained through parameter adjustment over multiple training runs, the corresponding evaluation results are calculated based on the SOAP evaluation function, and the algorithm with the best performance is finally selected as the preset shadow detection algorithm.
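The four indicators attributed above to the SOAP evaluation function can be computed from an ordinary detection confusion matrix. The following is a sketch only: the patent does not give SOAP's exact formulas, so the standard definitions of accuracy, recall, precision, and F1 are assumed:

```python
def evaluate(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute the four indicators the embodiment attributes to the SOAP
    evaluation function, from true/false positive and negative counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total if total else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "recall": recall,
            "precision": precision, "f1": f1}
```

With such a function, each trained candidate's metrics could be compared against the preset condition, and the best-scoring candidate kept as the preset shadow detection algorithm.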
In this embodiment, the acquired shadow image training set is preprocessed to obtain a processed image training set; the association pairs of shadow instances and object instances in the processed image training set are labeled to obtain a labeled image training set; and the shadow detection algorithm to be trained is trained according to the labeled image training set to obtain a preset shadow detection algorithm. That is, training samples are collected in advance, the association pairs of shadow instances and object instances in the training samples are labeled, the shadow detection algorithm is then trained on the labeled image training set to obtain a high-precision preset shadow detection algorithm, and the algorithm's performance is evaluated through the SOAP evaluation function, so that shadow detection can be performed more accurately and the accuracy of the obtained space vector is improved.
In addition, the embodiment of the invention also provides a storage medium, wherein the storage medium is stored with a robot path planning program, and the robot path planning program realizes the steps of the robot path planning method when being executed by a processor.
Referring to fig. 10, fig. 10 is a block diagram illustrating a first embodiment of a robot path planning apparatus according to the present invention.
As shown in fig. 10, a robot path planning apparatus according to an embodiment of the present invention includes:
the space vector determining module 1001 is configured to input a preset image into a preset shadow detection algorithm to perform shadow detection, and obtain a space vector from a shadow instance center to an object instance center in the preset image;
An illumination direction determining module 1002 configured to determine an illumination direction angle between the spatial vector and a reference spatial vector, where the illumination direction angle represents an angle between the shadow instance center and illumination;
the path planning module 1003 is configured to determine a direction perpendicular to the illumination according to the illumination direction angle, and plan a path for the robot along the direction perpendicular to the illumination based on the executed path track.
In this embodiment, a preset image is input into a preset shadow detection algorithm for shadow detection, obtaining a space vector from the center of a shadow instance to the center of an object instance in the preset image; an illumination direction angle between the space vector and a reference space vector is determined, the illumination direction angle representing the included angle between the shadow instance and the illumination; and a direction perpendicular to the illumination is determined from the illumination direction angle, and a path is planned for the robot along that direction based on the executed path track. The space vector from the shadow instance center to the object instance center is detected by deep learning, the included angle between the shadow and the illumination is determined from the angle to the reference space vector, the illumination direction is thereby calculated, and the robot's path is then planned perpendicular to the illumination direction. This effectively addresses sensor failure and loss of robot function in backlit scenes and improves the working efficiency of the robot.
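The geometric core of the method — the illumination direction angle between the space vector and the reference space vector, and the heading perpendicular to the illumination — can be sketched in a few lines. This is an illustrative reading of the embodiment, assuming 2-D image-plane vectors and angles in radians; the function names are not from the patent:

```python
import math

def illumination_direction_angle(space_vec, reference_vec):
    """Signed angle (radians) from the reference space vector to the
    shadow-to-object space vector; per the method, this angle gives
    the illumination direction."""
    ax, ay = space_vec
    bx, by = reference_vec
    # atan2 of the 2-D cross and dot products yields the signed angle.
    return math.atan2(ay * bx - ax * by, ax * bx + ay * by)

def perpendicular_heading(illum_angle):
    """A heading perpendicular to the illumination, wrapped to [0, 2*pi)."""
    return (illum_angle + math.pi / 2) % (2 * math.pi)
```

The path planning module would then steer the robot onto the heading returned by `perpendicular_heading`, keeping its sensors out of direct backlight.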
Based on the first embodiment of the robot path planning device of the present invention, a second embodiment of the robot path planning device of the present invention is proposed.
In this embodiment, the illumination direction determining module 1002 is further configured to obtain a reference shadow image of a preset object photographed at a preset time point, and input the reference shadow image to a preset shadow detection algorithm to perform shadow detection, so as to obtain a reference space vector from a shadow instance center to an object instance center in the reference shadow image.
In an embodiment, the path planning module 1003 is further configured to determine a direction perpendicular to the illumination according to the illumination direction angle; judge whether the current direction in the executed path track is the direction perpendicular to the illumination; and, if not, correct the current direction to the direction perpendicular to the illumination to obtain a corrected path, so that the robot works according to the corrected path.
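The judge-and-correct step of this embodiment can be sketched as follows; the angular tolerance is an assumption, since the patent does not state how exactly the current direction must match the perpendicular direction:

```python
import math

ANGLE_TOLERANCE = 0.05  # radians; illustrative threshold, not from the patent

def correct_heading(current_heading: float, perpendicular: float):
    """Return (corrected_heading, was_corrected). If the robot's current
    heading already matches the perpendicular-to-illumination direction
    within the tolerance, keep it; otherwise snap to the perpendicular."""
    # Wrap the difference into (-pi, pi] before comparing magnitudes.
    diff = (current_heading - perpendicular + math.pi) % (2 * math.pi) - math.pi
    if abs(diff) <= ANGLE_TOLERANCE:
        return current_heading, False
    return perpendicular, True
```

The `was_corrected` flag corresponds to the correction events whose count is later used to build the default working map.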
In an embodiment, the path planning module 1003 is further configured to record the number of path direction corrections, where the number of path direction corrections is used to construct a default working map.
In an embodiment, the path planning module 1003 is further configured to periodically construct a default working map based on the corrected path, according to a preset cycle time and the number of path direction corrections, so that the robot works according to the default working map.
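The periodic map construction of this embodiment can be sketched as a small builder that tracks the correction count and the preset cycle time. All names and the exact rebuild condition are illustrative assumptions:

```python
class DefaultMapBuilder:
    """Sketch of periodic default-map construction (names are illustrative).
    After each preset cycle time, if enough direction corrections have
    accumulated, rebuild the default working map from the corrected path."""

    def __init__(self, cycle_time: float, min_corrections: int):
        self.cycle_time = cycle_time          # preset cycle time, seconds
        self.min_corrections = min_corrections
        self.corrections = 0                  # recorded correction count
        self.last_build = 0.0
        self.default_map = []

    def record_correction(self):
        self.corrections += 1

    def maybe_rebuild(self, now: float, corrected_path: list) -> bool:
        """Rebuild the default map when both the cycle time has elapsed
        and the correction count reaches the assumed threshold."""
        if (now - self.last_build >= self.cycle_time
                and self.corrections >= self.min_corrections):
            self.default_map = list(corrected_path)
            self.corrections = 0
            self.last_build = now
            return True
        return False
```

Once `default_map` is populated, the robot can follow it directly instead of re-deriving the illumination direction on every run.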
In an embodiment, the space vector determining module 1001 is further configured to preprocess the acquired shadow image training set to obtain a processed image training set; label the association pairs of shadow instances and object instances in the processed image training set to obtain a labeled image training set; and train the shadow detection algorithm to be trained according to the labeled image training set to obtain a preset shadow detection algorithm.
In an embodiment, the space vector determining module 1001 is further configured to train the shadow detection algorithm to be trained according to the labeled image training set to obtain a trained shadow detection algorithm; perform performance evaluation on the trained shadow detection algorithm according to the SOAP evaluation function; and, when the performance evaluation result meets the preset condition, obtain a preset shadow detection algorithm.
Other embodiments or specific implementation manners of the robot path planning apparatus of the present invention may refer to the above method embodiments, and are not described herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or system that comprises that element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is preferred. Based on such an understanding, the technical solution of the present invention, or the part of it contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods of the embodiments of the present invention.
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structure or equivalent process transformation made using the contents of this specification, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (8)

1. A robot path planning method, characterized in that the robot path planning method comprises:
Inputting a preset image into a preset shadow detection algorithm to perform shadow detection, and obtaining a space vector from the center of a shadow instance to the center of an object instance in the preset image;
Determining an illumination direction angle between the space vector and a reference space vector, the illumination direction angle representing an included angle between the center of the shadow instance and illumination;
determining a direction perpendicular to illumination according to the illumination direction angle;
judging whether the current direction in the executed path track is the direction perpendicular to illumination;
If not, correcting the current direction to be the direction perpendicular to illumination, and obtaining a corrected path so that the robot works according to the corrected path;
before determining the illumination direction angle between the space vector and the reference space vector, the method further comprises:
the method comprises the steps of obtaining a reference shadow image of a preset object shot at a preset time point, inputting the reference shadow image into a preset shadow detection algorithm for shadow detection, and obtaining a reference space vector from the center of a shadow instance to the center of an object instance in the reference shadow image, wherein the reference space vector is a space vector corresponding to an object which is opposite to illumination and does not generate shadow.
2. The robot path planning method according to claim 1, wherein, after the correcting the current direction to be the direction perpendicular to illumination to obtain a corrected path so that the robot works according to the corrected path, the method further comprises:
And recording the number of path direction correction times, wherein the number of path direction correction times is used for constructing a default working map.
3. The robot path planning method according to claim 2, wherein after the recording of the number of path direction corrections, further comprising:
and periodically constructing a default working map based on the corrected path according to a preset cycle time and the corrected path direction times, so that the robot works according to the default working map.
4. The method for planning a path of a robot according to claim 1, wherein before inputting the preset image into a preset shadow detection algorithm to perform shadow detection, obtaining a space vector from a center of a shadow instance to a center of an object instance in the preset image, the method further comprises:
preprocessing the acquired shadow image training set to obtain a processed image training set;
Labeling the association pairs of the shadow examples and the object examples in the processed image training set to obtain a labeled image training set;
training the shadow detection algorithm to be trained according to the labeled image training set to obtain a preset shadow detection algorithm.
5. The method for planning a path of a robot according to claim 4, wherein training the shadow detection algorithm to be trained according to the labeled image training set to obtain a preset shadow detection algorithm comprises:
training a shadow detection algorithm to be trained according to the labeled image training set to obtain a trained shadow detection algorithm;
performing performance evaluation on the trained shadow detection algorithm according to a SOAP evaluation function;
and when the performance evaluation result meets the preset condition, obtaining a preset shadow detection algorithm.
6. A robot path planning apparatus, characterized in that the robot path planning apparatus comprises:
The space vector determining module is used for inputting a preset image into a preset shadow detection algorithm to detect shadows and obtaining a space vector from the center of a shadow instance to the center of an object instance in the preset image;
The illumination direction determining module is used for determining an illumination direction angle between the space vector and the reference space vector, and the illumination direction angle represents an included angle between the center of the shadow instance and illumination;
The path planning module is used for determining a direction perpendicular to illumination according to the illumination direction angle; judging whether the current direction in the executed path track is the direction perpendicular to illumination; if not, correcting the current direction to be the direction perpendicular to illumination, and obtaining a corrected path so that the robot works according to the corrected path;
the illumination direction determining module is further configured to obtain a reference shadow image of a preset object photographed at a preset time point, input the reference shadow image into a preset shadow detection algorithm, and perform shadow detection to obtain a reference space vector from a shadow instance center to an object instance center in the reference shadow image, where the reference space vector is a space vector corresponding to an object that is opposite to illumination and does not generate a shadow.
7. A robotic path planning apparatus, the apparatus comprising: a memory, a processor and a robot path planning program stored on the memory and executable on the processor, the robot path planning program being configured to implement the steps of the robot path planning method of any one of claims 1 to 5.
8. A storage medium having stored thereon a robot path planning program which, when executed by a processor, implements the steps of the robot path planning method according to any one of claims 1 to 5.
CN202410224399.5A 2024-02-29 2024-02-29 Robot path planning method, device, equipment and storage medium Active CN117824662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410224399.5A CN117824662B (en) 2024-02-29 2024-02-29 Robot path planning method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410224399.5A CN117824662B (en) 2024-02-29 2024-02-29 Robot path planning method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117824662A CN117824662A (en) 2024-04-05
CN117824662B true CN117824662B (en) 2024-05-28

Family

ID=90513646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410224399.5A Active CN117824662B (en) 2024-02-29 2024-02-29 Robot path planning method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117824662B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014185908A (en) * 2013-03-22 2014-10-02 Pasco Corp Azimuth estimation device and azimuth estimation program
CN107622502A (en) * 2017-07-28 2018-01-23 南京航空航天大学 The path extraction of robot vision leading system and recognition methods under the conditions of complex illumination
CN112020688A (en) * 2018-03-26 2020-12-01 捷普有限公司 Apparatus, system, and method for autonomous robotic navigation using depth assessment
KR102289752B1 (en) * 2020-10-13 2021-08-13 주식회사 스페이스소프트인더스트리 A drone for performring route flight in gps blocked area and methed therefor
CN115855092A (en) * 2022-11-22 2023-03-28 阿里巴巴(中国)有限公司 Navigation path planning method, device, equipment and storage medium
CN116518996A (en) * 2023-04-28 2023-08-01 重庆长安汽车股份有限公司 Path planning method, path planning device, vehicle and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11678140B2 (en) * 2020-06-29 2023-06-13 The United States Of America As Represented By The Secretary Of The Army Localization by using skyline data

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014185908A (en) * 2013-03-22 2014-10-02 Pasco Corp Azimuth estimation device and azimuth estimation program
CN107622502A (en) * 2017-07-28 2018-01-23 南京航空航天大学 The path extraction of robot vision leading system and recognition methods under the conditions of complex illumination
CN112020688A (en) * 2018-03-26 2020-12-01 捷普有限公司 Apparatus, system, and method for autonomous robotic navigation using depth assessment
KR102289752B1 (en) * 2020-10-13 2021-08-13 주식회사 스페이스소프트인더스트리 A drone for performring route flight in gps blocked area and methed therefor
CN115855092A (en) * 2022-11-22 2023-03-28 阿里巴巴(中国)有限公司 Navigation path planning method, device, equipment and storage medium
CN116518996A (en) * 2023-04-28 2023-08-01 重庆长安汽车股份有限公司 Path planning method, path planning device, vehicle and storage medium

Also Published As

Publication number Publication date
CN117824662A (en) 2024-04-05

Similar Documents

Publication Publication Date Title
US11385062B2 (en) Map creation method for mobile robot and path planning method based on the map
US20210326624A1 (en) Method, system and device for difference automatic calibration in cross modal target detection
CN108955718B (en) Visual odometer and positioning method thereof, robot and storage medium
CN108871353B (en) Road network map generation method, system, equipment and storage medium
EP3825903A1 (en) Method, apparatus and storage medium for detecting small obstacles
US20180225527A1 (en) Method, apparatus, storage medium and device for modeling lane line identification, and method, apparatus, storage medium and device for identifying lane line
CN111462207A (en) RGB-D simultaneous positioning and map creation method integrating direct method and feature method
CN113674416B (en) Three-dimensional map construction method and device, electronic equipment and storage medium
CN115655262B (en) Deep learning perception-based multi-level semantic map construction method and device
CN112967339B (en) Vehicle pose determining method, vehicle control method and device and vehicle
CN111091023B (en) Vehicle detection method and device and electronic equipment
CN111931581A (en) Agricultural pest identification method based on convolutional neural network, terminal and readable storage medium
LU500407B1 (en) Real-time positioning method for inspection robot
CN111161334A (en) Semantic map construction method based on deep learning
CN112926461A (en) Neural network training and driving control method and device
CN115683100A (en) Robot positioning method, device, robot and storage medium
CN114608521B (en) Monocular ranging method and device, electronic equipment and storage medium
CN111950428A (en) Target obstacle identification method and device and carrier
US20220114813A1 (en) Detecting obstacle
WO2022188333A1 (en) Walking method and apparatus, and computer storage medium
CN117824662B (en) Robot path planning method, device, equipment and storage medium
CN112148817B (en) SLAM optimization method, device and system based on panorama
CN117152719A (en) Weeding obstacle detection method, weeding obstacle detection equipment, weeding obstacle detection storage medium and weeding obstacle detection device
CN114140660B (en) Vehicle detection method, device, equipment and medium
CN112101303B (en) Image data processing method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant