CN110956662A - Carrier obstacle avoidance method and device and electronic equipment


Info

Publication number
CN110956662A
Authority
CN
China
Prior art keywords
carrier
motion
obstacle
depth image
distance
Prior art date
Legal status
Pending
Application number
CN201911212714.8A
Other languages
Chinese (zh)
Inventor
姚海鹏
秦泽宇
纪哲
张培颖
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN201911212714.8A
Publication of CN110956662A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 - Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a carrier obstacle avoidance method, a carrier obstacle avoidance device and an electronic device. The carrier obstacle avoidance method comprises the following steps: acquiring a depth image in the running direction of the carrier; calculating the distance from the carrier to an obstacle based on the depth image; judging whether the distance is greater than a preset threshold; if not, inputting the depth image into a pre-trained motion decision model so that the motion decision model outputs a motion adjustment value of the carrier; and acquiring the motion adjustment value and controlling the carrier according to the motion adjustment value to avoid the obstacle and reach the target position. This alleviates the technical problem that the obstacle avoidance effect of a carrier in a complex environment is unsatisfactory, and improves the obstacle avoidance effect of the carrier.

Description

Carrier obstacle avoidance method and device and electronic equipment
Technical Field
The invention relates to the technical field of intelligent control, in particular to a carrier obstacle avoidance method, a carrier obstacle avoidance device and electronic equipment.
Background
The traditional artificial potential field method is a virtual force field method: by introducing the concept of a field from physics, it constructs a potential field in the task space, so that the carrier moves to the target position under the combined action of the attraction of the target position and the repulsion of the obstacles in the potential field, while being kept from colliding with the obstacles. However, with the traditional artificial potential field method, the resultant force on the carrier in the virtual potential field may, for various reasons, become zero before the carrier reaches the target position; the carrier then mistakenly behaves as if it had reached the target position and stops advancing, so the obstacle avoidance effect of the carrier in a complex environment is unsatisfactory.
Disclosure of Invention
In view of this, the present invention aims to provide a carrier obstacle avoidance method, a carrier obstacle avoidance device and an electronic device, so as to alleviate the technical problem that the obstacle avoidance effect of a carrier in a complex environment is unsatisfactory and to improve the obstacle avoidance effect of the carrier.
In a first aspect, an embodiment of the present invention provides a carrier obstacle avoidance method, where the carrier obstacle avoidance method includes:
acquiring a depth image in the running direction of the carrier;
calculating a distance of the carrier to an obstacle based on the depth image;
judging whether the distance is larger than a preset threshold value or not;
if not, inputting the depth image into a pre-trained motion decision model so that the motion decision model outputs a motion adjusting value of the carrier;
and acquiring the motion adjusting value, and controlling the carrier to avoid the obstacle according to the motion adjusting value so as to reach the target position.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the motion decision model is a model obtained by training based on a full convolution neural network, and the method for avoiding an obstacle by a carrier further includes:
acquiring a pre-stored picture training set, wherein the picture training set comprises a plurality of standard depth images containing obstacles, and each standard depth image carries a motion adjustment label;
and inputting the picture training set into the full convolution neural network for training to obtain the motion decision model.
With reference to the first possible implementation manner of the first aspect, the present invention provides a second possible implementation manner of the first aspect, wherein the movement adjusting tag includes a movement speed and a movement angle; the motion adjusting value of the carrier output by the motion decision model comprises a motion speed adjusting value and a motion angle adjusting value;
the step of controlling the carrier to avoid the obstacle according to the motion adjustment value includes:
acquiring the motion speed adjusting value and the motion angle adjusting value which are included in the motion adjusting value;
and adjusting the running speed and the movement angle of the carrier according to the movement speed adjusting value and the movement angle adjusting value so as to avoid the obstacle.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, wherein the step of calculating a distance from the carrier to an obstacle based on the depth image includes:
performing parallax calculation on the depth image to obtain the distances of all obstacles contained in the depth image;
generating an image matrix containing the distances of all the obstacles;
and traversing the image matrix, and determining the distance corresponding to the obstacle with the minimum distance to the carrier in the distances of all the obstacles as the distance from the carrier to the obstacle.
With reference to the third possible implementation manner of the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the depth image is acquired by a binocular camera;
the step of performing parallax calculation on the depth image to obtain the distances of all obstacles contained in the depth image comprises the following steps:
and extracting a disparity map of the depth information of each pixel in the depth image, and calculating the distances of all obstacles contained in the depth image according to the disparity information carried in the disparity map.
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the method for avoiding an obstacle by using a carrier further includes:
and when the distance is larger than the preset threshold value, planning a path from the carrier to the target position according to kinematic constraint so as to enable the carrier to avoid the obstacle and reach the target position.
In a second aspect, an embodiment of the present invention further provides a carrier obstacle avoidance device, where the carrier obstacle avoidance device includes:
the acquisition module is used for acquiring a depth image in the running direction of the carrier;
a calculation module for calculating a distance from the carrier to an obstacle based on the depth image;
the judging module is used for judging whether the distance is larger than a preset threshold value or not;
the input module is used for inputting the depth image into a pre-trained motion decision model if the distance is not greater than the preset threshold, so that the motion decision model outputs the motion adjustment value of the carrier;
and the control module is used for acquiring the motion adjusting value and controlling the carrier to avoid the obstacle according to the motion adjusting value so as to reach a target position.
With reference to the second aspect, an embodiment of the present invention provides a first possible implementation manner of the second aspect, where the carrier obstacle avoidance device is further configured to: when the distance is greater than the preset threshold, plan a path from the carrier to the target position according to a kinematic constraint, so that the carrier avoids the obstacle and reaches the target position.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a processor and a memory, where the memory stores computer-executable instructions that can be executed by the processor, and the processor executes the computer-executable instructions to implement the carrier obstacle avoidance method according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and when the computer-executable instructions are called and executed by a processor, the computer-executable instructions cause the processor to implement the carrier obstacle avoidance method according to the first aspect.
The embodiment of the invention has the following beneficial effects:
The embodiment of the invention provides a carrier obstacle avoidance method, a carrier obstacle avoidance device and an electronic device. The carrier obstacle avoidance method comprises the following steps: acquiring a depth image in the running direction of the carrier; calculating the distance from the carrier to an obstacle based on the depth image; judging whether the distance is greater than a preset threshold; if not, inputting the depth image into a pre-trained motion decision model so that the motion decision model outputs a motion adjustment value of the carrier; and acquiring the motion adjustment value and controlling the carrier according to the motion adjustment value to avoid the obstacle and reach the target position. This alleviates the technical problem that the obstacle avoidance effect of a carrier in a complex environment is unsatisfactory, improves the obstacle avoidance effect of the carrier, and improves the efficiency with which the carrier reaches the target position.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic diagram of an APF algorithm constructed potential field provided by an embodiment of the invention;
fig. 2 is a flowchart of a method for avoiding an obstacle by a carrier according to an embodiment of the present invention;
fig. 3 is a flowchart of another method for avoiding an obstacle for a carrier according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for training a motion decision model according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a training of a motion decision model according to an embodiment of the present invention;
fig. 6 is a flowchart of another method for avoiding an obstacle for a carrier according to an embodiment of the present invention;
fig. 7 is a flowchart of another method for avoiding an obstacle for a carrier according to an embodiment of the present invention;
fig. 8 is a schematic diagram illustrating a distance between a carrier and an obstacle calculated based on a depth image according to an embodiment of the present invention;
fig. 9 is a schematic diagram of another distance calculation from a carrier to an obstacle based on a depth image according to an embodiment of the present invention;
fig. 10 is a schematic diagram of another distance calculation from a carrier to an obstacle based on a depth image according to an embodiment of the present invention;
fig. 11 is a schematic diagram of a method for avoiding an obstacle of a carrier according to an embodiment of the present invention;
fig. 12 is a flowchart of another method for avoiding an obstacle for a carrier according to an embodiment of the present invention;
fig. 13 is a schematic view of a carrier obstacle avoidance apparatus according to an embodiment of the present invention;
fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Icon:
10-an acquisition module; 20-a calculation module; 30-a judging module; 40-an input module; 50-a control module; 60-a processor; 61-a memory; 62-a bus; 63-communication interface.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An APF (Artificial Potential Field) algorithm is a virtual force field method. It constructs a potential field in the task space by introducing the concept of a field from physics, as shown in fig. 1. In this potential field, the carrier moves towards the target position under the attraction of the target position, while each obstacle acts on the carrier as a repulsive field, which keeps the carrier from colliding with the obstacle. Under the combined action of the attraction of the target position and the repulsion of the obstacles, the carrier therefore searches for a collision-free, safe path along the descending direction of the potential field function until it reaches the target position.
However, the conventional APF algorithm is prone to the local minimum phenomenon: before the carrier reaches the target position, the resultant of the attraction of the target position and the repulsion of the obstacles acting on the carrier in the virtual potential field may become zero for various reasons, so that the carrier mistakenly behaves as if it had reached the target position and stops advancing. As a result, the obstacle avoidance effect of the carrier in a complex environment is unsatisfactory.
To address the unsatisfactory obstacle avoidance effect of the carrier in a complex environment caused by the APF algorithm, the embodiments of the invention provide a carrier obstacle avoidance method, a carrier obstacle avoidance device and an electronic device, which alleviate this technical problem, improve the obstacle avoidance effect of the carrier, and improve the efficiency with which the carrier reaches the target position.
For the convenience of understanding the embodiment, the carrier obstacle avoidance method provided by the embodiment of the present invention is described in detail below.
In a possible implementation manner, an embodiment of the present invention provides a carrier obstacle avoidance method, which may be applied to the fields of robots, unmanned aerial vehicles, and the like, and as shown in a flowchart of the carrier obstacle avoidance method shown in fig. 2, the carrier obstacle avoidance method includes the following steps:
step S102, obtaining a depth image in the carrier running direction;
specifically, the depth image in the carrier moving direction is acquired by a binocular camera, that is, the depth image in the carrier moving direction is acquired by an RGB-D camera. Here, when the carrier is an unmanned aerial vehicle, the acquired depth image is a depth image acquired by an RGB-D camera in the flight direction of the unmanned aerial vehicle; when the carrier is a robot, the depth image is a depth image of the robot in the direction of the target position, which is acquired by the RGB-D camera, and the carrier includes, but is not limited to, an unmanned aerial vehicle and a robot, which is not limited by the present invention.
Step S104, calculating the distance from the carrier to the obstacle based on the depth image;
In practical application, the two images captured by the binocular camera at the same moment during the operation of the carrier differ from each other; that is, there is parallax between the two depth images, and the distance from the carrier to an obstacle can be calculated from this parallax. Note that several obstacles may appear in the running direction of the carrier in the depth image. In this case, the distances from the carrier to all obstacles in the depth image can be calculated from the parallax between the two images, without determining what type of obstacle lies in the running direction. Therefore, in the embodiment of the invention, for obstacles of any type, the distances from the carrier to the obstacles in the depth image are calculated from the parallax of the images acquired by the binocular camera, an image matrix is formed from these distances, and the minimum value obtained by traversing the image matrix is the distance from the carrier to the nearest obstacle in the depth image.
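As a concrete illustration of this step, the following minimal sketch (written in Python with NumPy; the function name min_obstacle_distance and the noise cutoff are assumptions, not part of the patent) takes the per-pixel distance matrix derived from the depth image and returns the distance to the nearest obstacle.

```python
import numpy as np

def min_obstacle_distance(depth_map: np.ndarray, valid_min: float = 0.1) -> float:
    """Return the distance to the nearest obstacle visible in the depth image.

    depth_map : HxW array of per-pixel distances in metres (the "image matrix"
                containing the distances of all obstacles).
    valid_min : pixels closer than this are treated as sensor noise and ignored.
    """
    distances = depth_map[np.isfinite(depth_map) & (depth_map > valid_min)]
    if distances.size == 0:
        return float("inf")          # nothing detected in the running direction
    return float(distances.min())    # nearest obstacle, regardless of its type
```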
Step S106, judging whether the distance is larger than a preset threshold value;
In practical application, the image matrix is traversed to obtain the distance from the carrier to the nearest obstacle in the depth image, and it is then judged whether this distance is greater than the preset threshold. If it is, a path from the carrier to the target position is planned according to kinematic constraints, which ensures that the carrier avoids the obstacle and reaches the target position.
Step S108, if not, inputting the depth image into a pre-trained motion decision model so that the motion decision model outputs a motion adjusting value of the carrier;
and if the distance is smaller than or equal to a preset threshold value, inputting the depth image into a pre-trained motion decision model so that the motion decision model outputs a motion adjusting value of the carrier according to the depth image.
And step S110, acquiring a motion adjusting value, and controlling the carrier to avoid the obstacle according to the motion adjusting value so as to reach the target position.
In practical application, the motion adjustment value output by the motion decision model is obtained, and the carrier is controlled to avoid the obstacle according to the motion adjustment value so as to ensure that the carrier reaches the target position.
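The overall flow of steps S102 to S110 can be summarised by the sketch below. It is only an illustration of the control loop under assumed interfaces: camera.read_depth(), policy(depth), plan_kinematic_path(), carrier.pose(), carrier.follow() and carrier.apply() are hypothetical names, not APIs defined by the patent; min_obstacle_distance is the helper sketched above, and the threshold value is an example.

```python
DIST_THRESHOLD = 1.5   # preset threshold in metres (example value)

def avoidance_step(camera, policy, carrier, goal):
    depth = camera.read_depth()               # S102: depth image in the running direction
    d_min = min_obstacle_distance(depth)      # S104: distance from the carrier to the obstacle
    if d_min > DIST_THRESHOLD:                # S106: compare with the preset threshold
        carrier.follow(plan_kinematic_path(carrier.pose(), goal))   # distance large: kinematic planning
    else:
        speed_adj, angle_adj = policy(depth)  # S108: motion decision model outputs the adjustment value
        carrier.apply(speed_adj, angle_adj)   # S110: avoid the obstacle and head for the target position
```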
With the carrier obstacle avoidance method provided by the embodiment of the invention, a depth image in the running direction of the carrier is acquired; the distance from the carrier to an obstacle is calculated based on the depth image; it is judged whether the distance is greater than a preset threshold; if not, the depth image is input into a pre-trained motion decision model so that the motion decision model outputs a motion adjustment value of the carrier; and the motion adjustment value is acquired and the carrier is controlled accordingly to avoid the obstacle and reach the target position. This alleviates the technical problem that the obstacle avoidance effect of the carrier in a complex environment is unsatisfactory, and improves both the obstacle avoidance effect and the efficiency with which the carrier reaches the target position.
On the basis of fig. 2, an embodiment of the present invention provides another method for avoiding an obstacle for a carrier, and fig. 3 is a flowchart of the another method for avoiding an obstacle for a carrier, which is provided by the embodiment of the present invention, and as shown in fig. 3, the method includes the following steps:
step S202, obtaining a depth image in the carrier running direction;
step S204, calculating the distance from the carrier to the obstacle based on the depth image;
step S206, judging whether the distance is greater than a preset threshold value; if not, executing steps S208-S210; if yes, go to step S212;
step S208, inputting the depth image into a pre-trained motion decision model so that the motion decision model outputs a motion adjusting value of the carrier;
step S210, obtaining a motion adjustment value, and controlling a carrier to avoid an obstacle according to the motion adjustment value so as to reach a target position;
and step S212, planning a path from the carrier to the target position according to the kinematic constraint so that the carrier avoids the obstacle to reach the target position.
With the carrier obstacle avoidance method provided by the embodiment of the invention, a depth image in the running direction of the carrier is acquired; the distance from the carrier to an obstacle is calculated based on the depth image; it is judged whether the distance is greater than a preset threshold; if not, the depth image is input into a pre-trained motion decision model so that the motion decision model outputs a motion adjustment value of the carrier, and the motion adjustment value is acquired and the carrier is controlled accordingly to avoid the obstacle and reach the target position; if so, a path from the carrier to the target position is planned according to kinematic constraints so that the carrier avoids the obstacle and reaches the target position. This alleviates the technical problem that the obstacle avoidance effect of the carrier in a complex environment is unsatisfactory, and improves both the obstacle avoidance effect and the efficiency with which the carrier reaches the target position.
Further, the motion decision model in the embodiment of the present invention is a model obtained by training based on a full convolution neural network, and as shown in fig. 4, a flowchart of a training method of a motion decision model includes the following steps:
step S302, a pre-stored picture training set is obtained;
the image training set comprises a plurality of standard depth images containing obstacles, and each standard depth image carries a motion adjustment label.
Step S304, inputting the picture training set into a full convolution neural network for training so as to obtain a motion decision model.
In practical application, the full convolutional neural network of the embodiment of the invention has 5 convolutional layers. Convolutional layer 1 has 150 filters with 2 input channels, corresponding to the input picture training set, and uses 3 × 3 convolution kernels; convolutional layers 2, 3 and 4 have 100, 100 and 50 filters respectively, all using 3 × 3 convolution kernels, and the number of input channels of each layer corresponds to the number of filters of the previous layer, which ensures that the tensor dimensions match during the operations.
In addition, the full convolutional neural network uses an attention mechanism to process the input picture training set. The attention mechanism originates from the processing of sequence data and was designed for time-step models. Most current data-processing models are static; as the sequence length increases, the context information is limited to a fixed length, which limits the capacity of the whole model. The attention mechanism was introduced to solve this kind of problem: it takes part of the input, i.e. a subset of the input, in a structured way, which reduces the dimensionality of the input data and the amount of computation, and at the same time lets the neural network focus on the information in the input that is clearly relevant to the current output, discarding the false and keeping the true. As shown in fig. 5, in the embodiment of the invention the input picture training set is processed with the attention mechanism and the processed features are fed into convolutional layer 4, which ensures that the full convolutional neural network focuses on the information in the picture training set that is relevant to the output.
Further, in the above full convolutional neural network, convolutional layer 5 is a special convolutional layer: its input spatial size is 1 × 1, it has 6 filters, and it uses 1 × 1 convolution kernels. Convolutional layer 5 condenses the 50-channel data into 6-channel data, and each of the 6 channels corresponds to the score of one selectable action to be output. In the embodiment of the invention the output of convolutional layer 5 is further converted into a probability vector by normalization, and for this reason convolutional layer 5 is also referred to as a bottleneck layer. In practice, the bottleneck layer is a technique that replaces the fully connected layer; in a full convolutional neural network it is generally placed before the output layer, and the feature tensor is reduced to a spatial size of 1 × 1 by convolution or pooling before the bottleneck layer, which preserves as much of the spatial information of the entities in the image as possible.
In addition, in the embodiment of the invention the output of convolutional layer 5 can be regarded as the plan of the current path of the carrier, as shown in fig. 5; that is, it can be regarded as the predicted scores for movement of the carrier in 6 directions. To convert this output into a probability vector by normalization, a Softmax layer (not shown) is connected after convolutional layer 5. The Softmax layer normalizes the output of convolutional layer 5 with the Softmax function to obtain a probability vector, and during final planning the direction with the highest probability is selected for the single-step prediction, so that the full convolutional neural network predicts the movement direction of each step and thereby realizes the path planning of the carrier.
Therefore, the motion decision model of the embodiment of the invention can be obtained by inputting the pre-stored picture training set into the full convolutional neural network for training. Because the model relies mainly on matrix operations, its computation speed is far higher than that of other traditional algorithms, which reduces the time the carrier needs to reach the target position.
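The following PyTorch sketch is one possible reading of the network described above, not the patent's reference implementation: five convolutional layers with 150/100/100/50/6 filters, a simple channel-attention gate before layer 4 (the exact form of the attention mechanism is an assumption), global pooling to a 1 × 1 spatial size before the 1 × 1 bottleneck layer, and a Softmax over 6 candidate movement directions. Training then reduces to fitting the predicted direction probabilities to the motion adjustment labels, e.g. with a cross-entropy loss.

```python
import torch
import torch.nn as nn

class MotionDecisionNet(nn.Module):
    def __init__(self, in_channels: int = 2, num_actions: int = 6):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 150, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(150, 100, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(100, 100, kernel_size=3, padding=1)
        # Lightweight channel attention (assumed form): reweights the features
        # fed into convolutional layer 4 so the network focuses on the
        # output-relevant parts of the input.
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(100, 100, kernel_size=1),
                                  nn.Sigmoid())
        self.conv4 = nn.Conv2d(100, 50, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)                      # collapse to 1 x 1 spatial size
        self.conv5 = nn.Conv2d(50, num_actions, kernel_size=1)   # bottleneck layer
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                          # x: (N, 2, H, W) depth input
        x = self.act(self.conv1(x))
        x = self.act(self.conv2(x))
        x = self.act(self.conv3(x))
        x = x * self.attn(x)                       # attention-weighted features into layer 4
        x = self.act(self.conv4(x))
        x = self.conv5(self.pool(x))               # (N, 6, 1, 1) direction scores
        return torch.softmax(x.flatten(1), dim=1)  # probability over the 6 directions
```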
Further, the movement adjusting label comprises a movement speed and a movement angle; the motion adjusting value of the carrier output by the motion decision model comprises a motion speed adjusting value and a motion angle adjusting value; therefore, on the basis of fig. 3, the flowchart of another carrier obstacle avoidance method proposed by the embodiment of the present invention as shown in fig. 6 includes the following steps:
step S402, acquiring a depth image in the carrier running direction;
step S404, calculating the distance between the carrier and the obstacle based on the depth image;
step S406, judging whether the distance is greater than a preset threshold value; if not, executing steps S408-S412, if yes, executing step S414;
step S408, inputting the depth image into a pre-trained motion decision model so that the motion decision model outputs a motion adjusting value of the carrier;
in practical applications, the steps S402 to S408 refer to the steps S102 to S108 in the above embodiments, and will not be described in detail here.
Step S410, acquiring the motion speed adjustment value and the motion angle adjustment value included in the motion adjustment value;
specifically, the motion adjusting label comprises a motion speed and a motion angle, and the motion adjusting value of the carrier output by the motion decision model comprises a motion speed adjusting value and a motion angle adjusting value; the motion speed adjusting value is used for adjusting the running speed of the carrier, and the motion angle adjusting value is used for adjusting the running direction of the carrier, so that the carrier can avoid the obstacle.
Step S412, adjusting the running speed and the movement angle of the carrier according to the movement speed adjustment value and the movement angle adjustment value so as to avoid the obstacle and reach the target position;
In practical application, the running speed and the movement angle of the carrier are adjusted according to the motion speed adjustment value and the motion angle adjustment value output by the motion decision model, so that the carrier avoids the obstacle. This prevents the situation in which, when the resultant of the attraction of the target position and the repulsion of the obstacle acting on the carrier is zero, the carrier mistakenly behaves as if it had reached the target position and stops advancing before actually arriving there, thereby ensuring that the carrier correctly reaches the target position and improving the obstacle avoidance effect of the carrier.
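A minimal sketch of applying the two adjustment components to the carrier is given below; the controller interface (carrier.speed, carrier.heading, carrier.set_velocity) and the speed limit are assumptions used purely for illustration.

```python
def apply_motion_adjustment(carrier, speed_adj: float, angle_adj: float,
                            max_speed: float = 2.0) -> None:
    # Clamp the adjusted running speed to a sane range and wrap the heading.
    new_speed = min(max(carrier.speed + speed_adj, 0.0), max_speed)
    new_heading = (carrier.heading + angle_adj) % 360.0   # movement angle in degrees
    carrier.set_velocity(new_speed, new_heading)          # hypothetical control call
```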
In addition, the motion speed adjustment value and the motion angle adjustment value serve as parameters of the motion decision model and are adjusted with a deep learning algorithm into which a value iteration network is introduced. This gives the full convolutional neural network a stronger function-fitting capability while keeping it easy to train, ensures that the trained motion decision model plans the path of the carrier more accurately, and improves the obstacle avoidance effect of the carrier.
And step S414, planning a path from the carrier to the target position according to the kinematic constraint so that the carrier avoids the obstacle to reach the target position.
Furthermore, the depth images in the running direction of the carrier are acquired by a binocular camera. Because the two images captured by the binocular camera at the same moment during the operation of the carrier differ from each other, i.e. there is parallax between the two depth images, the distance from the carrier to the obstacle can be calculated from this parallax. On the basis of fig. 3, fig. 7 shows a flowchart of another carrier obstacle avoidance method proposed by the embodiment of the invention, which includes the following steps:
step S502, obtaining a depth image in the carrier running direction;
step S504, carrying out parallax calculation on the depth image to obtain the distances of all obstacles contained in the depth image;
In practical application, by calculating the parallax between the two depth images, the distance of the obstacle ahead (within the range captured by the depth images) in the direction of motion of the carrier is measured directly, and there is no need to judge what type of obstacle appears ahead. Therefore, for obstacles of any type, the necessary early warning or braking can be performed according to the change of the distance information.
In addition, the principle of the binocular camera is similar to that of human eyes. Human eyes can perceive the distance of an object because the images of the same object formed by the two eyes differ, i.e. there is parallax: the farther the object, the smaller the parallax, and conversely the closer the object, the larger the parallax, so the magnitude of the parallax corresponds to the distance between the object and the eyes. Accordingly, in the embodiment of the invention, parallax calculation can be performed on the depth image to obtain the distances of all the obstacles contained in the depth image.
In practical applications, four coordinate systems are mainly involved in image processing: the world coordinate system, the camera coordinate system, the image coordinate system and the pixel coordinate system, as shown in fig. 8, where O_ω-X_ωY_ωZ_ω denotes the world coordinate system, used to describe the position of the binocular camera; O_c-X_cY_cZ_c denotes the camera coordinate system, with the optical center as its origin; o-xy denotes the image coordinate system, with the optical center as the image center; uv denotes the pixel coordinate system, whose origin is the upper-left corner of the image; P denotes a point in the world coordinate system, i.e. a real point in real life; p is the imaging point of P in the image coordinate system, with coordinates (x, y) in the image coordinate system and (u, v) in the pixel coordinate system; and f is the focal length of the camera, equal to the distance between o and O_c, i.e. f = ||o - O_c||.
Therefore, for the binocular camera, the camera coordinate system in the embodiment of the invention takes the optical center of the left camera as the origin, the line connecting the optical centers of the left and right cameras as the X axis, and the optical axis of the left camera as the Z axis, with the direction towards the optical center of the right camera as the positive X direction and the direction straight ahead of the binocular camera as the positive Z direction. As shown in fig. 9, for any measured object located at a position P, the following formulas can be obtained from the law of similar triangles:
x_l / f = x / z,    x_r / f = (x - b) / z,    y_l / f = y_r / f = y / z        (1)

where, in the above formula (1), (x, y, z) denotes the coordinates of the point P in the world coordinate system, f denotes the focal length of the camera, x_l denotes the offset in the X-axis direction of the image of the measured object from the image center in the left camera, x_r denotes the corresponding offset in the X-axis direction in the right camera, y_l denotes the offset in the Y-axis direction of the image of the measured object from the image center in the left camera, and y_r denotes the corresponding offset in the Y-axis direction in the right camera (the Y axis is not shown in the plan view; see fig. 8). Further, b denotes the baseline distance, which is shown in fig. 10 and is not described further here.
From equation (1), we can obtain:
z = b·f / (x_l - x_r),    x = z·x_l / f,    y = z·y_l / f        (2)

where (x, y, z) denotes the coordinates of the point P in the world coordinate system, f is the focal length of the camera, x_l and x_r denote the offsets in the X-axis direction of the image of the measured object from the image center in the left and right cameras respectively, y_l and y_r denote the corresponding offsets in the Y-axis direction, and b denotes the baseline distance.
By rearranging equation (2), we can finally obtain:
x = b·x_l / d,    y = b·y_l / d,    z = b·f / d        (3)

where b denotes the baseline distance, f denotes the focal length of the camera, (x, y, z) denotes the coordinates of the point P in the world coordinate system, x_l and y_l denote the offsets in the X-axis and Y-axis directions of the image of the measured object from the image center in the left camera, and d denotes the parallax, i.e. d = x_l - x_r. The coordinates of the measured object in the world coordinate system can therefore be obtained. Further, in the embodiment of the invention, when the origin of the coordinate system is set at the optical center of the left camera, the distance from the carrier to the obstacle can be calculated according to the above equations (1) to (3).
Therefore, the embodiment of the invention can perform parallax calculation on the depth images acquired by the binocular camera to obtain the distances of all the obstacles contained in the depth image. Specifically, a disparity map carrying the depth information of each pixel in the depth image is first extracted, and the distances of all the obstacles contained in the depth image are then calculated from the disparity information carried in the disparity map; the specific distance calculation follows the derivation in the above embodiment.
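As an illustration of equations (1) to (3), the sketch below recovers the coordinates of a matched point from its disparity and turns a disparity map into a per-pixel distance map. It is only a sketch under the left-camera coordinate convention described above; baseline and focal (in pixels) come from the calibration of the binocular camera, and the function names are assumptions.

```python
import numpy as np

def triangulate(x_l: float, y_l: float, x_r: float, baseline: float, focal: float):
    """Recover (x, y, z) in the left-camera frame from a stereo match."""
    d = x_l - x_r                       # parallax d = x_l - x_r
    if d <= 0:
        return None                     # invalid match / point at infinity
    z = baseline * focal / d            # equation (3)
    return baseline * x_l / d, baseline * y_l / d, z

def depth_from_disparity(disparity: np.ndarray, baseline: float, focal: float) -> np.ndarray:
    """Per-pixel distance map; non-positive disparities are marked as infinite."""
    with np.errstate(divide="ignore", invalid="ignore"):
        z = baseline * focal / disparity
    z[disparity <= 0] = np.inf
    return z
```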
Step S506, generating an image matrix containing the distances of all obstacles;
In practical application, since various obstacles may exist in the moving direction of the carrier, parallax calculation is performed on the depth image to obtain the distances of all the obstacles, and an image matrix containing the distances of all the obstacles is generated.
Step S508, traversing the image matrix, and determining the distance corresponding to the obstacle with the minimum distance to the carrier in the distances of all the obstacles as the distance from the carrier to the obstacle;
At this time, the image matrix containing the distances of all the obstacles is traversed to obtain the smallest of these distances, i.e. the distance corresponding to the obstacle closest to the carrier, and this distance is determined as the distance from the carrier to the obstacle.
Step S510, judging whether the distance is larger than a preset threshold value; if not, executing steps S512-S514; if so, go to step S516;
step S512, inputting the depth image into a pre-trained motion decision model so that the motion decision model outputs a motion adjusting value of a carrier;
step S514, obtaining a motion adjusting value, and controlling the carrier to avoid the obstacle according to the motion adjusting value so as to reach the target position;
and step S516, planning a path from the carrier to the target position according to the kinematic constraint so that the carrier avoids the obstacle to reach the target position.
Therefore, the embodiment of the invention uses a binocular camera to collect depth images during the movement of the carrier and calculates the distance from the carrier to the obstacle based on the depth image. This improves the accuracy of the distance from the carrier to the obstacle, ensures that the carrier avoids the obstacle more safely, and further improves the obstacle avoidance effect of the carrier.
Further, in the embodiment of the invention, as shown in fig. 11, a depth image in the running direction of the carrier is first acquired and the distance from the carrier to the obstacle is calculated from the depth image. When the distance from the carrier to the obstacle is greater than the preset threshold, a path from the carrier to the target position is planned according to kinematic constraints; when the distance is smaller than the preset threshold, the depth image is input into the pre-trained motion decision model. Here, a reinforcement-learning reward-and-punishment rule is also formulated according to the distance from the carrier to the obstacle. Specifically, the rule employs the A3C (Asynchronous Advantage Actor-Critic) algorithm, which combines the advantages of the Q-learning algorithm and the Policy Gradient algorithm. The A3C algorithm trains a stochastic policy function and an action value function through the motion decision model; these determine the motion policy of the carrier, so that the carrier moves according to the motion policy, as shown in fig. 12, obtains the state at the next time step, and determines the reward value according to that state, thereby constructing the state space of the carrier, i.e. a path plan along which the carrier safely reaches the target position while avoiding the obstacle. Moreover, experiments and numerical analysis show that the A3C algorithm adapts well and accurately to environments of different scales and structures and runs efficiently, so that in the obstacle avoidance path planning of the carrier the situation in which the carrier mistakenly stops before reaching the target position can be effectively avoided, and the obstacle avoidance effect of the carrier is improved.
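For illustration only, the sketch below shows one possible shape of such a reward-and-punishment rule driven by the distance to the obstacle and by progress towards the target position; the numeric values and the safety margin are assumptions and are not specified by the patent.

```python
def reward(dist_to_obstacle: float, dist_to_goal: float, prev_dist_to_goal: float,
           collided: bool, reached: bool) -> float:
    if collided:
        return -100.0                        # heavy punishment for hitting an obstacle
    if reached:
        return +100.0                        # large reward for reaching the target position
    r = prev_dist_to_goal - dist_to_goal     # reward progress towards the target
    if dist_to_obstacle < 0.5:               # punish unsafe proximity (example margin, metres)
        r -= 1.0
    return r
```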
On the basis of the above embodiments, an embodiment of the present invention further provides a carrier obstacle avoidance device, and fig. 13 is a schematic view of the carrier obstacle avoidance device provided in the embodiment of the present invention. As shown in fig. 13, the carrier obstacle avoidance device includes:
the acquisition module 10 is used for acquiring a depth image in the carrier running direction;
a calculation module 20 for calculating a distance from the carrier to the obstacle based on the depth image;
the judging module 30 is used for judging whether the distance is greater than a preset threshold value;
the input module 40 is configured to input the depth image into a pre-trained motion decision model if the distance is not greater than the preset threshold, so that the motion decision model outputs the motion adjustment value of the carrier;
and the control module 50 is used for acquiring the motion adjusting value and controlling the carrier to avoid the obstacle according to the motion adjusting value so as to reach the target position.
Further, the motion decision model is obtained by training based on a full convolution neural network, and the carrier obstacle avoidance device is further configured to:
acquiring a pre-stored picture training set, wherein the picture training set comprises a plurality of standard depth images containing obstacles, and each standard depth image carries a motion adjustment label;
and inputting the picture training set into a full convolution neural network for training so as to obtain a motion decision model.
Further, the movement adjusting tag comprises a movement speed and a movement angle; the motion adjustment values of the carrier output by the motion decision model include a motion speed adjustment value and a motion angle adjustment value, and the control module 50 is further configured to:
acquiring the motion speed adjustment value and the motion angle adjustment value included in the motion adjustment value;
and adjusting the running speed and the movement angle of the carrier according to the movement speed adjusting value and the movement angle adjusting value so as to avoid the obstacle.
Further, the calculating module 20 is further configured to:
performing parallax calculation on the depth image to obtain the distances of all obstacles contained in the depth image;
generating an image matrix containing distances of all obstacles;
and traversing the image matrix, and determining the distance corresponding to the obstacle with the minimum distance to the carrier in the distances of all the obstacles as the distance from the carrier to the obstacle.
Further, the depth image is acquired by a binocular camera, and the calculating module 20 further includes:
and extracting a disparity map of depth information of each pixel in the depth image, and calculating the distances of all obstacles contained in the depth image according to the disparity information carried in the disparity map.
Further, the carrier obstacle avoidance device is further used for:
and when the distance is greater than a preset threshold value, planning a path from the carrier to the target position according to the kinematic constraint so that the carrier avoids the obstacle to reach the target position.
The carrier obstacle avoidance device provided by the embodiment of the invention has the same technical characteristics as the carrier obstacle avoidance method provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
An embodiment of the present invention further provides an electronic device, as shown in fig. 14, which is a schematic structural diagram of the electronic device, where the electronic device includes a processor 60 and a memory 61, where the memory 61 stores computer-executable instructions capable of being executed by the processor 60, and the processor 60 executes the computer-executable instructions to implement the above-mentioned carrier obstacle avoidance method.
In the embodiment shown in fig. 14, the electronic device further comprises a bus 62 and a communication interface 63, wherein the processor 60, the communication interface 63 and the memory 61 are connected by the bus 62.
The Memory 61 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 63 (which may be wired or wireless), and the internet, a wide area network, a local network, a metropolitan area network, and the like can be used. The bus 62 may be an ISA (Industry standard Architecture) bus, a PCI (Peripheral component interconnect) bus, an EISA (Extended Industry standard Architecture) bus, or the like. The bus 62 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one double-headed arrow is shown in FIG. 14, but that does not indicate only one bus or one type of bus.
The processor 60 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above-described carrier obstacle avoidance method may be implemented by an integrated logic circuit of hardware in the processor 60 or instructions in the form of software. The Processor 60 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the carrier obstacle avoidance method disclosed in the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and the processor 60 reads information in the memory and completes the steps of the carrier obstacle avoidance method of the foregoing embodiment in combination with hardware thereof.
An embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and when the computer-executable instructions are called and executed by a processor, the computer-executable instructions cause the processor to implement the above-mentioned carrier obstacle avoidance method, and specific implementation may refer to the foregoing method embodiment, and is not described herein again.
The computer program product provided in the embodiment of the present invention includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiment, and specific implementation may refer to the method embodiment, which is not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A carrier obstacle avoidance method is characterized by comprising the following steps:
acquiring a depth image in the running direction of the carrier;
calculating a distance of the carrier to an obstacle based on the depth image;
judging whether the distance is larger than a preset threshold value or not;
if not, inputting the depth image into a pre-trained motion decision model so that the motion decision model outputs a motion adjusting value of the carrier;
and acquiring the motion adjusting value, and controlling the carrier to avoid the obstacle according to the motion adjusting value so as to reach the target position.
2. The carrier obstacle avoidance method according to claim 1, wherein the motion decision model is a model trained based on a fully convolutional neural network, and the carrier obstacle avoidance method further comprises:
acquiring a pre-stored picture training set, wherein the picture training set comprises a plurality of standard depth images containing obstacles, and each standard depth image carries a motion adjustment label;
and inputting the picture training set into the fully convolutional neural network for training to obtain the motion decision model.
3. The carrier obstacle avoidance method according to claim 2, wherein the motion adjustment label comprises a movement speed and a movement angle; the motion adjusting value of the carrier output by the motion decision model comprises a motion speed adjusting value and a motion angle adjusting value;
the step of controlling the carrier to avoid the obstacle according to the motion adjustment value includes:
acquiring the motion speed adjusting value and the motion angle adjusting value which are included in the motion adjusting value;
and adjusting the running speed and the movement angle of the carrier according to the motion speed adjusting value and the motion angle adjusting value so as to avoid the obstacle.
4. The carrier obstacle avoidance method according to claim 1, wherein the step of calculating the distance from the carrier to the obstacle based on the depth image includes:
performing disparity calculation on the depth image to obtain the distances of all obstacles contained in the depth image;
generating an image matrix containing the distances of all the obstacles;
and traversing the image matrix, and determining the distance corresponding to the obstacle with the minimum distance to the carrier in the distances of all the obstacles as the distance from the carrier to the obstacle.
5. The carrier obstacle avoidance method according to claim 4, wherein the depth image is acquired by a binocular camera;
the step of performing disparity calculation on the depth image to obtain the distances of all obstacles contained in the depth image comprises:
and extracting a disparity map carrying the depth information of each pixel in the depth image, and calculating the distances of all obstacles contained in the depth image according to the disparity information carried in the disparity map.
6. The carrier obstacle avoidance method according to claim 1, further comprising:
and when the distance is larger than the preset threshold value, planning a path from the carrier to the target position according to a kinematic constraint, so as to enable the carrier to avoid the obstacle and reach the target position.
7. A carrier obstacle avoidance device, characterized by comprising:
the acquisition module is used for acquiring a depth image in the running direction of the carrier;
a calculation module for calculating a distance from the carrier to an obstacle based on the depth image;
the judging module is used for judging whether the distance is larger than a preset threshold value or not;
the input module is used for inputting the depth image into a pre-trained motion decision model when the distance is not larger than the preset threshold value, so that the motion decision model outputs a motion adjusting value of the carrier;
and the control module is used for acquiring the motion adjusting value and controlling the carrier to avoid the obstacle according to the motion adjusting value so as to reach a target position.
8. The carrier obstacle avoidance device according to claim 7, further comprising:
a planning module, configured to plan a path from the carrier to the target position according to a kinematic constraint when the distance is larger than the preset threshold value, so that the carrier avoids the obstacle and reaches the target position.
9. An electronic device comprising a processor and a memory, the memory storing computer-executable instructions executable by the processor, the processor executing the computer-executable instructions to implement the carrier obstacle avoidance method of any of claims 1 to 6.
10. A computer-readable storage medium having stored thereon computer-executable instructions that, when invoked and executed by a processor, cause the processor to implement the carrier obstacle avoidance method of any of claims 1 to 6.
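
Taken together, the steps of claim 1 form a sense-decide-act loop: capture a depth image in the running direction, estimate the distance to the nearest obstacle, and either keep following the planned path or ask the trained motion decision model for an adjustment. The minimal Python sketch below illustrates that loop; get_depth_image, nearest_obstacle_distance, decision_model, apply_motion, follow_planned_path and at_target are hypothetical callables standing in for the sensor driver, the distance step of claims 4 and 5, the trained model, the vehicle controller and the planner of claim 6, and the 1.0 m threshold is an assumed value rather than one specified in the patent.

    import time

    SAFE_DISTANCE_M = 1.0   # preset threshold; an assumed value, not one given in the patent

    def avoidance_loop(get_depth_image, nearest_obstacle_distance, decision_model,
                       apply_motion, follow_planned_path, at_target, period_s=0.1):
        """Sense-decide-act loop following the structure of claim 1."""
        while not at_target():
            depth = get_depth_image()                    # depth image in the running direction
            distance = nearest_obstacle_distance(depth)  # distance from the carrier to the nearest obstacle
            if distance > SAFE_DISTANCE_M:
                follow_planned_path()                    # clear ahead: keep the planned path (claim 6)
            else:
                speed_adj, angle_adj = decision_model(depth)  # model outputs the motion adjusting value
                apply_motion(speed_adj, angle_adj)            # steer or slow the carrier around the obstacle
            time.sleep(period_s)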
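
Claims 2 and 3 train the motion decision model on depth images labelled with a movement speed and a movement angle. One plausible realisation is a small fully convolutional regressor trained with a mean-squared-error loss, sketched below in PyTorch; the layer sizes, the random tensors standing in for the pre-stored picture training set, and all hyperparameters are illustrative assumptions rather than details taken from the patent.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    class MotionDecisionFCN(nn.Module):
        """Fully convolutional regressor: depth image -> (speed, angle) adjustment."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.head = nn.Conv2d(64, 2, kernel_size=1)  # 1x1 conv head keeps the network fully convolutional
            self.pool = nn.AdaptiveAvgPool2d(1)          # collapse the spatial dimensions

        def forward(self, x):
            return self.pool(self.head(self.features(x))).flatten(1)  # (N, 2)

    # Random tensors stand in for the pre-stored picture training set and its labels.
    images = torch.rand(256, 1, 96, 128)   # single-channel depth images
    labels = torch.rand(256, 2)            # (movement speed, movement angle) per image
    loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

    model = MotionDecisionFCN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(5):
        for batch_images, batch_labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(batch_images), batch_labels)
            loss.backward()
            optimizer.step()

At inference time the same forward pass yields the motion speed adjusting value and the motion angle adjusting value referred to in claim 3.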
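
Claims 4 and 5 obtain the obstacle distance by converting the per-pixel disparity from the binocular camera into depth and taking the minimum over the resulting matrix. A common way to compute such a disparity map is OpenCV's semi-global block matching, as in the sketch below, which starts from a rectified left/right grayscale pair; the focal length, baseline and matcher parameters are placeholders that would come from the actual camera calibration.

    import cv2
    import numpy as np

    FOCAL_PX = 700.0    # focal length in pixels (assumed; taken from calibration in practice)
    BASELINE_M = 0.12   # distance between the two cameras in metres (assumed)

    def nearest_obstacle_distance(left_gray, right_gray):
        """Rectified 8-bit stereo pair -> disparity map -> depth matrix -> minimum distance."""
        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
        disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

        valid = disparity > 0                 # zero or negative disparity carries no depth information
        depth = np.full(disparity.shape, np.inf, dtype=np.float32)
        depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]   # Z = f * B / d

        return float(depth.min())             # traverse the matrix, keep the closest obstacle

In practice the minimum would usually be taken over a region of interest ahead of the carrier rather than the whole frame, so that the ground plane and image borders do not dominate the result.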
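
Claim 6 falls back to path planning under kinematic constraints whenever the nearest obstacle is beyond the preset threshold. The patent does not name a particular planner; the sketch below shows one simple option, a dynamic-window-style search that rolls out constant speed and turn-rate commands on a unicycle model and keeps the command whose endpoint lands closest to the target, so the speed and turn-rate limits act as the kinematic constraints.

    import math

    MAX_SPEED = 1.0        # m/s, assumed speed limit of the carrier
    MAX_TURN_RATE = 1.0    # rad/s, assumed steering limit
    DT, HORIZON = 0.1, 20  # 2-second rollouts

    def plan_step(x, y, heading, target):
        """Return the (speed, turn rate) command whose rollout ends nearest the target."""
        best_cmd, best_dist = (0.0, 0.0), float("inf")
        for speed in (0.25 * MAX_SPEED, 0.5 * MAX_SPEED, MAX_SPEED):
            for turn in (-MAX_TURN_RATE, -0.5 * MAX_TURN_RATE, 0.0,
                         0.5 * MAX_TURN_RATE, MAX_TURN_RATE):
                px, py, ph = x, y, heading
                for _ in range(HORIZON):      # forward-simulate a unicycle model
                    px += speed * math.cos(ph) * DT
                    py += speed * math.sin(ph) * DT
                    ph += turn * DT
                dist = math.hypot(target[0] - px, target[1] - py)
                if dist < best_dist:
                    best_cmd, best_dist = (speed, turn), dist
        return best_cmd

A fuller planner would also score each rollout against the obstacle map and discard commands that pass too close to occupied space.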
CN201911212714.8A 2019-11-29 2019-11-29 Carrier obstacle avoidance method and device and electronic equipment Pending CN110956662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911212714.8A CN110956662A (en) 2019-11-29 2019-11-29 Carrier obstacle avoidance method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911212714.8A CN110956662A (en) 2019-11-29 2019-11-29 Carrier obstacle avoidance method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN110956662A true CN110956662A (en) 2020-04-03

Family

ID=69979283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911212714.8A Pending CN110956662A (en) 2019-11-29 2019-11-29 Carrier obstacle avoidance method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110956662A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107203134A (en) * 2017-06-02 2017-09-26 浙江零跑科技有限公司 A kind of front truck follower method based on depth convolutional neural networks
CN109213147A (en) * 2018-08-01 2019-01-15 上海交通大学 A kind of robot obstacle-avoiding method for planning track and system based on deep learning
CN109410234A (en) * 2018-10-12 2019-03-01 南京理工大学 A kind of control method and control system based on binocular vision avoidance
CN109275094A (en) * 2018-11-02 2019-01-25 北京邮电大学 A kind of continuous covering method of high energy efficiency unmanned plane covering point and a device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAN XIN: "Research on Dynamic Obstacle Avoidance for Wheeled Robots Based on Binocular Vision", China Master's Theses Full-text Database *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111309035A (en) * 2020-05-14 2020-06-19 浙江远传信息技术股份有限公司 Multi-robot cooperative movement and dynamic obstacle avoidance method, device, equipment and medium
CN111309035B (en) * 2020-05-14 2022-03-04 浙江远传信息技术股份有限公司 Multi-robot cooperative movement and dynamic obstacle avoidance method, device, equipment and medium
CN111352431A (en) * 2020-05-25 2020-06-30 北京小米移动软件有限公司 Movable touch display screen
CN111352431B (en) * 2020-05-25 2020-09-18 北京小米移动软件有限公司 Movable touch display screen
CN112720465A (en) * 2020-12-15 2021-04-30 大国重器自动化设备(山东)股份有限公司 Control method of artificial intelligent disinfection robot
CN112698653A (en) * 2020-12-23 2021-04-23 南京中朗智能技术有限公司 Robot autonomous navigation control method and system based on deep learning
CN113342031A (en) * 2021-05-18 2021-09-03 江苏大学 Missile track online intelligent planning method
CN113342031B (en) * 2021-05-18 2022-07-22 江苏大学 Missile track online intelligent planning method
CN113282088A (en) * 2021-05-21 2021-08-20 潍柴动力股份有限公司 Unmanned driving method, device and equipment of engineering vehicle, storage medium and engineering vehicle
CN115576329A (en) * 2022-11-17 2023-01-06 西北工业大学 Obstacle avoidance method of unmanned AGV (automatic guided vehicle) based on computer vision

Similar Documents

Publication Publication Date Title
CN110956662A (en) Carrier obstacle avoidance method and device and electronic equipment
US20220108546A1 (en) Object detection method and apparatus, and computer storage medium
US11232286B2 (en) Method and apparatus for generating face rotation image
CN113819890B (en) Distance measuring method, distance measuring device, electronic equipment and storage medium
JP2021523443A (en) Association of lidar data and image data
CN110363817B (en) Target pose estimation method, electronic device, and medium
CN111797983A (en) Neural network construction method and device
CN110097050B (en) Pedestrian detection method, device, computer equipment and storage medium
CN108367436B (en) Active camera movement determination for object position and range in three-dimensional space
CN112258565B (en) Image processing method and device
CN112036381B (en) Visual tracking method, video monitoring method and terminal equipment
CN110610130A (en) Multi-sensor information fusion power transmission line robot navigation method and system
Babu et al. An autonomous path finding robot using Q-learning
CN114091554A (en) Training set processing method and device
CN113112525A (en) Target tracking method, network model, and training method, device, and medium thereof
CN111985300A (en) Automatic driving dynamic target positioning method and device, electronic equipment and storage medium
CN114387462A (en) Dynamic environment sensing method based on binocular camera
CN112509126A (en) Method, device, equipment and storage medium for detecting three-dimensional object
CN114972182A (en) Object detection method and device
Prasetyo et al. Spatial Based Deep Learning Autonomous Wheel Robot Using CNN
CN113639782A (en) External parameter calibration method and device for vehicle-mounted sensor, equipment and medium
CN114972492A (en) Position and pose determination method and device based on aerial view and computer storage medium
CN115147809B (en) Obstacle detection method, device, equipment and storage medium
CN116246119A (en) 3D target detection method, electronic device and storage medium
EP4296896A1 (en) Perceptual network and data processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200403