CN118212383A - Depth sparse map-based backhaul charging method and system for small unmanned equipment - Google Patents

Depth sparse map-based backhaul charging method and system for small unmanned equipment

Info

Publication number
CN118212383A
CN118212383A CN202410629977.3A
Authority
CN
China
Prior art keywords
image
sparse map
depth
small unmanned
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410629977.3A
Other languages
Chinese (zh)
Inventor
罗彬
唐超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Tangmi Technology Co ltd
Original Assignee
Chengdu Tangmi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Tangmi Technology Co ltd filed Critical Chengdu Tangmi Technology Co ltd
Priority to CN202410629977.3A priority Critical patent/CN118212383A/en
Publication of CN118212383A publication Critical patent/CN118212383A/en
Pending legal-status Critical Current

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to a backhaul charging method and system for small unmanned equipment based on a depth sparse map, and belongs to the technical field of intelligent control. The method comprises the following steps: acquiring image data detected by a binocular vision sensor; calculating, based on the image data, depth information of a plurality of registration points in the image data; determining volumes of a plurality of target spaces based on the depth information of the plurality of registration points; constructing a depth sparse map based on the volumes of the plurality of target spaces; and controlling the small unmanned equipment to perform backhaul charging based on the depth sparse map. This approach is suited to small unmanned equipment: no additional sensors need to be fitted, which reduces configuration cost while leaving the form factor of the equipment unaffected.

Description

Depth sparse map-based backhaul charging method and system for small unmanned equipment
Technical Field
The invention relates to the technical field of intelligent control, in particular to a backhaul charging method and system for small unmanned equipment based on a depth sparse map.
Background
With the continuous development of Internet of Things technology, unmanned equipment is becoming increasingly widespread, covering fields such as logistics, manufacturing, agriculture, and urban traffic. Unmanned equipment includes cleaning robots, transport robots, and the like. Through various sensors such as cameras, radar, and inertial navigation, unmanned equipment achieves motion, navigation, perception, and decision making without direct human operation. Because unmanned equipment can work in severe weather or in places people cannot reach, it not only reduces labor cost but also widens the corresponding range of services.
The inventor has found that, during the operation of unmanned equipment, backhaul charging is an important and basic link. Unmanned equipment fitted with many sensors can easily complete backhaul charging tasks through multimodal data fusion, but accurately completing the backhaul charging task is a great challenge for small unmanned equipment that carries only a binocular vision sensor and has no pre-planned route. In other words, there is currently no way to achieve accurate backhaul charging control for small unmanned equipment: for reasons of economy and device form factor, most manufacturers do not fit many sensors on small unmanned equipment, yet it is still desired that such equipment can automatically and accurately complete backhaul charging tasks.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a backhaul charging method and a backhaul charging system for small unmanned equipment based on a depth sparse map.
In a first aspect, an embodiment of the present application provides a backhaul charging method for small unmanned equipment based on a depth sparse map, including: acquiring image data detected by a binocular vision sensor; calculating, based on the image data, depth information of a plurality of registration points in the image data; determining volumes of a plurality of target spaces based on the depth information of the plurality of registration points, the target space being a space in which the small unmanned equipment can move; constructing a depth sparse map based on the volumes of the plurality of target spaces, wherein in the depth sparse map the change factor of the volumes of N consecutive target spaces is taken as a child node, and the change factor formed from the volume at the moment the small unmanned equipment departs from the charging base station together with the volumes of N-1 consecutive target spaces is taken as the root node, N being a positive integer greater than or equal to 2; and controlling the small unmanned equipment to perform backhaul charging based on the depth sparse map.
In an optional implementation of the first aspect, the image data comprises a plurality of image pairs; each image pair includes a matching first image and second image; the first image is from a left-eye vision sensor and the second image is from a right-eye vision sensor; the binocular vision sensor includes the left-eye vision sensor and the right-eye vision sensor. Calculating, based on any one of the image pairs, the depth information of a plurality of registration points in the image pair comprises: acquiring the same plurality of registration points in the first image and the second image of the image pair; and calculating the depth information of each registration point from the same plurality of registration points in the first image and the second image.
In an optional implementation manner of the first aspect, before the acquiring the plurality of registration points in the pair of images, the method further includes: coordinate correction is carried out on registration points to be corrected in the first image and the second image, and a plurality of registration points in the first image and a plurality of registration points in the second image are determined; the coordinate correction of the registration point to be corrected in the first image comprises the following steps: determining the quadrant of the registration point to be corrected; acquiring a correction function corresponding to the quadrant where the registration point to be corrected is located; and carrying out coordinate correction on the registration points to be corrected based on the corresponding correction function.
In an optional implementation manner of the first aspect, determining the correction functions corresponding to different quadrants in a target image acquired by the left-eye vision sensor or the right-eye vision sensor includes: dividing the target image into 8 quadrants, namely dividing the target image into 4 quadrants based on the origin of the target image and the X axis and Y axis, and further dividing it into 8 quadrants by drawing a circle of a preset radius centered at the origin, wherein the target image is divided into four regions based on the X axis and the Y axis and each region comprises two quadrants, one inside and one outside the circle; and performing a multivariate quadratic regression calculation based on the reference points contained in the inner quadrant of each region, and determining the correction function corresponding to the quadrants contained in that region.
In an optional implementation manner of the first aspect, the obtaining the preset radius includes: determining the length and width of the target image; and determining the minimum value of one fourth of the length of the target image and one fourth of the width of the target image as the preset radius.
In an alternative embodiment of the first aspect, N has a value of 3.
In an optional implementation manner of the first aspect, the method further includes: continuously updating the depth sparse map while the small unmanned equipment moves; determining the updating mode of the depth sparse map by judging whether a new change factor overlaps the constructed depth sparse map; and when the new change factor overlaps the constructed depth sparse map, removing the overlapping part, reconstructing the new change factor, and removing repeated child nodes.
In an optional implementation manner of the first aspect, the controlling the small unmanned equipment to perform backhaul charging based on the depth sparse map includes: judging, at the current node of the small unmanned equipment, whether the change factor of a child node is satisfied; if so, maintaining the backhaul charging route of the small unmanned equipment; and if no child node satisfying the change factor exists, changing the backhaul direction of the small unmanned equipment until it returns to the root node.
In an optional implementation manner of the first aspect, the changing the backhaul direction of the small unmanned equipment includes: shifting the backhaul direction of the small unmanned equipment rightwards by a preset angle, wherein the preset angle is 10-20 degrees.
In a second aspect, an embodiment of the present application provides a backhaul charging system for small unmanned equipment based on a depth sparse map, including: an acquisition module for acquiring the image data detected by the binocular vision sensor; a calculation module for calculating, based on the image data, depth information of a plurality of registration points in the image data; a determining module for determining volumes of a plurality of target spaces based on the depth information of the plurality of registration points, the target space being a space in which the small unmanned equipment can move; a construction module for constructing a depth sparse map based on the volumes of the plurality of target spaces, wherein in the depth sparse map the change factor of the volumes of N consecutive target spaces is taken as a child node, and the change factor formed from the volume at the moment the small unmanned equipment departs from the charging base station together with the volumes of N-1 consecutive target spaces is taken as the root node, N being a positive integer greater than or equal to 2; and a charging module for controlling the small unmanned equipment to perform backhaul charging based on the depth sparse map.
The beneficial effects of the application include the following. First, with the depth sparse map-based backhaul charging method for small unmanned equipment, the whole process can automatically complete map construction using only the image data detected by the binocular vision sensor, and the backhaul charging task can then be completed according to the map.
Second, the application provides a brand-new way of constructing the depth sparse map, which takes the change factors of the volumes of target spaces calculated over a plurality of images as landmarks (child nodes) to form the topological structure of the depth sparse map; testing on small unmanned equipment shows that this approach gives higher control precision, so that the small unmanned equipment can accurately complete the backhaul charging task.
Drawings
Fig. 1 is a step flowchart of a backhaul charging method for a small-sized unmanned device based on a depth sparse map according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a first image division according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a second image division according to an embodiment of the present invention;
fig. 4 is a block diagram of a backhaul charging system of a small-sized unmanned device based on a depth sparse map according to an embodiment of the present invention;
Fig. 5 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted as "when", "once", "in response to a determination" or "in response to detection", depending on the context.
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The inventor has found that, during the operation of unmanned equipment, backhaul charging is an important and basic link. Unmanned equipment fitted with many sensors can easily complete backhaul charging tasks through multimodal data fusion, but accurately completing the backhaul charging task is a great challenge for small unmanned equipment that carries only a binocular vision sensor and has no pre-planned route. In other words, there is currently no way to achieve accurate backhaul charging control for small unmanned equipment: for reasons of economy and device form factor, most manufacturers do not fit many sensors on small unmanned equipment, yet it is still desired that such equipment can automatically and accurately complete backhaul charging tasks.
In view of the above problems, the present application proposes the following embodiments to solve the above technical problems.
Referring to fig. 1, an embodiment of the present application provides a backhaul charging method for a small-sized unmanned device based on a depth sparse map, including: steps 101 to 105.
Step 101: image data detected by the binocular vision sensor is acquired.
The binocular vision sensor is provided on the small unmanned equipment and includes a left-eye vision sensor and a right-eye vision sensor.
Here, the small unmanned equipment is controlled to acquire the image data detected by the binocular vision sensor from the moment the task starts.
Step 102: depth information of a plurality of registration points in the image data is calculated based on the image data.
Typically, binocular vision positioning matches registration points across the two vision sensors, and the depth of each registration point is then calculated from the image data detected by the two vision sensors.
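For illustration, the matching of registration points between the two views can be sketched as follows; the patent does not name a particular feature detector or matcher, so the use of OpenCV ORB features with brute-force Hamming matching here is purely an assumption.

```python
# Illustrative sketch only: ORB features with brute-force Hamming matching (OpenCV)
# are assumed, since the patent does not specify how registration points are matched.
import cv2


def match_registration_points(img_left, img_right, max_points=100):
    """Return matched pixel coordinates ((x_l, y_l), (x_r, y_r)) for registration points."""
    orb = cv2.ORB_create()
    kp_l, des_l = orb.detectAndCompute(img_left, None)
    kp_r, des_r = orb.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)
    pairs = []
    for m in matches[:max_points]:
        pairs.append((kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt))
    return pairs
```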
Step 103: based on depth information of the plurality of registration points, volumes of the plurality of target spaces are determined.
The target space is a space in which the small unmanned equipment can move. The target space may also be understood as the space between the small unmanned equipment and an obstacle.
The height of the target space may be matched to the height of the small unmanned device.
Step 104: a depth sparse map is constructed based on volumes of the plurality of target spaces.
In the depth sparse map, the change factor of the volumes of N consecutive target spaces is taken as a child node, and the change factor formed from the volume at the moment the small unmanned equipment departs from the charging base station together with the volumes of N-1 consecutive target spaces is taken as the root node; N is a positive integer greater than or equal to 2.
In other words, the embodiment of the present application forms the topological structure of the depth sparse map by using the change factors of the target-space volumes calculated over a plurality of images as landmarks (child nodes), as sketched below.
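A minimal sketch of such a topology structure is given below; the class names, fields, and helper method are illustrative assumptions rather than the patent's own definitions.

```python
# Minimal sketch of the depth sparse map as a topology tree; naming is hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class MapNode:
    change_factor: float                        # change factor of N consecutive target-space volumes
    children: List["MapNode"] = field(default_factory=list)


@dataclass
class DepthSparseMap:
    root: MapNode                               # root node: change factor at departure from the charging base station

    def add_landmark(self, parent: MapNode, change_factor: float) -> MapNode:
        """Append a new landmark (child node) observed while the equipment moves."""
        child = MapNode(change_factor)
        parent.children.append(child)
        return child
```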
Step 105: and controlling the small unmanned equipment to carry out backhaul charging based on the depth sparse map.
Finally, the small unmanned equipment can be guided to complete the backhaul charging task according to the topological relations of the constructed depth sparse map.
In summary, the small unmanned equipment backhaul charging method based on the depth sparse map provided by the embodiment of the application has the following beneficial effects:
Firstly, with the depth sparse map-based backhaul charging method for small unmanned equipment, the whole process can automatically complete map construction using only the image data detected by the binocular vision sensor, and the backhaul charging task can then be completed according to the map.
Secondly, the application provides a brand-new way of constructing the depth sparse map, which takes the change factors of the volumes of target spaces calculated over a plurality of images as landmarks (child nodes) to form the topological structure of the depth sparse map; testing on small unmanned equipment shows that this approach gives higher control precision, so that the small unmanned equipment can accurately complete the backhaul charging task.
Optionally, the image data comprises a plurality of image pairs; each image pair includes a matching first image and second image; the first image is from the left-eye vision sensor and the second image is from the right-eye vision sensor.
Here, the images in the image data are all RGB images.
The above image data may be specifically expressed as D = {(I_l^1, I_r^1), (I_l^2, I_r^2), ..., (I_l^n, I_r^n)}, where (I_l^i, I_r^i) represents the i-th image pair, I_l^i represents the first image in the i-th image pair, and I_r^i represents the second image in the i-th image pair; the remaining symbols are interpreted by analogy with the i-th image pair.
Accordingly, based on any one image pair, depth information of a plurality of registration points in the image pair is calculated, including: acquiring the same multiple registration points in the first image and the second image in the image pair; and calculating depth information of each registration point by using the same plurality of registration points in the first image and the second image.
For example, the same plurality of registration points in the first image and the second image of the image pair may be expressed as P_i = (p_l^i, p_r^i), where P_i represents the i-th registration point, p_l^i is the i-th registration point as it appears in the first image, and p_r^i is the i-th registration point as it appears in the second image; the remaining symbols are interpreted by analogy with the i-th registration point.
Then, the depth information of each registration point may be calculated from the corresponding registration points in the first image and the second image. Let (X, Y, Z) denote the three-dimensional space coordinates of the registration point, i.e. its depth information; B the baseline of the binocular vision sensor; f_x and f_y two intermediate parameters; α the horizontal viewing angle and β the vertical viewing angle, with W being the width of the image; and x_l and x_r the X-axis coordinate values of the registration point in the first image and in the second image respectively. A binocular triangulation of the standard form f_x = W / (2·tan(α/2)), Z = B·f_x / (x_l − x_r) may then be applied, with the X and Y coordinates recovered from the image coordinates, the depth Z, and the viewing angles.
Through the calculation, accurate depth information of each registration point can be obtained.
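The exact formula appears only as an image in the original publication; the sketch below therefore assumes a standard binocular triangulation consistent with the symbols described above (baseline B, horizontal and vertical viewing angles, image width W and height H, and the X-axis coordinates in the two views).

```python
# Sketch of a standard binocular triangulation consistent with the symbols above;
# the patent's exact formula is published only as an image, so this form is assumed.
import math


def registration_point_depth(x_l, y_l, x_r, B, W, H, hfov_deg, vfov_deg):
    """Return (X, Y, Z) for a registration point observed at x_l (left view) and x_r (right view)."""
    f_x = W / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))   # intermediate parameter from the horizontal viewing angle
    f_y = H / (2.0 * math.tan(math.radians(vfov_deg) / 2.0))   # intermediate parameter from the vertical viewing angle
    disparity = x_l - x_r
    if disparity == 0:
        raise ValueError("zero disparity: point is effectively at infinity")
    Z = B * f_x / disparity                                    # depth from baseline and disparity
    X = (x_l - W / 2.0) * Z / f_x                              # back-projection to camera coordinates
    Y = (y_l - H / 2.0) * Z / f_y
    return X, Y, Z
```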
Optionally, before acquiring the same plurality of registration points in the first image and the second image in the image pair, the method further includes: coordinate correction is carried out on registration points to be corrected in the first image and the second image, and a plurality of registration points in the first image and a plurality of registration points in the second image are determined; the method for correcting the coordinates of the registration points to be corrected in the first image comprises the following steps: determining the quadrant of the registration point to be corrected; acquiring a correction function corresponding to the quadrant where the registration point to be corrected is located; and carrying out coordinate correction on the to-be-corrected registration points based on the corresponding correction function.
Considering that the binocular vision sensor is prone to losing calibration, i.e. calibration failure or image distortion, the embodiment of the application also provides a method for correcting the registration points, so as to improve subsequent calculation accuracy and control accuracy.
In one embodiment, the image may be segmented into 4 quadrants; different correction functions are preset in each quadrant, so that after the registration points to be corrected are determined later, the correction function corresponding to the quadrant where the registration points to be corrected are located is found out for coordinate correction.
The correction function may be set based on the coordinate deviation in the history image.
Optionally, the embodiment of the present application provides a method for determining the correction functions, that is, determining the correction functions corresponding to different quadrants in a target image acquired by the left-eye vision sensor or the right-eye vision sensor, including: dividing the target image into 8 quadrants, namely dividing the target image into 4 quadrants based on the origin of the target image and the X axis and Y axis, and further dividing it into 8 quadrants by drawing a circle of a preset radius centered at the origin, wherein the target image is divided into four regions based on the X axis and the Y axis and each region comprises two quadrants, one inside and one outside the circle; and performing a multivariate quadratic regression calculation based on the reference points contained in the inner quadrant of each region, and determining the correction function corresponding to the quadrants contained in that region.
As shown in fig. 2, the target image is here divided into 8 quadrants. Specifically, four regions are first divided based on the X-axis and the Y-axis, and a circle of a preset radius centered at the origin then divides each region into two parts, one inside and one outside the circle.
In an embodiment, the quadrants outside the circle may be numbered counterclockwise as quadrant 1, quadrant 2, quadrant 3 and quadrant 4, and the quadrants inside the circle counterclockwise as quadrant 5, quadrant 6, quadrant 7 and quadrant 8. Quadrant 1 and quadrant 5 are the two quadrants belonging to the same region; quadrant 2 and quadrant 6 belong to the same region; quadrant 3 and quadrant 7 belong to the same region; and quadrant 4 and quadrant 8 belong to the same region.
Then, the correction function corresponding to quadrant 1 can be obtained by performing a quadratic regression calculation on the reference points included in quadrant 5. Similarly, the correction function corresponding to quadrant 2 can be obtained by performing a quadratic regression calculation on the reference points included in quadrant 6.
The correction function obtained by the above quadratic regression calculation may take the form f_1(u, v) = a_0 + a_1·u + a_2·v + a_3·u² + a_4·v² + a_5·u·v, where f_1 denotes the correction function corresponding to quadrant 1, a_0 to a_5 denote the calculated correction coefficients, and u and v denote the coordinates of a reference point in quadrant 5. It should be noted that the above expression takes the correction function f_1 corresponding to quadrant 1 as an example; the correction functions corresponding to the other quadrants follow the same form.
After the correction function f_1 corresponding to quadrant 1 is obtained, coordinate correction may be performed on the registration points to be corrected in quadrant 1 based on f_1: the corrected X-axis and Y-axis coordinate values (x', y') are obtained by applying the correction function to the coordinate values (x_0, y_0) of the registration point to be corrected. It should be noted that the coordinate correction for the other quadrants is carried out in the same way with the corresponding correction function.
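The sketch below illustrates one way to fit and apply such a per-quadrant correction; the bivariate quadratic form and the least-squares fit are assumptions, since the patent's own expressions are given only as images.

```python
# Sketch of fitting and applying a per-quadrant quadratic correction; the bivariate
# quadratic basis and least-squares fitting are assumed, not taken from the patent.
import numpy as np


def fit_correction(observed_points, reference_points):
    """Fit corrected x' and y' as quadratic functions of (x, y) from the inner quadrant's reference points."""
    pts = np.asarray(observed_points, dtype=float)    # shape (n, 2): observed (x, y)
    tgt = np.asarray(reference_points, dtype=float)   # shape (n, 2): reference (x', y')
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * x, y * y, x * y])
    coeff_x, *_ = np.linalg.lstsq(A, tgt[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, tgt[:, 1], rcond=None)
    return coeff_x, coeff_y


def apply_correction(point, coeff_x, coeff_y):
    """Apply the fitted correction to a registration point to be corrected."""
    x, y = point
    basis = np.array([1.0, x, y, x * x, y * y, x * y])
    return float(basis @ coeff_x), float(basis @ coeff_y)
```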
Optionally, the step of obtaining the preset radius includes: determining the length and width of a target image; a minimum value of one fourth of the length of the target image and one fourth of the width of the target image is determined as a preset radius.
That is, the preset radius takes the value r = min(L/4, W/4), where L represents the length of the target image and W represents the width of the target image.
It should be noted that determining the preset radius as the minimum of one quarter of the length and one quarter of the width of the target image keeps the quadrant division within the bounds of the image and, at the same time, improves the accuracy of the correction function.
In other embodiments, the preset radius may also be a set value, which is not limited herein.
Referring to fig. 3, in another embodiment, the target image may be further divided into 8 quadrants by setting a rectangular frame. The size of the rectangular frame may be set according to the requirement or the size of the target image, which is not limited in this case.
The calculation of the volumes of the plurality of target spaces is described below. The volume V of a target space may be calculated as V = S · mean(Z), where Z comes from the three-dimensional space coordinates of the registration points, i.e. their depth information, S represents the area of the closed figure formed by the outermost registration points, and mean(·) denotes the mean value.
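A sketch of this volume calculation is given below; reading the formula as the polygon area of the outermost registration points multiplied by the mean depth is an assumption, since the original formula is given only as an image.

```python
# Sketch of the target-space volume as (area of the closed figure formed by the
# outermost registration points) x (mean depth); this interpretation is assumed.
def target_space_volume(boundary_xy, depths):
    """boundary_xy: outermost registration points [(x, y), ...] in order; depths: Z values of the registration points."""
    n = len(boundary_xy)
    area = 0.0
    for i in range(n):                       # shoelace formula for the closed polygon
        x1, y1 = boundary_xy[i]
        x2, y2 = boundary_xy[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    area = abs(area) / 2.0
    mean_depth = sum(depths) / len(depths)
    return area * mean_depth
```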
After the volumes of a plurality of target spaces are obtained, the volume change factor can be calculated from the volumes of three consecutive target spaces (V_1, V_2, V_3). That is, the change factor is evaluated over an interval of consecutive volumes, and the length of that interval is set to 3; the volumes V referred to here are as described in the preceding embodiments.
That is, in the embodiment of the present application the value of N is 3. Using the change factor of the volumes of three target spaces as a child node keeps the amount of calculation under control while still ensuring the accuracy of the subsequent control.
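As a sketch, the change factor over a window of three consecutive volumes might be computed as below; the averaged relative change used here is an assumption, since the patent's own formula is given only as an image.

```python
# Sketch only: the change factor over N = 3 consecutive volumes, taken here as the
# mean relative change between successive volumes; this definition is assumed.
def volume_change_factor(volumes):
    """volumes: three consecutive target-space volumes, e.g. [V1, V2, V3]."""
    steps = [(b - a) / a for a, b in zip(volumes, volumes[1:]) if a != 0]
    return sum(steps) / len(steps) if steps else 0.0
```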
Optionally, the method further comprises: continuously updating the depth sparse map while the small unmanned equipment moves; determining the updating mode of the depth sparse map by judging whether a new change factor overlaps the constructed depth sparse map; and when the new change factor overlaps the constructed depth sparse map, removing the overlapping part, reconstructing the new change factor, and removing repeated child nodes.
In other words, the depth sparse map is continuously updated while the small unmanned equipment moves. In this way the latest state of the map is always available, so that when the environment changes the equipment can respond in time and adjust its route.
In addition, in this embodiment the updating mode of the depth sparse map is determined by judging whether a new change factor overlaps the constructed depth sparse map. Specifically, when a new change factor overlaps the constructed depth sparse map, the overlapping part is removed, the new change factor is reconstructed, and repeated child nodes are removed so as to minimize the depth of the tree. This reduces the storage pressure on the small unmanned equipment and avoids the situation in which limited storage prevents the topology tree from being stored at full depth, so that the subsequent backhaul charging task could not be completed.
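The following sketch illustrates this update rule, reusing the hypothetical MapNode class from the earlier sketch; treating change factors within a numerical tolerance as overlapping is an assumption.

```python
# Sketch of the update rule above, reusing the hypothetical MapNode class from the
# earlier sketch; the tolerance-based notion of "overlapping" is assumed.
def insert_change_factor(parent, change_factor, tol=1e-3):
    """Add a new landmark under `parent` unless it overlaps an existing child node."""
    for child in parent.children:
        if abs(child.change_factor - change_factor) <= tol:
            return child                 # overlapping part: reuse the existing node, keep the tree shallow
    node = MapNode(change_factor)        # MapNode as defined in the earlier sketch
    parent.children.append(node)
    return node
```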
Optionally, the controlling the small unmanned equipment to perform backhaul charging based on the depth sparse map includes: judging, at the current node of the small unmanned equipment, whether the change factor of a child node is satisfied; if so, maintaining the backhaul charging route of the small unmanned equipment; and if no child node satisfying the change factor exists, changing the backhaul direction of the small unmanned equipment until it returns to the root node.
It should be noted that the depth sparse map constructed by the embodiment of the application is specifically a topology tree. During the return trip, the equipment judges whether the change factor of a child node is satisfied; if so, it keeps its travel route. Once no node in the tree satisfies the change factor value, it changes its travel direction, and this continues until it returns to the root node.
Optionally, changing the backhaul direction of the small unmanned equipment includes: shifting the backhaul direction of the small unmanned equipment rightwards by a preset angle, wherein the preset angle is 10-20 degrees.
The preset angle may specifically be 15°. By shifting the backhaul direction of the small unmanned equipment 10-20 degrees to the right, subsequent path planning and map updating need only small adjustments, which improves the operating efficiency of the small unmanned equipment.
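A sketch of the backhaul control loop described above follows; observe_change_factor(), keep_course(), turn_right() and at_base_station() stand in for device-specific sensing and motion routines and are hypothetical.

```python
# Sketch of the backhaul control described above; the four callbacks are hypothetical
# stand-ins for device-specific sensing and motion routines.
def _all_nodes(node):
    """Traverse the topology tree rooted at `node` (MapNode from the earlier sketch)."""
    yield node
    for child in node.children:
        yield from _all_nodes(child)


def backhaul_to_base(tree_map, observe_change_factor, keep_course, turn_right,
                     at_base_station, tol=1e-3, preset_angle=15.0):
    """Guide the equipment back to the root node (the charging base station)."""
    while not at_base_station():
        observed = observe_change_factor()
        matched = any(abs(n.change_factor - observed) <= tol
                      for n in _all_nodes(tree_map.root))
        if matched:
            keep_course()               # a child node's change factor is satisfied: keep the route
        else:
            turn_right(preset_angle)    # otherwise shift the backhaul direction right by the preset angle
```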
In summary, the embodiments of the present application provide a backhaul charging method for small unmanned equipment based on a depth sparse map, addressing the fact that in the prior art a binocular camera alone cannot meet the requirements of executing a backhaul task. The method constructs a sparse map by calculating the change factor of the space volume and using it as a landmark, builds a topology tree with the charging base station as the root node and the other landmarks as child nodes, continuously updates the topology tree during task execution to optimize the depth of the tree, and completes the guidance of backhaul charging by means of the structural description of the scene provided by the topology tree. In addition, since the binocular camera is prone to losing calibration (calibration failure), the application provides a quadrant-based correction method for the depth calculation, so as to ensure high calculation accuracy even when calibration is lost.
Referring to fig. 4, based on the same inventive concept, an embodiment of the present application provides a backhaul charging system 400 for a small-sized unmanned device based on a depth sparse map, including:
An acquisition module 401 for acquiring image data detected by the binocular vision sensor;
a calculation module 402, configured to calculate depth information of a plurality of registration points in the image data based on the image data;
A determining module 403, configured to determine volumes of a plurality of target spaces based on depth information of the plurality of registration points; the target space is a space in which the small unmanned equipment can move;
A construction module 404 for constructing a depth sparse map based on the volumes of the plurality of target spaces; in the depth sparse map, the change factor of the volumes of N consecutive target spaces is taken as a child node, and the change factor formed from the volume at the moment the small unmanned equipment departs from the charging base station together with the volumes of N-1 consecutive target spaces is taken as the root node; N is a positive integer greater than or equal to 2;
And the charging module 405 is configured to control the small unmanned equipment to perform backhaul charging based on the depth sparse map.
Referring to fig. 5, based on the same inventive concept, an embodiment of the present application provides a module frame of an electronic device 500 applying the above method. The electronic device 500 includes: at least one processor 501 (only one shown in fig. 5), a memory 502, a computer program 503 stored in the memory 502 and executable on the at least one processor 501, the processor 501 implementing the steps of the method in any of the embodiments described above when executing the computer program 503.
The electronic device 500 may be a server, a personal computer, a notebook computer, or the like.
It will be appreciated by those skilled in the art that fig. 5 is merely an example of an electronic device 500 and is not meant to be limiting of the electronic device 500, and may include more or fewer components than shown, or may combine certain components, or different components.
The processor 501 may be a central processing unit (CPU); the processor 501 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 502 may in some embodiments be an internal storage unit of the electronic device 500, such as a hard disk or a memory of the electronic device 500. The memory 502 may also be an external storage device of the electronic device 500 in other embodiments, such as a plug-in hard disk provided on the electronic device 500, a smart media card (SMC), a secure digital (SD) card, a flash card, etc. Further, the memory 502 may also include both an internal storage unit and an external storage device of the electronic device 500.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps for implementing the various method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that enable the implementation of the method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiments by instructing the related hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the camera device/electronic apparatus, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or illustrated in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. The backhaul charging method for the small unmanned equipment based on the depth sparse map is characterized by comprising the following steps of:
Acquiring image data detected by a binocular vision sensor;
Calculating depth information of a plurality of registration points in the image data based on the image data;
Determining volumes of a plurality of target spaces based on depth information of the plurality of registration points; the target space is a space in which the small unmanned equipment can move;
Constructing a depth sparse map based on the volumes of the plurality of target spaces; in the depth sparse map, the change factor of the volumes of N consecutive target spaces is taken as a child node, and the change factor formed from the volume at the moment the small unmanned equipment departs from the charging base station together with the volumes of N-1 consecutive target spaces is taken as the root node; N is a positive integer greater than or equal to 2;
and controlling the small unmanned equipment to carry out backhaul charging based on the depth sparse map.
2. The depth sparse map based small unmanned device backhaul charging method of claim 1, wherein the image data comprises a plurality of image pairs; each image pair includes a matching first image and second image; the first image is from a left-eye vision sensor and the second image is from a right-eye vision sensor; the binocular vision sensor includes the left-eye vision sensor and the right-eye vision sensor;
based on any one of the image pairs, depth information for a plurality of registration points in the image pair is calculated, comprising:
acquiring the same plurality of registration points in the first image and the second image in the image pair;
and calculating depth information of each registration point by the same plurality of registration points in the first image and the second image.
3. The depth sparse map based small unmanned device backhaul charging method of claim 2, wherein prior to the acquiring the same plurality of registration points in the first image and the second image in the image pair, the method further comprises:
coordinate correction is carried out on registration points to be corrected in the first image and the second image, and a plurality of registration points in the first image and a plurality of registration points in the second image are determined;
the coordinate correction of the registration point to be corrected in the first image comprises the following steps:
determining the quadrant of the registration point to be corrected;
acquiring a correction function corresponding to the quadrant where the registration point to be corrected is located;
and carrying out coordinate correction on the registration points to be corrected based on the corresponding correction function.
4. The depth sparse map based small unmanned device backhaul charging method of claim 3, wherein determining the correction functions corresponding to different quadrants in a target image acquired by the left-eye vision sensor or the right-eye vision sensor comprises:
Dividing the target image into 8 quadrants, namely dividing the target image into 4 quadrants based on the origin of the target image and the X axis and Y axis, and further dividing it into 8 quadrants by drawing a circle of a preset radius centered at the origin; wherein the target image is divided into four regions based on the X axis and the Y axis, and each region comprises two quadrants, one inside and one outside the circle;
And performing a multivariate quadratic regression calculation based on the reference points contained in the inner quadrant of each region, and determining the correction function corresponding to the quadrants contained in that region.
5. The depth sparse map-based small unmanned device backhaul charging method of claim 4, wherein obtaining the preset radius comprises:
Determining the length and width of the target image;
And determining the minimum value of one fourth of the length of the target image and one fourth of the width of the target image as the preset radius.
6. The depth sparse map based small unmanned device backhaul charging method of claim 1, wherein N has a value of 3.
7. The depth sparse map based small unmanned device backhaul charging method of claim 1, further comprising:
continuously updating the depth sparse map in the moving process of the small unmanned equipment;
Determining an updating mode of the depth sparse map by judging whether a new change factor overlaps the constructed depth sparse map; and when the new change factor overlaps the constructed depth sparse map, removing the overlapping part, reconstructing the new change factor, and removing the repeated child nodes.
8. The depth sparse map based small unmanned device backhaul charging method of claim 1, wherein the controlling the small unmanned device to perform backhaul charging based on the depth sparse map comprises:
Judging, at the current node of the small unmanned device, whether the change factor of a child node is satisfied;
If so, maintaining the backhaul charging route of the small unmanned device;
And if no child node satisfying the change factor exists, changing the backhaul direction of the small unmanned device until it returns to the root node.
9. The depth sparse map based small unmanned device backhaul charging method of claim 8, wherein the changing the backhaul direction of the small unmanned device comprises:
changing the return direction of the small unmanned equipment, and shifting a preset angle rightwards;
wherein the preset angle is 10-20 degrees.
10. A backhaul charging system for small unmanned equipment based on a depth sparse map, characterized by comprising:
the acquisition module is used for acquiring the image data detected by the binocular vision sensor;
a calculation module for calculating depth information of a plurality of registration points in the image data based on the image data;
a determining module, configured to determine volumes of a plurality of target spaces based on depth information of the plurality of registration points; the target space is a space in which the small unmanned equipment can move;
The construction module is used for constructing a depth sparse map based on the volumes of the plurality of target spaces; in the depth sparse map, the change factor of the volumes of N consecutive target spaces is taken as a child node, and the change factor formed from the volume at the moment the small unmanned equipment departs from the charging base station together with the volumes of N-1 consecutive target spaces is taken as the root node; N is a positive integer greater than or equal to 2;
And the charging module is used for controlling the small unmanned equipment to carry out backhaul charging based on the depth sparse map.
CN202410629977.3A 2024-05-21 2024-05-21 Depth sparse map-based backhaul charging method and system for small unmanned equipment Pending CN118212383A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410629977.3A CN118212383A (en) 2024-05-21 2024-05-21 Depth sparse map-based backhaul charging method and system for small unmanned equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410629977.3A CN118212383A (en) 2024-05-21 2024-05-21 Depth sparse map-based backhaul charging method and system for small unmanned equipment

Publications (1)

Publication Number Publication Date
CN118212383A true CN118212383A (en) 2024-06-18

Family

ID=91453888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410629977.3A Pending CN118212383A (en) 2024-05-21 2024-05-21 Depth sparse map-based backhaul charging method and system for small unmanned equipment

Country Status (1)

Country Link
CN (1) CN118212383A (en)

Similar Documents

Publication Publication Date Title
CN108765487B (en) Method, device, equipment and computer readable storage medium for reconstructing three-dimensional scene
CN108827249B (en) Map construction method and device
EP4080248A1 (en) Method and apparatus for vehicle positioning, controller, smart car and system
CN107636679A (en) A kind of obstacle detection method and device
CN108280866B (en) Road point cloud data processing method and system
CN113074727A (en) Indoor positioning navigation device and method based on Bluetooth and SLAM
CN110470333B (en) Calibration method and device of sensor parameters, storage medium and electronic device
CN104517275A (en) Object detection method and system
CN112166458B (en) Target detection and tracking method, system, equipment and storage medium
CN113706702A (en) Mining area three-dimensional map construction system and method
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
CN111856499B (en) Map construction method and device based on laser radar
CN111681172A (en) Method, equipment and system for cooperatively constructing point cloud map
CN111950428A (en) Target obstacle identification method and device and carrier
CN117824666A (en) Two-dimensional code pair for fusion positioning, two-dimensional code calibration method and fusion positioning method
CN112233149A (en) Scene flow determination method and device, storage medium and electronic device
CN112912894A (en) Road boundary identification method and device
CN116228917A (en) Intersection surface virtual lane line generation method and device based on high-precision map
CN118212383A (en) Depth sparse map-based backhaul charging method and system for small unmanned equipment
CN112396630A (en) Method and device for determining state of target object, storage medium and electronic device
CN112115930B (en) Method and device for determining pose information
CN113390425B (en) Map data processing method, device, equipment and storage medium
CN113902047A (en) Image element matching method, device, equipment and storage medium
CN113932793A (en) Three-dimensional coordinate positioning method and device, electronic equipment and storage medium
CN116917936A (en) External parameter calibration method and device for binocular camera

Legal Events

Date Code Title Description
PB01 Publication