CN117635865A - Map true value generation method, device, equipment and storage medium - Google Patents

Map true value generation method, device, equipment and storage medium

Info

Publication number
CN117635865A
Authority
CN
China
Prior art keywords
cloud data
map
point cloud
image
equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311793259.1A
Other languages
Chinese (zh)
Inventor
李宗剑
郑彬
魏华敬
谢锴
张新
谷靖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Huitian Aerospace Technology Co Ltd
Original Assignee
Guangdong Huitian Aerospace Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Huitian Aerospace Technology Co Ltd filed Critical Guangdong Huitian Aerospace Technology Co Ltd
Priority to CN202311793259.1A priority Critical patent/CN117635865A/en
Publication of CN117635865A publication Critical patent/CN117635865A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the technical field of computers, and discloses a map true value generation method, device, equipment and storage medium. According to the invention, view cone interception is performed on map point cloud data to obtain intercepted point cloud data within the field of view of the equipment, wherein the map point cloud data contains object texture information; a projection image is generated based on the scene image and the conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system; feature point extraction is performed on the projection image and the scene image to determine a plurality of key feature point pairs; and parameter calculation is performed based on the key feature point pairs, the conversion point cloud data and the equipment calibration external parameters to determine target conversion parameters, based on which a target map true value is determined. This method improves the accuracy and efficiency of map true value generation, achieves full scene coverage with good generality, can generate accurate true value data in a streamlined, batch manner, and meets the demand for true value production under airborne large-field-of-view, long-range conditions.

Description

Map true value generation method, device, equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a map true value generation method, apparatus, device, and storage medium.
Background
Current methods that determine truth data from point cloud data cannot guarantee the accuracy of the truth data; in addition, their coverage area is small and they cannot meet truth production requirements under arbitrary conditions. As a result, current dense binocular networks lack accurate, batch-produced truth data to supervise deep learning network training.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The invention mainly aims to provide a map true value generation method, device, equipment and storage medium, so as to solve the technical problem of providing a method capable of accurately generating true value data in a streamlined, batch manner.
In order to achieve the above object, the present invention provides a map true value generation method, which includes the steps of:
performing view cone interception on map point cloud data to obtain intercepted point cloud data within the field of view of the equipment, wherein the map point cloud data contains object texture information;
generating a projection image based on a scene image and conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system;
performing feature point extraction on the projection image and the scene image to determine a plurality of key feature point pairs;
and performing parameter calculation based on each key feature point pair, the conversion point cloud data and the equipment calibration external parameters to determine target conversion parameters, and determining a target map true value based on the target conversion parameters.
Optionally, the performing view cone interception on the map point cloud data to obtain intercepted point cloud data within the field of view of the equipment includes:
acquiring an initial binocular image and pose information of the navigation equipment;
performing image preprocessing on the initial binocular image to obtain a scene image;
and performing view cone interception on the map point cloud data according to the pose information and the scene image to obtain the intercepted point cloud data within the field of view of the equipment.
Optionally, the performing view cone interception on the map point cloud data according to the pose information and the scene image to obtain the intercepted point cloud data within the field of view of the equipment includes:
determining a target view cone and view cone parameters of the target view cone according to the pose information and image parameters corresponding to the scene image;
determining a spatial relationship between the map point cloud data and the target view cone according to the view cone parameters;
and intercepting the map point cloud data according to the spatial relationship between the map point cloud data and the target view cone to obtain the intercepted point cloud data within the field of view of the equipment.
Optionally, before the generating a projection image based on the scene image and the conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system, the method further includes:
acquiring navigation track data and odometer data;
calibrating external parameters between the navigation equipment and the image acquisition equipment according to the navigation track data and the odometer data to obtain the equipment calibration external parameters;
performing coordinate transformation on the map point cloud data according to the equipment calibration external parameters to obtain initial point cloud data in the equipment space coordinate system;
and performing grid processing on the initial point cloud data to obtain the conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system.
Optionally, the performing feature point extraction on the projection image and the scene image to determine a plurality of key feature point pairs includes:
performing key point extraction on the projection image and the scene image to determine a plurality of first feature point pairs;
applying rule constraints to the plurality of first feature point pairs according to the equipment calibration external parameters to screen a plurality of second feature point pairs out of the plurality of first feature point pairs;
and screening the plurality of second feature point pairs according to a preset grid division mode to determine the plurality of key feature point pairs.
Optionally, the performing parameter calculation based on each key feature point pair, the conversion point cloud data and the equipment calibration external parameters to determine target conversion parameters includes:
determining three-dimensional coordinates, in the equipment space coordinate system, of each key feature point on the projection image according to each key feature point pair and the conversion point cloud data;
performing pose calculation according to the three-dimensional coordinates of each key feature point on the projection image in the equipment space coordinate system and each key feature point pair to determine conversion correction parameters;
and performing parameter calculation according to the conversion correction parameters and the equipment calibration external parameters to determine the target conversion parameters.
Optionally, the determining a target map true value based on the target conversion parameters includes:
performing image conversion on the projection image according to the target conversion parameters to obtain a map true value depth image;
performing background point filtering on the map true value depth image;
and obtaining the target map true value according to the filtering result.
In addition, in order to achieve the above object, the present invention also provides a map true value generating device, which includes:
the intercepting module is used for performing view cone interception on map point cloud data to obtain intercepted point cloud data within the field of view of the equipment, wherein the map point cloud data contains object texture information;
the generation module is used for generating a projection image based on a scene image and conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system;
the extraction module is used for performing feature point extraction on the projection image and the scene image to determine a plurality of key feature point pairs;
and the processing module is used for performing parameter calculation based on each key feature point pair, the conversion point cloud data and the equipment calibration external parameters to determine target conversion parameters, and determining a target map true value based on the target conversion parameters.
In addition, in order to achieve the above object, the present invention also proposes a map true value generation apparatus including: a memory, a processor, and a map truth generation program stored on the memory and executable on the processor, the map truth generation program configured to implement the steps of the map truth generation method as described above.
In addition, in order to achieve the above object, the present invention also proposes a storage medium having stored thereon a map truth value generation program which, when executed by a processor, implements the steps of the map truth value generation method as described above.
According to the invention, view cone interception is performed on map point cloud data to obtain intercepted point cloud data within the field of view of the equipment, wherein the map point cloud data contains object texture information; a projection image is generated based on the scene image and the conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system; feature point extraction is performed on the projection image and the scene image to determine a plurality of key feature point pairs; and parameter calculation is performed based on each key feature point pair, the conversion point cloud data and the equipment calibration external parameters to determine target conversion parameters, based on which a target map true value is determined. Because the projection image is generated from the scene image and from point cloud data that carries texture information and has been expressed in the equipment space coordinate system, and because the target conversion parameters are computed from the key feature point pairs together with the conversion point cloud data and the equipment calibration external parameters, the accuracy and efficiency of map true value generation are improved. The method achieves full scene coverage, has good generality, can generate accurate true value data in a streamlined, batch manner, and meets the demand for true value production under airborne large-field-of-view, long-range conditions.
Drawings
FIG. 1 is a schematic diagram of a map truth generating apparatus for a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart of a map truth generating method according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of map point cloud data according to an embodiment of the present invention;
FIG. 4 is a diagram showing effects of an embodiment of a map truth generating method according to the present invention;
FIG. 5 is a schematic view of a projection image of an embodiment of a map truth generating method according to the present invention;
FIG. 6 is a flowchart illustrating an overall process of an embodiment of a map truth generating method according to the present invention;
FIG. 7 is a flowchart of a map truth generating method according to a second embodiment of the present invention;
FIG. 8 is a flowchart of a map truth generating method according to a third embodiment of the present invention;
FIG. 9 is a schematic diagram of feature point matching according to an embodiment of the map truth-value generation method of the present invention;
FIG. 10 is a schematic diagram of a culling effect of an embodiment of a map truth generating method according to the present invention;
FIG. 11 is a schematic diagram illustrating background filtering according to an embodiment of the map truth generation method of the present invention;
FIG. 12 is a schematic diagram illustrating the transformation of parameters according to an embodiment of the map truth generation method of the present invention;
FIG. 13 is a diagram illustrating a data overlay effect according to an embodiment of the map truth generation method of the present invention;
fig. 14 is a block diagram showing a map truth generating apparatus according to a first embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic diagram of a map truth value generating device of a hardware running environment according to an embodiment of the present invention.
As shown in fig. 1, the map truth generating apparatus may include: a processor 1001, such as a central processing unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a display and an input unit such as a keyboard, and may optionally further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed random access memory (RAM) or a stable non-volatile memory (NVM), such as a disk memory. Optionally, the memory 1005 may also be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the structure shown in fig. 1 does not constitute a limitation of the map truth generating apparatus, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in fig. 1, the memory 1005, as a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a map truth value generation program.
In the map truth generating apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with a user. The map truth value generating apparatus of the present invention invokes, through the processor 1001, the map truth value generation program stored in the memory 1005, and executes the map truth value generation method provided by the embodiments of the present invention.
Referring to fig. 2, fig. 2 is a schematic flow chart of a map truth value generation method according to a first embodiment of the present invention.
In this embodiment, the map truth value generation method includes the following steps:
step S10: and intercepting the map point cloud data by using a cone to obtain intercepted point cloud data under the field of view of the device, wherein the map point cloud data contains object texture information.
It should be noted that, the execution body of the embodiment is a map truth value generating device, where the map truth value generating device has functions of data processing, data communication, program running, etc., and the map truth value generating device may be an integrated controller, a control computer, etc., or may be other devices with similar functions, which is not limited in this embodiment.
It can be understood that point cloud data obtained by a laser radar in the prior art has a low point density, generally below 20 pts/m² at a distance of 100 meters, and a small coverage area, with a vertical FOV generally below 60 degrees; meanwhile, because the point cloud lacks texture information, the projection accuracy of the point cloud cannot be improved through post-processing. The current technology therefore cannot meet the requirement of long-range truth production with a large field of view in the air. To solve these problems and make the truth data generation process streamlined and batched, the map truth generation method of this embodiment is provided.
In a specific implementation, three-dimensional map point cloud data is acquired by mapping equipment. The map point cloud data carries texture information of the surfaces of the various objects in the scene, which is the object texture information; the visualization effect of the map point cloud data is shown in fig. 3.
It should be noted that, because the map point cloud data contains points that are not within the view of the image acquisition equipment, view cone interception is performed on the map point cloud data according to the view range of the image acquisition equipment, so as to obtain the points of the map point cloud data that lie within the view of the image acquisition equipment; these points constitute the intercepted point cloud data within the field of view of the equipment. In this embodiment, the image acquisition equipment may be a binocular camera or another device, which this embodiment does not limit.
Step S20: generating a projection image based on the scene image and the conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system.
It should be noted that, in this embodiment, the equipment space coordinate system refers to the space coordinate system of the image acquisition equipment. The intercepted point cloud data is converted into the equipment space coordinate system and subjected to grid processing to obtain three-channel floating-point point cloud data; the display effect of each channel of this data is shown in fig. 4. In this embodiment, the three-channel floating-point point cloud data of the intercepted point cloud data in the equipment space coordinate system is the conversion point cloud data; alternatively, the point cloud data obtained directly after converting the intercepted point cloud data into the equipment space coordinate system may be used as the conversion point cloud data.
It can be understood that the scene image refers to the binocular image after image preprocessing, the binocular image being acquired by the image acquisition equipment. The conversion point cloud data is projected onto the scene image according to the internal parameters of the image acquisition equipment to obtain the projection image, and the corresponding texture information is added to the projection image. For example, a scene image and the corresponding projection image are shown in fig. 5, with the scene image on the left and the projection image on the right.
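For illustration, the following is a minimal sketch of this projection step, assuming a pinhole model with the internal parameter matrix K already known; the function and variable names (project_to_image, points_cam, intensities) are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch: project conversion point cloud data (already expressed in
# the equipment space coordinate system) onto the scene image plane with the
# internal parameters K; nearer points win via a simple z-buffer.
import numpy as np

def project_to_image(points_cam, K, width, height):
    """Return pixel coordinates, depths and original indices of the points
    that lie in front of the camera and inside the image bounds."""
    idx = np.flatnonzero(points_cam[:, 2] > 0)           # points in front of the camera
    X, Y, Z = points_cam[idx, 0], points_cam[idx, 1], points_cam[idx, 2]
    u = np.round(K[0, 0] * X / Z + K[0, 2]).astype(int)  # u = fx * X / Z + cx
    v = np.round(K[1, 1] * Y / Z + K[1, 2]).astype(int)  # v = fy * Y / Z + cy
    inb = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    return u[inb], v[inb], Z[inb], idx[inb]

def render_projection(points_cam, intensities, K, width, height):
    """Build the projection image carrying the point cloud's texture values
    (intensities: (N,) array aligned with points_cam)."""
    u, v, z, idx = project_to_image(points_cam, K, width, height)
    img = np.zeros((height, width), dtype=np.float32)
    zbuf = np.full((height, width), np.inf, dtype=np.float32)
    for ui, vi, zi, ti in zip(u, v, z, intensities[idx]):
        if zi < zbuf[vi, ui]:                            # keep the nearest point per pixel
            zbuf[vi, ui], img[vi, ui] = zi, ti
    return img
```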
In a specific implementation, in order to obtain accurate conversion point cloud data, before the generating a projection image based on the scene image and the conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system, the method further includes: acquiring navigation track data and odometer data; calibrating external parameters between the navigation equipment and the image acquisition equipment according to the navigation track data and the odometer data to obtain the equipment calibration external parameters; performing coordinate transformation on the map point cloud data according to the equipment calibration external parameters to obtain initial point cloud data in the equipment space coordinate system; and performing grid processing on the initial point cloud data to obtain the conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system.
In this embodiment, navigation equipment and image acquisition equipment are mounted on the unmanned aerial vehicle at the same time, and the external parameter calibration between the two is completed based on the navigation track data corresponding to the navigation equipment and the visual odometer data corresponding to the image acquisition equipment; the resulting calibrated external parameters between the navigation equipment and the image acquisition equipment are the equipment calibration external parameters.
It can be understood that the map point cloud data is converted into the equipment space coordinate system based on the equipment calibration external parameters, the coordinate-converted map point cloud data being the initial point cloud data, and grid processing is performed on the initial point cloud data to obtain the corresponding three-channel floating-point point cloud data. In this embodiment, the three-channel floating-point point cloud data corresponding to the initial point cloud data is the conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system.
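As a concrete illustration of the coordinate transformation and grid processing described above, here is a minimal sketch; it assumes the equipment calibration external parameters are given as a rotation matrix R and translation vector t, and that the three channels of the output grid store the per-pixel (X, Y, Z) device-frame coordinates. All names are illustrative assumptions.

```python
# Minimal sketch: transform map points into the equipment space coordinate
# system with the calibration external parameters, then grid them into an
# H x W x 3 floating-point image (channels = X, Y, Z), keeping the nearest
# point per cell.
import numpy as np

def map_to_device(points_map, R, t):
    """P_dev = R @ P_map + t, applied row-wise to an (N, 3) array."""
    return points_map @ R.T + t

def rasterize_three_channel(points_dev, K, width, height):
    grid = np.zeros((height, width, 3), dtype=np.float32)
    zbuf = np.full((height, width), np.inf, dtype=np.float32)
    for X, Y, Z in points_dev:
        if Z <= 0:
            continue                                     # skip points behind the camera
        u = int(round(K[0, 0] * X / Z + K[0, 2]))
        v = int(round(K[1, 1] * Y / Z + K[1, 2]))
        if 0 <= u < width and 0 <= v < height and Z < zbuf[v, u]:
            zbuf[v, u] = Z
            grid[v, u] = (X, Y, Z)
    return grid
```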
Step S30: performing feature point extraction on the projection image and the scene image to determine a plurality of key feature point pairs.
It should be noted that key points are extracted from the projection image and the scene image and matched to obtain a plurality of matched initial feature point pairs. Because these initial feature point pairs are numerous and contain mismatches, they are screened to obtain a small number of correctly matched feature point pairs that can be used for the subsequent conversion parameter calculation. In this embodiment, the feature point pairs usable for the subsequent conversion parameter calculation are the key feature point pairs.
Step S40: performing parameter calculation based on each key feature point pair, the conversion point cloud data and the equipment calibration external parameters to determine target conversion parameters, and determining a target map true value based on the target conversion parameters.
The target conversion parameters refer to the accurate conversion parameters from the map point cloud data to the equipment space coordinate system. The projection image is converted based on these accurate conversion parameters, background point filtering is performed on it, and the resulting data is the target map true value.
It can be understood that, as shown in fig. 6, map point cloud data with a large field of view, long range, high density and high precision is obtained by mapping equipment, providing a basis for the subsequent production of large-field-of-view, long-range, high-density truth data. Rasterized point cloud heterologous image data is then obtained from the texture information in the map point cloud data, combined with the conversion relationship between the map and the camera and with the scene image. Finally, the problem of accurately aligning the map with the image is solved by a matching algorithm, and background filtering is applied to the projected depth truth data to remove abnormal depth values, so that the final map true value is output.
In this embodiment, view cone interception is performed on map point cloud data to obtain intercepted point cloud data within the field of view of the equipment, wherein the map point cloud data contains object texture information; a projection image is generated based on the scene image and the conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system; feature point extraction is performed on the projection image and the scene image to determine a plurality of key feature point pairs; and parameter calculation is performed based on each key feature point pair, the conversion point cloud data and the equipment calibration external parameters to determine target conversion parameters, based on which a target map true value is determined. Because the projection image is generated from the scene image and from point cloud data that carries texture information and has been expressed in the equipment space coordinate system, and because the target conversion parameters are computed from the key feature point pairs together with the conversion point cloud data and the equipment calibration external parameters, the accuracy and efficiency of map true value generation are improved; the method achieves full scene coverage, has good generality, can generate accurate true value data in a streamlined, batch manner, and meets the demand for true value production under airborne large-field-of-view, long-range conditions.
Referring to fig. 7, fig. 7 is a flowchart illustrating a map truth value generating method according to a second embodiment of the present invention.
Based on the above first embodiment, in the map true value generation method of this embodiment, the step S10 includes:
step S11: and acquiring the initial binocular image and pose information of the navigation equipment.
It should be noted that, the initial binocular image refers to an unprocessed binocular image acquired by the image acquisition device, and pose information of the navigation device includes, but is not limited to, position information and pose information of the navigation device.
Step S12: and performing image preprocessing on the initial binocular image to obtain a scene image.
It should be noted that, the image preprocessing includes, but is not limited to, performing de-distortion and stereo correction on an initial binocular image, and the initial binocular image after the image preprocessing is a scene image.
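A minimal sketch of this preprocessing with OpenCV follows, assuming the stereo calibration (K1, D1, K2, D2, R, T) between the two cameras is known; the function name and variables are illustrative assumptions, not from the patent.

```python
# Minimal sketch: undistort and stereo-rectify an initial binocular image
# pair to obtain the scene images.
import cv2

def preprocess_stereo(img_l, img_r, K1, D1, K2, D2, R, T):
    size = (img_l.shape[1], img_l.shape[0])  # (width, height)
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    map1l, map2l = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    map1r, map2r = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, map1l, map2l, cv2.INTER_LINEAR)  # undistorted + rectified
    rect_r = cv2.remap(img_r, map1r, map2r, cv2.INTER_LINEAR)
    return rect_l, rect_r
```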
Step S13: performing view cone interception on the map point cloud data according to the pose information and the scene image to obtain intercepted point cloud data within the field of view of the equipment.
It should be noted that the map point cloud data is intercepted with the view cone using the pose information of the navigation equipment and the image parameters corresponding to the scene image, so as to obtain the intercepted point cloud data within the field of view of the equipment. Further, to ensure the accuracy of the interception process, the performing view cone interception on the map point cloud data according to the pose information and the scene image to obtain the intercepted point cloud data within the field of view of the equipment includes: determining a target view cone and view cone parameters of the target view cone according to the pose information and the image parameters corresponding to the scene image; determining a spatial relationship between the map point cloud data and the target view cone according to the view cone parameters; and intercepting the map point cloud data according to the spatial relationship between the map point cloud data and the target view cone to obtain the intercepted point cloud data within the field of view of the equipment.
It can be appreciated that the view cone parameters corresponding to the target view cone can be calculated based on the pose information of the navigation equipment and the image parameters corresponding to the scene image. The view cone parameters include, but are not limited to, the field angle, position and direction of the view cone, and describe the observation range and observation direction of the image acquisition equipment.
In a specific implementation, the spatial relationship between the map point cloud data and the target view cone is determined based on the view cone parameters. This spatial relationship reflects whether a point of the map point cloud data lies inside the target view cone; a point inside the target view cone lies within the view of the image acquisition equipment, and such points are retained, yielding the intercepted point cloud data within the field of view of the equipment.
It should be noted that whether a point of the map point cloud data lies inside the target view cone can be determined by a volume test. For a tetrahedron, let v⃗₁, v⃗₂, v⃗₃ be the vectors from its apex to the three points of its base, so that its volume is |(v⃗₁ × v⃗₂) · v⃗₃| / 6. For the point under test, let Sᵢ be the base area of the small rectangular pyramid formed by the point and the i-th face of the target view cone and hᵢ its height, so that Vᵢ = (1/3)·Sᵢ·hᵢ; let V_sum = Σᵢ Vᵢ be the total volume enclosed by the point under test and the five vertices of the view cone; let V_cone = (1/3)·S·h be the volume of the view cone itself, where S is the base area of the view cone and h is its height; and let σ be an error value. When the volume enclosed by the point under test and the view cone vertices matches the view cone volume within the allowable error, the point is judged to lie inside the view cone and is retained; otherwise it is rejected as an exterior point. The decision condition is |V_sum − V_cone| ≤ σ.
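A minimal sketch of this volume test follows. It models the view cone as a rectangular pyramid (apex at the camera, four far-plane corners given in order) and uses the scalar triple product for tetrahedron volumes; the tolerance value and all names are illustrative assumptions.

```python
# Minimal sketch of the volume-based inside/outside test described above.
import numpy as np

def tetra_volume(a, b, c, d):
    """Volume of the tetrahedron (a, b, c, d) via the scalar triple product."""
    return abs(np.dot(np.cross(b - a, c - a), d - a)) / 6.0

def pyramid_volume(apex, base):
    """Volume of a pyramid over a planar polygonal base, by fanning the base
    into triangles and summing the resulting tetrahedra."""
    base = [np.asarray(p) for p in base]
    return sum(tetra_volume(base[0], base[i], base[i + 1], np.asarray(apex))
               for i in range(1, len(base) - 1))

def inside_view_cone(point, apex, far_corners, sigma=1e-6):
    """Keep a map point if the volumes of the five pyramids it forms with the
    view cone faces sum to the view cone volume within tolerance sigma."""
    c0, c1, c2, c3 = [np.asarray(c) for c in far_corners]
    faces = [
        [apex, c0, c1], [apex, c1, c2], [apex, c2, c3], [apex, c3, c0],  # side faces
        [c0, c1, c2, c3],                                                # far plane
    ]
    v_sum = sum(pyramid_volume(point, f) for f in faces)
    v_cone = pyramid_volume(apex, [c0, c1, c2, c3])
    return abs(v_sum - v_cone) <= sigma

# Usage: kept = [p for p in points if inside_view_cone(p, apex, corners)]
```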
In this embodiment, the initial binocular image and the pose information of the navigation equipment are acquired; image preprocessing is performed on the initial binocular image to obtain the scene image; and view cone interception is performed on the map point cloud data according to the pose information and the scene image to obtain the intercepted point cloud data within the field of view of the equipment. This ensures accuracy when intercepting the point cloud data and lays a foundation for the subsequent truth generation.
Referring to fig. 8, fig. 8 is a flowchart illustrating a map truth generating method according to a third embodiment of the present invention.
Based on the above first embodiment, in the map true value generation method of this embodiment, the step S40 includes:
step S41: and determining three-dimensional coordinates of each key feature point on the projection image under the equipment space coordinate system according to each key feature point pair and the conversion point cloud data.
It should be noted that, based on the plurality of key feature point pairs and the conversion point cloud data, a mapping relationship between the feature point positions in the projection image and the three-channel floating point cloud data may be obtained, so as to determine the three-dimensional coordinates of each two-dimensional matching point in the projection image under the equipment space coordinate system. Each two-dimensional matching point on the projection image is the key feature point on the projection image.
Step S42: performing pose calculation according to the three-dimensional coordinates of each key feature point on the projection image in the equipment space coordinate system and each key feature point pair, and determining conversion correction parameters.
It should be noted that, based on the three-dimensional coordinates of each key feature point on the projection image in the equipment space coordinate system and each key feature point pair, the 2D feature points of the scene image and the 3D feature points obtained by mapping from the projection image can be determined. PnP (Perspective-n-Point) calculation is performed on these 2D and 3D feature points to obtain the conversion correction parameters from the point cloud data in the equipment space coordinate system to the projection image.
It will be appreciated that PnP (Perspective-n-Point) computation is a method for computing camera pose, typically the position and attitude of the camera in the world coordinate system. The input of a PnP algorithm typically includes several known points in three-dimensional space (such as coordinates in the world coordinate system) and the corresponding two-dimensional coordinates of these points in the camera image; from the spatial coordinates of these known points and their projected coordinates in the camera image, the PnP algorithm can calculate the position and attitude of the camera. In this embodiment, the PnP algorithm may be one of the EPnP (Efficient Perspective-n-Point) algorithm, the OPnP (Optimal Perspective-n-Point) algorithm, and the DLS (Direct Least Squares) algorithm.
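As an illustration, a minimal OpenCV sketch of this step follows; it wraps EPnP in a RANSAC loop for robustness against residual mismatches, which is a common choice rather than something specified by the patent, and all names are illustrative assumptions.

```python
# Minimal sketch of the PnP step. pts3d: device-frame 3D coordinates of the
# key feature points recovered from the conversion point cloud data; pts2d:
# the matching 2D key feature points on the scene image.
import cv2
import numpy as np

def solve_correction(pts3d, pts2d, K, dist=None):
    """Estimate the conversion correction parameters (R, t) with EPnP + RANSAC."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d, np.float32), np.asarray(pts2d, np.float32),
        K, dist, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec.reshape(3)
```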
Step S43: performing parameter calculation according to the conversion correction parameters and the equipment calibration external parameters, and determining target conversion parameters.
By superimposing the equipment calibration external parameters between the navigation equipment and the image acquisition equipment on the conversion correction parameters, the accurate conversion parameters from the point cloud data to the equipment space coordinate system can be obtained; these accurate conversion parameters are the target conversion parameters.
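Expressed as 4×4 homogeneous matrices, this superposition is a simple matrix product. The following minimal sketch assumes both transforms map in the same direction (from the map frame toward the equipment space coordinate system), which is an illustrative assumption.

```python
# Minimal sketch: compose the conversion correction parameters with the
# equipment calibration external parameters as homogeneous transforms.
import numpy as np

def homogeneous(R, t):
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

# Target conversion parameters: correction applied on top of the calibration.
# T_target = homogeneous(R_corr, t_corr) @ homogeneous(R_calib, t_calib)
```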
It can be appreciated that, in order to obtain accurate key feature point pairs, further, the performing feature point extraction on the projection image and the scene image to determine a plurality of key feature point pairs includes: performing key point extraction on the projection image and the scene image to determine a plurality of first feature point pairs; applying rule constraints to the plurality of first feature point pairs according to the equipment calibration external parameters to screen a plurality of second feature point pairs out of the plurality of first feature point pairs; and screening the plurality of second feature point pairs according to a preset grid division mode to determine the plurality of key feature point pairs.
In a specific implementation, a FAST key point extraction algorithm is adopted to extract key points from the scene image and the projection image, and an SAR heterologous image matching algorithm is adopted to determine a plurality of matched initial feature point pairs; these matched initial feature point pairs are the first feature point pairs.
It should be noted that the SAR (synthetic aperture radar) heterologous image matching algorithm trains feature point pairs of optical images and SAR images with a CNN network to obtain a pre-trained parameter model, then computes feature descriptors for the key point data with the pre-trained model, and finally obtains a preliminary feature point matching result by brute-force matching. An effect diagram of feature point detection and matching on heterologous images is shown in fig. 9.
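The CNN-learned descriptors for heterologous matching are model-specific; purely as a stand-in to show the pipeline shape (FAST key points, descriptor computation, brute-force matching), here is a minimal OpenCV sketch that uses ORB descriptors in place of the pre-trained model.

```python
# Minimal sketch: FAST key points + (stand-in) descriptors + brute-force
# matching between the scene image and the projection image.
import cv2

def match_keypoints(scene_img, proj_img, n_best=500):
    fast = cv2.FastFeatureDetector_create()
    orb = cv2.ORB_create()
    kp1 = fast.detect(scene_img, None)
    kp2 = fast.detect(proj_img, None)
    kp1, des1 = orb.compute(scene_img, kp1)   # descriptors at FAST key points
    kp2, des2 = orb.compute(proj_img, kp2)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)  # brute-force matching
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)[:n_best]
    # Return the first feature point pairs as ((u, v) scene, (u, v) projection).
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]
```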
It can be understood that rule constraints are applied to the plurality of first feature point pairs according to the equipment calibration external parameters between the navigation equipment and the image acquisition equipment, and mismatched feature point pairs are removed, thereby screening a plurality of second feature point pairs out of the plurality of first feature point pairs.
In a specific implementation, to address the problem that the feature point pairs are overly concentrated, which is inconvenient for the subsequent optimization calculation, this embodiment adopts a preset grid division mode to further remove second feature point pairs that lie too close to one another, thereby obtaining the plurality of key feature point pairs. The distribution of the feature points after removal by the preset grid division mode is shown in fig. 10.
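A minimal sketch of such grid-based screening follows, assuming the pairs arrive sorted best-first and that at most one pair is kept per grid cell; the cell size is an illustrative assumption.

```python
# Minimal sketch of the preset grid division screening: keep at most one
# pair per cell so the key feature point pairs spread evenly over the image.
def grid_prune(pairs, cell=64):
    """pairs: list of ((u1, v1), (u2, v2)) sorted best-first."""
    kept, seen = [], set()
    for p_scene, p_proj in pairs:
        cell_id = (int(p_scene[0]) // cell, int(p_scene[1]) // cell)
        if cell_id not in seen:          # first (best) pair claims the cell
            seen.add(cell_id)
            kept.append((p_scene, p_proj))
    return kept
```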
It should be noted that, to ensure the accuracy of the map true value, abnormal values are removed. Further, the determining a target map true value based on the target conversion parameters includes: performing image conversion on the projection image according to the target conversion parameters to obtain a map true value depth image; performing background point filtering on the map true value depth image; and obtaining the target map true value according to the filtering result.
It can be understood that the projection image is converted according to the target conversion parameters to obtain the map true value depth image. To address the background-penetration problem in the map true value depth image, an object-occlusion visibility analysis algorithm from computer graphics is adopted to perform background point filtering on the two-dimensional map true value depth image, and the final target map true value is output. In this embodiment, as shown in fig. 11, the left side is an image without background point filtering and the right side is the filtered image.
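The patent does not detail its occlusion-visibility analysis; as a simple stand-in, the following sketch flags pixels whose depth exceeds the local minimum depth in a small window by a margin and invalidates them. Window size, margin, and all names are illustrative assumptions.

```python
# Minimal sketch: flag and drop background leak-through pixels in the map
# true value depth image using a local minimum-depth test.
import numpy as np
from scipy.ndimage import minimum_filter

def filter_background_points(depth, win=7, margin=2.0):
    """depth: H x W map true value depth image with 0 for empty pixels."""
    d = np.where(depth > 0, depth, np.inf)   # ignore empty pixels in the min filter
    local_min = minimum_filter(d, size=win)
    leaked = (depth > 0) & (depth > local_min + margin)
    out = depth.copy()
    out[leaked] = 0                          # invalidate background leak-through
    return out
```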
In a specific implementation, the target correction parameter calculation process is as follows: P_w = (X, Y, Z)ᵀ is a spatial point coordinate, P_c = R·P_w + t is its coordinate in the equipment space coordinate system, p = (u, v)ᵀ is the pixel coordinate of the spatial point mapped onto the image, ω is the depth value of the current point in the camera coordinate system, f_x and f_y are the equivalent focal lengths of the camera in the X and Y directions, (c_x, c_y) is the principal point coordinate (i.e. the image center point coordinate), and r_11 … r_33 are the components of the rotation matrix R. The projection then satisfies ω·(u, v, 1)ᵀ = K·(R·P_w + t), with K = [f_x 0 c_x; 0 f_y c_y; 0 0 1], where K is called the internal parameter of the image acquisition equipment, and R and t are the external parameters of the image acquisition equipment to be solved. The whole transformation is shown schematically in fig. 12, and the superposition effect of the map data and the scene image data obtained after correction according to the external parameters is shown in fig. 13.
In this embodiment, the three-dimensional coordinates of each key feature point on the projection image in the equipment space coordinate system are determined according to each key feature point pair and the conversion point cloud data; pose calculation is performed according to these three-dimensional coordinates and each key feature point pair to determine the conversion correction parameters; and parameter calculation is performed according to the conversion correction parameters and the equipment calibration external parameters to determine the target conversion parameters. In this way, based on the three-dimensional coordinates of the key feature points in the equipment space coordinate system combined with the equipment calibration external parameters, the accuracy of the target conversion parameter calculation can be ensured, which in turn ensures the accuracy of the subsequent map true value generation.
In addition, the embodiment of the invention also provides a storage medium, wherein the storage medium stores a map truth value generating program, and the map truth value generating program realizes the steps of the map truth value generating method when being executed by a processor.
Referring to fig. 14, fig. 14 is a block diagram showing the structure of a map truth generating apparatus according to the first embodiment of the present invention.
As shown in fig. 14, the map truth value generating apparatus provided in the embodiment of the present invention includes:
the intercepting module 10 is configured to intercept the map point cloud data according to a cone, and obtain intercepted point cloud data under the field of view of the device, where the map point cloud data includes object texture information.
The generating module 20 is configured to generate a projection image based on the converted point cloud data of the truncated point cloud data in the device space coordinate system and the scene image.
And the extracting module 30 is used for extracting the characteristic points according to the projection image and the scene image, and determining a plurality of key characteristic point pairs.
And the processing module 40 is configured to perform parameter calculation based on each key feature point pair, the conversion point cloud data and the equipment calibration external parameter, determine a target conversion parameter, and determine a target map true value based on the target conversion parameter.
In this embodiment, view cone interception is performed on map point cloud data to obtain intercepted point cloud data within the field of view of the equipment, wherein the map point cloud data contains object texture information; a projection image is generated based on the scene image and the conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system; feature point extraction is performed on the projection image and the scene image to determine a plurality of key feature point pairs; and parameter calculation is performed based on each key feature point pair, the conversion point cloud data and the equipment calibration external parameters to determine target conversion parameters, based on which a target map true value is determined. Because the projection image is generated from the scene image and from point cloud data that carries texture information and has been expressed in the equipment space coordinate system, and because the target conversion parameters are computed from the key feature point pairs together with the conversion point cloud data and the equipment calibration external parameters, the accuracy and efficiency of map true value generation are improved; the apparatus achieves full scene coverage, has good generality, can generate accurate true value data in a streamlined, batch manner, and meets the demand for true value production under airborne large-field-of-view, long-range conditions.
In an embodiment, the intercepting module 10 is further configured to acquire an initial binocular image and pose information of the navigation equipment;
performing image preprocessing on the initial binocular image to obtain a scene image;
and performing view cone interception on the map point cloud data according to the pose information and the scene image to obtain intercepted point cloud data within the field of view of the equipment.
In an embodiment, the intercepting module 10 is further configured to determine a target view cone and view cone parameters of the target view cone according to the pose information and the image parameters corresponding to the scene image;
determining a spatial relationship between the map point cloud data and the target view cone according to the view cone parameters;
and intercepting the map point cloud data according to the spatial relationship between the map point cloud data and the target view cone to obtain the intercepted point cloud data within the field of view of the equipment.
In one embodiment, the generating module 20 is further configured to acquire navigation track data and odometer data;
calibrating external parameters between the navigation equipment and the image acquisition equipment according to the navigation track data and the odometer data to obtain the equipment calibration external parameters;
performing coordinate transformation on the map point cloud data according to the equipment calibration external parameters to obtain initial point cloud data in the equipment space coordinate system;
and performing grid processing on the initial point cloud data to obtain the conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system.
In an embodiment, the extracting module 30 is further configured to perform key point extraction on the projection image and the scene image to determine a plurality of first feature point pairs;
applying rule constraints to the plurality of first feature point pairs according to the equipment calibration external parameters to screen a plurality of second feature point pairs out of the plurality of first feature point pairs;
and screening the plurality of second feature point pairs according to a preset grid division mode to determine the plurality of key feature point pairs.
In an embodiment, the processing module 40 is further configured to determine three-dimensional coordinates, in the equipment space coordinate system, of each key feature point on the projection image according to each key feature point pair and the conversion point cloud data;
performing pose calculation according to the three-dimensional coordinates of each key feature point on the projection image in the equipment space coordinate system and each key feature point pair to determine conversion correction parameters;
and performing parameter calculation according to the conversion correction parameters and the equipment calibration external parameters to determine the target conversion parameters.
In an embodiment, the processing module 40 is further configured to perform image conversion on the projection image according to the target conversion parameters to obtain a map true value depth image;
performing background point filtering on the map true value depth image;
and obtaining a true value of the target map according to the filtering result.
It should be understood that the foregoing is illustrative only and is not limiting, and that in specific applications, those skilled in the art may set the invention as desired, and the invention is not limited thereto.
It should be understood that, although the steps in the flowcharts of the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages that are not necessarily completed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least a portion of the sub-steps or stages of other steps.
It should be noted that the above-described working procedure is merely illustrative, and does not limit the scope of the present invention, and in practical application, a person skilled in the art may select part or all of them according to actual needs to achieve the purpose of the embodiment, which is not limited herein.
Furthermore, it should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to a person skilled in the art that the above embodiment methods may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware, but in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially, or in the part contributing to the prior art, in the form of a software product stored in a storage medium (e.g. Read-Only Memory (ROM)/RAM, magnetic disk, optical disk) and including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (10)

1. A map true value generation method, characterized in that the map true value generation method comprises:
performing view cone interception on map point cloud data to obtain intercepted point cloud data within the field of view of the equipment, wherein the map point cloud data contains object texture information;
generating a projection image based on a scene image and conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system;
performing feature point extraction on the projection image and the scene image to determine a plurality of key feature point pairs;
and performing parameter calculation based on each key feature point pair, the conversion point cloud data and the equipment calibration external parameters to determine target conversion parameters, and determining a target map true value based on the target conversion parameters.
2. The map true value generation method of claim 1, wherein the performing view cone interception on the map point cloud data to obtain intercepted point cloud data within the field of view of the equipment comprises:
acquiring an initial binocular image and pose information of the navigation equipment;
performing image preprocessing on the initial binocular image to obtain the scene image;
and performing view cone interception on the map point cloud data according to the pose information and the scene image to obtain the intercepted point cloud data within the field of view of the equipment.
3. The map true value generation method of claim 2, wherein the performing view cone interception on the map point cloud data according to the pose information and the scene image to obtain the intercepted point cloud data within the field of view of the equipment comprises:
determining a target view cone and view cone parameters of the target view cone according to the pose information and image parameters corresponding to the scene image;
determining a spatial relationship between the map point cloud data and the target view cone according to the view cone parameters;
and intercepting the map point cloud data according to the spatial relationship between the map point cloud data and the target view cone to obtain the intercepted point cloud data within the field of view of the equipment.
4. The map true value generation method of claim 1, wherein before the generating a projection image based on a scene image and conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system, the method further comprises:
acquiring navigation track data and odometer data;
calibrating external parameters between the navigation equipment and the image acquisition equipment according to the navigation track data and the odometer data to obtain the equipment calibration external parameters;
performing coordinate transformation on the map point cloud data according to the equipment calibration external parameters to obtain initial point cloud data in the equipment space coordinate system;
and performing grid processing on the initial point cloud data to obtain the conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system.
5. The map true value generation method of claim 1, wherein the performing feature point extraction on the projection image and the scene image to determine a plurality of key feature point pairs comprises:
performing key point extraction on the projection image and the scene image to determine a plurality of first feature point pairs;
applying rule constraints to the plurality of first feature point pairs according to the equipment calibration external parameters to screen a plurality of second feature point pairs out of the plurality of first feature point pairs;
and screening the plurality of second feature point pairs according to a preset grid division mode to determine the plurality of key feature point pairs.
6. The map true value generation method of claim 1, wherein the performing parameter calculation based on each key feature point pair, the conversion point cloud data and the equipment calibration external parameters to determine target conversion parameters comprises:
determining three-dimensional coordinates, in the equipment space coordinate system, of each key feature point on the projection image according to each key feature point pair and the conversion point cloud data;
performing pose calculation according to the three-dimensional coordinates of each key feature point on the projection image in the equipment space coordinate system and each key feature point pair to determine conversion correction parameters;
and performing parameter calculation according to the conversion correction parameters and the equipment calibration external parameters to determine the target conversion parameters.
7. The map true value generation method of claim 1, wherein the determining a target map true value based on the target conversion parameters comprises:
performing image conversion on the projection image according to the target conversion parameters to obtain a map true value depth image;
performing background point filtering on the map true value depth image;
and obtaining the target map true value according to the filtering result.
8. A map true value generation apparatus, characterized in that the map true value generation apparatus comprises:
an intercepting module, configured to perform view cone interception on map point cloud data to obtain intercepted point cloud data within the field of view of the equipment, wherein the map point cloud data contains object texture information;
a generation module, configured to generate a projection image based on a scene image and conversion point cloud data of the intercepted point cloud data in the equipment space coordinate system;
an extraction module, configured to perform feature point extraction on the projection image and the scene image to determine a plurality of key feature point pairs;
and a processing module, configured to perform parameter calculation based on each key feature point pair, the conversion point cloud data and the equipment calibration external parameters to determine target conversion parameters, and to determine a target map true value based on the target conversion parameters.
9. A map true value generation device, characterized in that the device comprises: a memory, a processor, and a map true value generation program stored on the memory and executable on the processor, the map true value generation program being configured to implement the map true value generation method of any one of claims 1 to 7.
10. A storage medium having stored thereon a map true value generation program which, when executed by a processor, implements the map true value generation method of any one of claims 1 to 7.
CN202311793259.1A 2023-12-22 2023-12-22 Map true value generation method, device, equipment and storage medium Pending CN117635865A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311793259.1A CN117635865A (en) 2023-12-22 2023-12-22 Map true value generation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311793259.1A CN117635865A (en) 2023-12-22 2023-12-22 Map true value generation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117635865A true 2024-03-01

Family

ID=90020016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311793259.1A Pending CN117635865A (en) 2023-12-22 2023-12-22 Map true value generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117635865A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination