CN116645406A - Depth map generation method and device, computer readable storage medium and electronic equipment

Info

Publication number: CN116645406A
Application number: CN202310585457.2A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 上官王毅, 张云峰
Applicant and assignee (original and current): Shanghai Anting Horizon Intelligent Transportation Technology Co., Ltd.
Priority: CN202310585457.2A
Legal status: Pending
Prior art keywords: point cloud data, point, mapping, depth map

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds

Abstract

Embodiments of the present disclosure provide a depth map generation method and apparatus, a computer-readable storage medium, and an electronic device. The method includes: generating a first depth map based on a point cloud data set acquired from a target scene; mapping the first depth map into the camera coordinate system of a target camera based on the parameters of the target camera to obtain a second depth map; determining, in the second depth map, a mapping point set composed of the mapping points corresponding to the point cloud data in the point cloud data set; determining invalid points in the mapping point set based on the coordinate order of the point cloud data in the point cloud data set, and deleting the invalid points from the mapping point set; and determining, based on the mapping point set with the invalid points deleted, a third depth map that represents depth truth values under the camera coordinate system. The embodiments effectively reduce the noise of the depth truth values in the depth map and generate a high-quality depth map efficiently and at low cost.

Description

Depth map generation method and device, computer readable storage medium and electronic equipment
Technical Field
The disclosure relates to the technical field of computer vision, in particular to a depth map generation method, a depth map generation device, a computer readable storage medium and electronic equipment.
Background
Image depth estimation refers to estimating the depth of the scene in an image, i.e., the distance from each pixel in the image to the imaging plane of the camera. Training a depth estimation model by supervised deep learning is currently an important approach to image depth estimation.

To train a typical depth estimation model, depth truth values must first be acquired with a ranging sensor, commonly a lidar. However, because the camera and the lidar are mounted at different positions and the point cloud collected by the lidar is sparse, directly projecting the three-dimensional point cloud onto the image yields points that do not match the actual scene: for example, point cloud points behind an obstacle also appear on the image plane, so the depth truth values contain a large amount of erroneous data (i.e., noise). How to reduce the noise in the depth truth values is therefore a problem that currently needs to be addressed.
Disclosure of Invention
To solve the above technical problem, embodiments of the present disclosure provide a depth map generation method and apparatus, a computer-readable storage medium, and an electronic device, addressing the problem of how to efficiently reduce the noise contained in the depth truth values of a depth image.
An embodiment of the present disclosure provides a depth map generation method, including: generating a first depth map based on a point cloud data set acquired from a target scene; mapping the first depth map into the camera coordinate system of a target camera based on the parameters of the target camera to obtain a second depth map; determining, in the second depth map, a mapping point set composed of the mapping points corresponding to the point cloud data in the point cloud data set; determining invalid points in the mapping point set based on the coordinate order of the point cloud data in the point cloud data set, and deleting the invalid points from the mapping point set; and determining, based on the mapping point set with the invalid points deleted, a third depth map that represents depth truth values under the camera coordinate system.
According to another aspect of the embodiments of the present disclosure, a depth map generating apparatus is provided, including: a generating module for generating a first depth map based on a point cloud data set acquired from a target scene; a mapping module for mapping the first depth map into the camera coordinate system of a target camera based on the parameters of the target camera to obtain a second depth map; a first determining module for determining, in the second depth map, a mapping point set composed of the mapping points corresponding to the point cloud data in the point cloud data set; a second determining module for determining invalid points in the mapping point set based on the coordinate order of the point cloud data in the point cloud data set and deleting the invalid points from the mapping point set; and a third determining module for determining, based on the mapping point set with the invalid points deleted, a third depth map that represents depth truth values under the camera coordinate system.
According to another aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, performs the depth map generation method described above.

According to another aspect of the embodiments of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing processor-executable instructions, the processor being configured to read the executable instructions from the memory and execute them to implement the depth map generation method described above.

According to another aspect of the embodiments of the present disclosure, a computer program product is provided, comprising computer program instructions that, when executed by a processor, perform the depth map generation method proposed by the present disclosure.
With the depth map generation method and apparatus, computer-readable storage medium, and electronic device provided by the embodiments of the present disclosure, a mapping point set corresponding to the point cloud data set is determined in the depth map under the camera coordinate system, invalid points are identified in the mapping point set based on the coordinate order of the point cloud data in the point cloud data set, and the invalid points are deleted from the mapping point set, yielding a low-noise depth map. The method requires neither complex image processing nor consistency checks of the point cloud's mapping points across multiple sensors: the coordinate order of the point cloud data alone is used to judge the validity of the spatial distribution of the points and to find the invalid points that are inconsistent with the camera's viewing direction under the camera coordinate system. This effectively reduces the noise of the depth truth values in the depth map and generates a high-quality depth map efficiently and at low cost.
The technical scheme of the present disclosure is described in further detail below through the accompanying drawings and examples.
Drawings
The above and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description of its embodiments with reference to the accompanying drawings. The drawings are included to provide a further understanding of the embodiments of the disclosure and constitute a part of this specification; they illustrate embodiments of the disclosure and, together with the description, serve to explain it without limiting it. In the drawings, like reference numerals generally denote like parts or steps.
FIG. 1 is a diagram of a system to which the present disclosure is applicable;
FIG. 2 is a flowchart of a depth map generation method provided by an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram of determining invalid mapping points under a camera coordinate system according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of a depth map generation method provided by another exemplary embodiment of the present disclosure;
FIG. 5 is a flowchart of a depth map generation method provided by another exemplary embodiment of the present disclosure;
FIG. 6A is a schematic diagram of the azimuth range of a point cloud acquisition device of an embodiment of the present disclosure divided into a plurality of angular regions;
FIG. 6B is a schematic diagram of the laser beams across the pitch angle range (vertical field of view) of a point cloud acquisition device of an embodiment of the present disclosure;
FIG. 7 is a flowchart of a depth map generation method provided by another exemplary embodiment of the present disclosure;
FIG. 8 is a flowchart of a depth map generation method provided by another exemplary embodiment of the present disclosure;
FIG. 9 is a flowchart of a depth map generation method provided by another exemplary embodiment of the present disclosure;
FIG. 10 is a schematic structural diagram of a depth map generating apparatus provided by an exemplary embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram of a depth map generating apparatus provided by another exemplary embodiment of the present disclosure;
FIG. 12 is a block diagram of an electronic device provided by an exemplary embodiment of the present disclosure.
Detailed Description
For the purpose of illustrating the present disclosure, exemplary embodiments are described in detail below with reference to the drawings. The described embodiments are obviously only some, not all, of the embodiments of the present disclosure, and it should be understood that the present disclosure is not limited by these exemplary embodiments.

It should be noted that the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
Summary of the application
To reduce the noise of the depth truth values in a depth map, current common approaches apply pooling operations or morphological methods to the image to remove point clouds that are inconsistent with the visual content; these methods remove noise poorly and cannot comprehensively detect and eliminate depth value noise. Consistency verification across multiple sensors can also reduce the noise of the depth truth values in the depth map, but this approach is costly to use and generally requires coupling the characteristics of the cameras.
Exemplary System
Fig. 1 illustrates an exemplary system architecture 100 in which a depth map generation method or depth map generation apparatus of embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include a terminal device 101, a network 102, a server 103, a point cloud acquisition device 104, and a camera 105.
Network 102 is a medium used to provide communication links between terminal device 101 and server 103. Network 102 may include various connection types such as wired, wireless communication links, or fiber optic cables, among others.
The point cloud acquisition device 104 is configured to perform point cloud acquisition on a target scene, so as to obtain a point cloud data set that represents a position of an object in the target scene. The point cloud acquisition device 104 may include, but is not limited to, at least one of the following: lidar, binocular stereo cameras, and the like. The point cloud collecting device 104 may be disposed at an arbitrary position, for example, the point cloud collecting device 104 may be disposed on a vehicle, and the target scene may be a scene of a road on which the vehicle travels, a parking lot, or the like. As another example, the point cloud acquisition device 104 may be disposed on an aircraft, and the target scene may be a scene in which the aircraft is flying.
The camera 105 is used for capturing images of the target scene, and the point cloud data set acquired by the point cloud acquisition device 104 can be mapped into the images captured by the camera 105.
Terminal device 101 may interact with server 103 via network 102 to receive or send messages, etc. Various applications, such as a monitoring-type application, a navigation-type application, and the like, may be installed on the terminal device 101.
The terminal device 101 may be various electronic devices including, but not limited to, mobile terminals such as in-vehicle terminals, mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), and the like, and fixed terminals such as digital televisions, desktop computers, and the like.
The server 103 may be a server providing various services, for example, a background server that generates a depth map by receiving a point cloud data set, a two-dimensional image, or the like uploaded by the terminal device 101.
It should be noted that, the depth map generating method provided by the embodiment of the present disclosure may be executed by the server 103 or may be executed by the terminal device 101, and accordingly, the depth map generating apparatus may be provided in the server 103 or may be provided in the terminal device 101.
It should be understood that the number of terminal devices, networks, servers, point cloud acquisition devices, and cameras in fig. 1 are merely illustrative. There may be any number of terminal devices, networks, servers, point cloud acquisition devices, and cameras, as desired for implementation. For example, in the case where the point cloud data set and the two-dimensional image do not need to be acquired from a remote location, the above system architecture may not include a network, but only include a terminal device, a point cloud acquisition device, and a camera.
Exemplary method
Fig. 2 is a flowchart of a depth map generation method provided by an exemplary embodiment of the present disclosure. This embodiment is applicable to an electronic device (such as the terminal device 101 or the server 103 shown in fig. 1). As shown in fig. 2, the method includes the following steps:
step 201, generating a first depth map based on a point cloud data set acquired by a target scene.
Each point cloud data item included in the point cloud data set generally includes coordinate values representing the position of a point in the target scene under the coordinate system of the point cloud acquisition device 104 shown in fig. 1 (i.e., the coordinate system whose origin is the position of the point cloud acquisition device 104). The three-dimensional coordinate values corresponding to a pixel in the first depth map are obtained from the point cloud data corresponding to that pixel. For example, a rectangular coordinate system with the position of the point cloud acquisition device 104 as origin includes three coordinate axes x, y, and z, where the z axis is the coordinate axis in the vertical direction, the x axis is the coordinate axis along the optical axis of the point cloud acquisition device, and the y axis is the coordinate axis in the horizontal direction. The three-dimensional coordinate value corresponding to a pixel in the first depth map is then an (x, y, z) coordinate value, where the x coordinate can serve as the depth value, representing the distance between the three-dimensional spatial point corresponding to the pixel and the point cloud acquisition device.
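To make step 201 concrete, the following sketch (not part of the original disclosure; the grid size, angular ranges, and spherical-projection layout are illustrative assumptions) bins each point's azimuth and pitch angle into an image grid and stores the x coordinate as the depth value:

```python
import numpy as np

def first_depth_map(points, h=128, w=1024,
                    pitch_range=(-25.0, 25.0), azim_range=(-60.0, 60.0)):
    """points: (N, 3) array of (x, y, z) in the point cloud device frame."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    azim = np.degrees(np.arctan2(y, x))                 # horizontal angle
    pitch = np.degrees(np.arctan2(z, np.hypot(x, y)))   # vertical angle
    col = ((azim - azim_range[0]) / (azim_range[1] - azim_range[0]) * (w - 1)).astype(int)
    row = ((pitch_range[1] - pitch) / (pitch_range[1] - pitch_range[0]) * (h - 1)).astype(int)
    keep = (col >= 0) & (col < w) & (row >= 0) & (row < h)
    depth = np.full((h, w), np.nan)                     # NaN marks empty pixels
    depth[row[keep], col[keep]] = x[keep]               # x serves as the depth value
    return depth
```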
Step 202, mapping the first depth map to the camera coordinate system of the target camera based on the parameters of the target camera, to obtain a second depth map.
The target camera may be the camera 105 shown in fig. 1. Because the camera 105 and the point cloud acquisition device 104 are mounted at different positions, the first depth map needs to be mapped into the camera coordinate system. In general, camera parameters include extrinsic parameters, which describe the mapping between points in the world coordinate system and points in the camera coordinate system. As an example, in this embodiment the extrinsic parameters of the point cloud acquisition device 104 may be used to map the pixels of the first depth map into the world coordinate system, and the extrinsic parameters of the target camera may then be used to map those world-coordinate points into the camera coordinate system of the target camera, yielding the second depth map.
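A minimal sketch of the coordinate chaining in step 202 is given below; the 4x4 homogeneous transforms and their direction convention are assumptions, since the disclosure does not fix one:

```python
import numpy as np

def lidar_points_to_camera(points_lidar, T_world_from_lidar, T_world_from_camera):
    """Map (N, 3) lidar-frame points into the target camera's frame by going
    through the world frame: lidar -> world -> camera."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_world = pts_h @ T_world_from_lidar.T                  # lidar extrinsics
    T_camera_from_world = np.linalg.inv(T_world_from_camera)  # invert camera extrinsics
    pts_camera = pts_world @ T_camera_from_world.T
    return pts_camera[:, :3]
```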
And 203, determining a mapping point set consisting of mapping points corresponding to the point cloud data included in the point cloud data set in the second depth map.
Since step 202 establishes the mapping between the pixels of the first depth map and the pixels of the second depth map, and the mapping between the point cloud data in the point cloud data set and the pixels of the first depth map is known, the mapping point set composed of the mapping points corresponding to the point cloud data can be determined in the second depth map.
Step 204, determining invalid points from the mapping point set based on the coordinate sequence of the point cloud data in the point cloud data set, and deleting the invalid points from the mapping point set.
Specifically, the coordinate order of the point cloud data reflects the spatial arrangement of the set of spatial points indicated by the point cloud data set. Because each spatial point represented by point cloud data lies on an object surface, if no object occludes a spatial point in the camera coordinate system, the coordinate order of the mapping points in the camera coordinate system is consistent with the coordinate order of the point cloud data in the point cloud data set. If occlusion occurs, the arrangement of the coordinates of occluded and non-occluded points becomes disordered, and the out-of-order mapping points can be determined to be invalid points.
As shown in fig. 3, O1 is the coordinate origin of the point cloud acquisition device and O2 is the coordinate origin of the target camera. 301 is an object in the target scene; points A, B, and C lie on object surfaces, and each corresponds to one point cloud data item. As the figure shows, in the coordinate system of the point cloud acquisition device, sorting the points by the pitch angle of their line to O1 from small to large gives the order C, B, A. In the camera coordinate system of the target camera, the line connecting point A to O2 passes through object 301, i.e., object 301 occludes point A, so the order changes: sorted by the pitch angle of the line to O2 from small to large, the order is C, A, B rather than C, B, A. Since the occlusion of A by object 301 is what disturbs the order, point A can be determined to be an invalid point.
Step 205, determining a third depth map representing depth truth values under the camera coordinate system based on the set of mapping points after the invalid points are deleted.
Specifically, the depth values of the pixels corresponding to the invalid points in the second depth map may be deleted or set to a preset depth value, yielding the third depth map. The depth value of each pixel in the third depth map is a depth truth value, which accurately reflects the distance between the target camera and the object corresponding to that pixel in the image captured by the target camera.
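The following sketch illustrates step 205 under the assumption that "deleting" an invalid point means overwriting its pixel with a preset sentinel depth (the sentinel value 0 is an illustrative choice):

```python
import numpy as np

def third_depth_map(second_depth, invalid_rc, fill_value=0.0):
    """second_depth: (H, W) depth image; invalid_rc: (N, 2) array of
    (row, col) pixel indices of the invalid mapping points."""
    truth = second_depth.copy()
    if len(invalid_rc):
        truth[invalid_rc[:, 0], invalid_rc[:, 1]] = fill_value  # erase invalid depths
    return truth
```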
The method provided by this embodiment of the present disclosure determines, in the depth map under the camera coordinate system, the mapping point set corresponding to the point cloud data set, identifies invalid points in that set based on the coordinate order of the point cloud data in the point cloud data set, and deletes them from the mapping point set, yielding a low-noise depth map. The method requires neither complex image processing nor consistency checks of the point cloud's mapping points across multiple sensors: the coordinate order of the point cloud data alone is used to judge the validity of the spatial distribution of the points and to find the invalid points that are inconsistent with the camera's viewing direction under the camera coordinate system. This effectively reduces the noise of the depth truth values in the depth map and generates a high-quality depth map efficiently and at low cost.
In some alternative implementations, as shown in fig. 4, step 204 includes:
step 2041, dividing the point cloud data set into at least two point cloud data subsets.
The point cloud data set may be divided in various ways. For example, the azimuth range of the point cloud acquisition device (i.e., its horizontal sweep range) may be divided into at least two angular regions, with the point cloud data corresponding to the points in each angular region forming one point cloud data subset. Alternatively, the pitch angle range of the point cloud acquisition device (i.e., its vertical sweep range) may be divided into at least two angular regions, again with the point cloud data corresponding to the points in each region forming one subset.
Step 2042, for each of the at least two point cloud data subsets, determining a sequence number of each of the point cloud data subsets based on a preset arrangement direction.
It should be understood that steps 2042-2044 are performed for each of the at least two point cloud data subsets; one subset is described here, and the other subsets are processed in the same way.

The preset arrangement direction is the direction in which the spatial points corresponding to the point cloud data in a subset are arranged. For example, if the subsets are divided by azimuth, the preset arrangement direction may be the direction of varying pitch angle (i.e., the vertical direction); if the subsets are divided by pitch angle, it may be the direction of varying azimuth (i.e., the horizontal direction).

The sequence numbers of the point cloud data may be generated from their coordinates; for example, for one subset of point cloud data, sequence numbers (e.g., elevation_0, elevation_1, ...) are assigned in order of increasing pitch angle.
Step 2043, determining a mapping point subset composed of mapping points corresponding to each point cloud data in the point cloud data subset in the mapping point set.
Since the correspondence between the point cloud data set and the mapping point set has already been determined in step 203, this step can determine the mapping point subset corresponding to each point cloud data subset.
Step 2044, determining as invalid points those mapping points in the mapping point subset whose corresponding point cloud data sequence numbers, taken along the preset arrangement direction, do not conform to the preset arrangement order.
As an example, as shown in fig. 3, if the preset arrangement direction is the direction of varying pitch angle, then in the coordinate system of the point cloud acquisition device the sequence numbers of the three point cloud data items in the subset follow the initial order C, B, A of increasing pitch angle. In the camera coordinate system, however, sorting by increasing pitch angle gives the order C, A, B: the position of A no longer matches the initial order, so the mapping point corresponding to A is determined to be an invalid point. This is the right outcome, because the order changes precisely because object 301 occludes A in the camera coordinate system.

In this embodiment, the point cloud data set is divided into at least two subsets, and invalid mapping points are determined from the arrangement order of the point cloud data within each subset. Because each subset covers a much smaller spatial extent than the whole set, the difficulty of judging the distribution pattern of many points spread over a large space is avoided, and the distribution within each small region can be judged accurately. The validity of the point cloud data can thus be assessed at finer granularity, improving the efficiency of identifying valid point cloud data while preserving the accuracy of the generated depth truth values.
In some alternative implementations, as shown in fig. 5, step 2041 includes:
step 20411, dividing the space into at least two regions according to a first dimension of the space in which the point cloud data set is distributed.
The first dimension may be a preset dimension in any direction. As an example, the first dimension may be the azimuth dimension (or horizontal view-angle dimension), i.e., the lateral scan angle range of the point cloud acquisition device around its own position as origin. The azimuth range of the space in which the point cloud data set is distributed may accordingly be divided into at least two angular regions. Fig. 6A is a top view of the acquisition range of the point cloud acquisition device: the azimuth range is divided into a plurality of angular regions, each of angle α, and the spatial region corresponding to each angular region (bounded by the vertical planes containing the straight lines shown in fig. 6A) is one of the regions into which the space of the point cloud data set is divided.
Alternatively, the first dimension may be the pitch angle dimension (or vertical view-angle dimension), and the space may be divided into at least two regions in a manner similar to the azimuth-based division described above.
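The sketch below illustrates step 20411 for the azimuth case; the region width alpha_deg and the index-array representation of the subsets are illustrative assumptions, and the pitch-angle variant is analogous:

```python
import numpy as np

def split_by_azimuth(points, alpha_deg=2.0):
    """Group point indices into angular regions of width alpha_deg,
    measured around the device origin; each group is one data subset."""
    azim = np.degrees(np.arctan2(points[:, 1], points[:, 0])) % 360.0
    region = (azim // alpha_deg).astype(int)
    return {r: np.flatnonzero(region == r) for r in np.unique(region)}
```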
Step 20412, determining the point cloud data of the same area in the at least two areas as one point cloud data subset, and obtaining at least two point cloud data subsets.
In this embodiment, dividing the space in which the point cloud data set is distributed along the first dimension lets the resulting regions match attributes such as the scanning direction of the point cloud acquisition device and the offsets of that direction, so each point cloud data subset accurately reflects the actual three-dimensional space scanned by the device, which in turn helps judge the validity of the point cloud data more accurately.
In some alternative implementations, as shown in fig. 7, step 2042 includes:
in response to determining that the subset of point cloud data includes first point cloud data having a corresponding laser beam serial number, a serial number of the laser beam serial number is determined 20421 as the serial number of the first point cloud data.
Wherein, the laser beam serial numbers are distributed according to a preset arrangement direction. Specifically, when the point cloud collecting device is a laser radar, the laser radar scans according to a preset scanning angle resolution, so that the scanned point cloud data can contain a laser beam serial number. As shown in fig. 6B, which shows a pitch angle range (or vertical angle of view) in which the laser radar scans in the vertical direction, the vertical angle of view range shown in fig. 6 is [ -25.0 °, +25° ], the number of laser beams is 128, and the laser beam numbers are eleration_0, eleration_1, … …, eleration_127, and thus the order of the laser beam numbers can reflect the arrangement direction of the point cloud data.
Step 20422, in response to determining that the subset of point cloud data includes second point cloud data that does not have a corresponding laser beam serial number, determining a serial number of the second point cloud data in the subset of point cloud data based on a magnitude of coordinate values of the second point cloud data in the subset of point cloud data.
Specifically, in some application scenarios the lidar may lack the function of writing a laser beam serial number into the point cloud data, or an acquisition error may leave serial numbers missing; the acquired point cloud data set then contains second point cloud data without a corresponding laser beam serial number. If a point cloud data subset contains such second point cloud data, its sequence number may be generated from its coordinate values.

As an example, the coordinate values of the second point cloud data include a coordinate representing height, and this height coordinate may be used as the sequence number of the second point cloud data. Alternatively, the pitch angle may be computed from the coordinate values, and the laser beam serial number of the second point cloud data then derived from the pitch angle range and the laser beam serial number range shown in fig. 6B.
It should be appreciated that if the subset of point cloud data includes both the first point cloud data and the second point cloud data, the sequence number of the second point cloud data should be the same dimension as the sequence number of the first point cloud data, i.e., the sequence number of the first point cloud data and the sequence number of the second point cloud data are comparable. For example, the sequence number of the first point cloud data and the sequence number of the second point cloud data may both be pitch angle values.
In this embodiment, when the point cloud data includes a laser beam serial number, the serial number is read directly and used as the sequence number of the point cloud data; when it does not, the sequence number is generated from the coordinate values. The sequence numbers of all point cloud data in a subset can therefore be compared, improving the generality and accuracy of judging the validity of the point cloud data.
In some alternative implementations, step 20422 may be performed as follows:
first, a preset arrangement direction is determined based on a second dimension of a space in which the point cloud data set is distributed.
As an example, when the first dimension is the azimuth dimension, the second dimension may be the pitch angle dimension, and the preset arrangement direction may be the direction of increasing pitch angle; when the first dimension is the pitch angle dimension, the second dimension may be the azimuth dimension, and the preset arrangement direction may be the direction of increasing azimuth.
And then, determining the angle of the second point cloud data in the point cloud data subset relative to the preset arrangement direction based on the coordinate value of the second point cloud data in the point cloud data subset.
Generally, the coordinate values of the second point cloud data are given in a rectangular coordinate system, i.e., they comprise the three values x, y, and z. From x, y, and z the pitch angle can be computed and taken as the angle relative to the preset arrangement direction; alternatively, the azimuth can be computed from x, y, and z and used as that angle.
And finally, determining the sequence number of the second point cloud data in the point cloud data subset based on the angle and the preset angle resolution.
As an example, when the angle is a pitch angle, it may be divided by the pitch angle resolution to obtain the sequence number of the second point cloud data. Alternatively, the value obtained by dividing the pitch angle by the pitch angle resolution may be mapped into the laser beam serial number range shown in fig. 6B, thereby obtaining the sequence number of the second point cloud data.
Similarly, when the angle is an azimuth angle, the sequence number of the second point cloud data may be determined from the azimuth angle and the azimuth angle resolution.
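The following sketch combines steps 20421 and 20422: it uses the recorded laser beam serial number when one exists and otherwise derives a comparable sequence number from the pitch angle and an assumed angular resolution matching the [-25.0°, +25.0°], 128-beam layout of fig. 6B (the NaN convention for a missing beam number is an assumption):

```python
import numpy as np

def sequence_numbers(points, beam_ids, pitch_min=-25.0, pitch_max=25.0, n_beams=128):
    """points: (N, 3) xyz; beam_ids: (N,) float array, NaN where the point
    has no recorded laser beam serial number."""
    resolution = (pitch_max - pitch_min) / (n_beams - 1)   # degrees per beam
    pitch = np.degrees(np.arctan2(points[:, 2], np.hypot(points[:, 0], points[:, 1])))
    derived = np.clip(np.round((pitch - pitch_min) / resolution), 0, n_beams - 1)
    return np.where(np.isnan(beam_ids), derived, beam_ids).astype(int)
```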
In this embodiment, determining the angle of the second point cloud data within its point cloud data subset yields a sequence number that accurately represents the point's position in the subset's ordering, and the sequence numbers of the second point cloud data can be arranged along the same dimension as those of the first point cloud data. All point cloud data within a subset can therefore be compared, which improves the generality and accuracy of determining invalid points in the second depth map.
In some alternative implementations, as shown in fig. 8, step 2044 includes:
step 20441, traversing the mapping points in the mapping point subset according to the preset arrangement sequence of the serial numbers of the corresponding point cloud data, and determining whether the coordinates corresponding to the mapping points traversed currently meet the target extremum condition corresponding to the preset arrangement direction or not compared with the coordinates corresponding to the mapping points traversed currently.
The target extremum condition refers to whether the coordinates corresponding to the mapping points traversed currently are maximum or minimum compared with the coordinates corresponding to the mapping points traversed currently, namely whether the monotonicity of the coordinates of the mapping points is consistent with that of the corresponding sequence numbers is determined. Alternatively, when determining whether the target extremum condition is satisfied based on the coordinates, the determination may be made using x, y, z coordinate values (for example, z coordinate values representing the height in the vertical direction), or may be made using an angle (for example, pitch angle or azimuth angle).
For example, when the preset arrangement sequence is from small to large, the mapping points are traversed according to the sequence from small to large, and if the angle (such as a pitch angle) corresponding to the mapping point currently traversed is larger than the angle corresponding to the mapping point already traversed, it is determined that the mapping point currently traversed meets the first target extremum condition corresponding to the sequence from small to large.
Or when the preset arrangement sequence is from big to small, traversing the mapping points according to the sequence number from big to small, and if the angle corresponding to the mapping point traversed currently is smaller than the angle corresponding to the mapping point traversed currently, determining that the mapping point traversed currently meets a second target extremum condition corresponding to the sequence from big to small.
Step 20442, if the target extremum condition is not satisfied, determining the currently traversed mapping point to be an invalid point.
As shown in fig. 3, the sequence numbers of the point cloud data A, B, and C are 3, 2, and 1 respectively; in the second depth map, the mapping points corresponding to C, B, and A are traversed in ascending sequence-number order.

The first traversal reaches mapping point c, corresponding to C; as the first point visited, c trivially satisfies the target extremum condition.

The second traversal reaches mapping point b, corresponding to B; its pitch angle exceeds that of c, i.e., it is the new maximum, so b satisfies the target extremum condition.

The third traversal reaches mapping point a, corresponding to A; its pitch angle is smaller than that of b, i.e., it is not the maximum among c, b, and a, so a does not satisfy the target extremum condition and is determined to be an invalid point.
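A compact sketch of steps 20441 and 20442 for the ascending case, mirroring the C, B, A walk above (the tuple layout is illustrative):

```python
def find_invalid_points(subset):
    """subset: list of (sequence_number, pitch_angle, point_id) tuples for one
    mapping point subset, sorted by sequence_number ascending."""
    invalid = []
    running_max = float("-inf")
    for _, pitch, point_id in subset:
        if pitch > running_max:
            running_max = pitch       # coordinate order still monotone: keep
        else:
            invalid.append(point_id)  # order broken: point is occluded
    return invalid

# For fig. 3: [(1, pitch_c, "C"), (2, pitch_b, "B"), (3, pitch_a, "A")] with
# pitch_c < pitch_a < pitch_b returns ["A"].
```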
In this way, whether the monotonicity of a mapping point's coordinate change is consistent with the monotonicity of its sequence number is checked along the preset arrangement direction, and mapping points with inconsistent monotonicity are determined to be invalid points; occluded points can thus be detected in a single ordered pass over each subset.
In some alternative implementations, as shown in fig. 9, before step 201, the method further includes:
step 901, acquiring an initial point cloud data set acquired by a point cloud acquisition device aiming at a target scene.
The initial point cloud data set is a set of unprocessed point cloud data directly acquired by the point cloud acquisition equipment.
Step 902, deleting, from the initial point cloud data set, the point cloud data corresponding to a target object based on predetermined spatial information of the target object in the target scene, thereby obtaining the point cloud data set.
The target object may be a pre-specified object; for example, when the point cloud acquisition device and the camera are mounted on a vehicle, the target object is the vehicle itself. The electronic device may determine information such as the position and size of the vehicle using a point-cloud-based target detection method, determine which spatial points in the set represented by the initial point cloud data set lie on the vehicle, and delete the corresponding point cloud data to obtain the point cloud data set.
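A sketch of step 902 under the assumption that the target object is described by an axis-aligned bounding box in the point cloud acquisition device's coordinate system; in practice the box would come from a point-cloud-based detector, and here it is simply an input:

```python
import numpy as np

def remove_target_object(points, box_min, box_max):
    """Drop every point that falls inside the target object's bounding box.
    points: (N, 3); box_min, box_max: length-3 arrays (xyz corners)."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[~inside]
```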
In this embodiment, the target object in the target scene is detected and its corresponding point cloud data is deleted, so that only the relevant point cloud data is retained as the basis for generating the depth map. This avoids interference from point cloud data in the actual scene that need not be processed, improving the accuracy of the generated depth map, and at the same time reduces the volume of data to be processed, improving generation efficiency.
Exemplary apparatus
Fig. 10 is a schematic structural diagram of a depth map generating apparatus according to an exemplary embodiment of the present disclosure. This embodiment may be applied to an electronic device. As shown in fig. 10, the depth map generating apparatus includes: a generating module 1001 for generating a first depth map based on a point cloud data set acquired from a target scene; a mapping module 1002 for mapping the first depth map into the camera coordinate system of a target camera based on the parameters of the target camera to obtain a second depth map; a first determining module 1003 for determining, in the second depth map, a mapping point set composed of the mapping points corresponding to the point cloud data in the point cloud data set; a second determining module 1004 for determining invalid points in the mapping point set based on the coordinate order of the point cloud data in the point cloud data set and deleting them from the mapping point set; and a third determining module 1005 for determining, based on the mapping point set with the invalid points deleted, a third depth map that represents depth truth values under the camera coordinate system.

In this embodiment, the generating module 1001 may generate the first depth map based on the point cloud data set acquired from the target scene. Each point cloud data item in the set generally includes coordinate values representing the position of a point in the target scene under the coordinate system of the point cloud acquisition device 104 shown in fig. 1 (i.e., the coordinate system whose origin is the position of the point cloud acquisition device 104). The depth value of a pixel in the first depth map is obtained from the point cloud data corresponding to that pixel. For example, in a rectangular coordinate system with the device's position as origin, the z axis is the coordinate axis in the vertical direction, the x axis is the coordinate axis along the optical axis of the point cloud acquisition device, and the y axis is the coordinate axis in the horizontal direction; the depth value of a pixel in the first depth map is then the x coordinate value.

In this embodiment, the mapping module 1002 may map the first depth map into the camera coordinate system of the target camera based on the parameters of the target camera to obtain the second depth map. The target camera may be the camera 105 shown in fig. 1; because the camera 105 and the point cloud acquisition device 104 are mounted at different positions, the first depth map needs to be mapped into the camera coordinate system. Camera parameters may include intrinsic parameters, which describe the mapping between points in the camera coordinate system and points in the image plane coordinate system, and extrinsic parameters, which describe the mapping between points in the world coordinate system and points in the camera coordinate system. As an example, the parameters of the point cloud acquisition device 104 may be used to map the pixels of the first depth map into the world coordinate system, the world-coordinate points may then be mapped into the camera coordinate system of the target camera, and the camera-coordinate points may be mapped onto the image plane using the parameters of the target camera, yielding the second depth map.

In this embodiment, the first determining module 1003 may determine, in the second depth map, the mapping point set composed of the mapping points corresponding to the point cloud data in the point cloud data set. Since the mapping between the pixels of the first depth map and the pixels of the second depth map has already been obtained, and the mapping between the point cloud data and the pixels of the first depth map is known, this mapping point set can be determined in the second depth map.

In this embodiment, the second determining module 1004 may determine invalid points in the mapping point set based on the coordinate order of the point cloud data in the point cloud data set, and delete them from the mapping point set. Specifically, the coordinate order of the point cloud data reflects the spatial arrangement of the set of spatial points indicated by the point cloud data set. Because each spatial point represented by point cloud data lies on an object surface, if no object occludes a spatial point in the camera coordinate system, the coordinate order of the mapping points in the camera coordinate system is consistent with the coordinate order of the point cloud data in the point cloud data set. If occlusion occurs, the arrangement of the coordinates of occluded and non-occluded points becomes disordered, and the out-of-order mapping points can be determined to be invalid points.

In this embodiment, the third determining module 1005 may determine, based on the mapping point set with the invalid points deleted, the third depth map representing depth truth values under the camera coordinate system. Specifically, the depth values of the pixels corresponding to invalid points in the second depth map may be deleted or set to a preset depth value, yielding the third depth map. The depth value of each pixel in the third depth map is a depth truth value that accurately reflects the distance between the target camera and the object corresponding to that pixel in the image captured by the target camera.
Referring to fig. 11, fig. 11 is a schematic structural view of a depth map generating apparatus according to another exemplary embodiment of the present disclosure.
In some alternative implementations, the second determining module 1004 includes: a dividing unit 10041, configured to divide the point cloud data set into at least two point cloud data subsets; a first determining unit 10042, configured to determine, for each of at least two point cloud data subsets, a sequence number of each of the point cloud data subsets based on a preset arrangement direction; a second determining unit 10043, configured to determine, in a mapping point set, a mapping point subset composed of mapping points corresponding to each point cloud data in the point cloud data subset; a third determining unit 10044 is configured to determine, from the subset of mapped points, that mapped points, for which the sequence numbers of the corresponding point cloud data do not conform to the preset arrangement sequence, are invalid points according to the preset arrangement direction.
In some alternative implementations, the partitioning unit 10041 includes: a dividing subunit 100411, configured to divide the space into at least two areas according to a first dimension of the space distributed by the point cloud data set; the first determining subunit 100412 is configured to determine point cloud data of a same area in at least two areas as one point cloud data subset, and obtain at least two point cloud data subsets.
In some alternative implementations, the first determining unit 10042 includes: a second determining subunit 100421, configured to determine, in response to determining that the point cloud data subset includes first point cloud data having a corresponding laser beam serial number, that the laser beam serial number is the serial number of the first point cloud data, where the laser beam serial numbers are distributed according to a preset arrangement direction; the third determining subunit 100422 is configured to determine, in response to determining that the subset of point cloud data includes second point cloud data that does not have the corresponding laser beam serial number, the serial number of the second point cloud data in the subset of point cloud data based on the magnitude of the coordinate value of the second point cloud data in the subset of point cloud data.
In some alternative implementations, the third determining subunit is further configured to: determining a preset arrangement direction based on a second dimension of the space in which the point cloud data set is distributed; determining an angle of the second point cloud data in the point cloud data subset relative to a preset arrangement direction based on the magnitude of the coordinate value of the second point cloud data in the point cloud data subset; and determining the sequence number of the second point cloud data in the point cloud data subset based on the angle and the preset angle resolution.
In some alternative implementations, the third determining unit 10044 includes: a fourth determining subunit 100441, configured to traverse the mapping points in the mapping point subset according to a preset arrangement sequence of serial numbers of the corresponding point cloud data, and determine whether a coordinate corresponding to the mapping point traversed currently meets a target extremum condition corresponding to the preset arrangement direction when compared with a coordinate corresponding to the mapping point traversed currently; and a fifth determining subunit 100442, configured to determine the mapping point currently traversed to be an invalid point if the target extremum condition is not satisfied.
In some alternative implementations, the apparatus further includes: an acquisition module 1006, configured to acquire an initial point cloud data set acquired by a point cloud acquisition device for a target scene; the deleting module 1007 is configured to delete, from the initial point cloud data set, point cloud data corresponding to the target object based on spatial information of the target object in the predetermined target scene, thereby obtaining a point cloud data set.
With the depth map generating apparatus provided by the embodiments of the present disclosure, a mapping point set corresponding to the point cloud data set is determined in the depth map under the camera coordinate system, invalid points are identified in the mapping point set based on the coordinate order of the point cloud data in the point cloud data set, and the invalid points are deleted from the mapping point set, yielding a low-noise depth map. The apparatus requires neither complex image processing nor consistency checks of the point cloud's mapping points across multiple sensors: the coordinate order of the point cloud data alone is used to judge the validity of the spatial distribution of the points and to find the invalid points that are inconsistent with the camera's viewing direction under the camera coordinate system, which effectively reduces the noise of the depth truth values in the depth map and generates a high-quality depth map efficiently and at low cost.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present disclosure is described with reference to fig. 12. The electronic device may be the terminal device 101 or the server 103 shown in fig. 1 (or both), or a stand-alone device independent of them that communicates with the terminal device 101 and the server 103 to receive acquired input signals from them.
Fig. 12 shows a block diagram of an electronic device according to an embodiment of the disclosure.
As shown in fig. 12, the electronic device 1200 includes one or more processors 1201 and memory 1202.
The processor 1201 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 1200 to perform desired functions.
Memory 1202 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, random access memory (RAM) and/or cache memory. Non-volatile memory may include, for example, read-only memory (ROM), hard disks, and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 1201 may execute the program instructions to implement the depth map generation method of the various embodiments of the present disclosure described above and/or other desired functions. Various content such as point cloud data sets and two-dimensional images may also be stored in the computer-readable storage medium.
In one example, the electronic device 1200 may further include: an input device 1203 and an output device 1204, which are interconnected via a bus system and/or other forms of connection mechanism (not shown).
For example, when the electronic device is the terminal device 101 or the server 103, the input device 1203 may be a point cloud collecting device, a camera, a mouse, a keyboard, or the like, for inputting a point cloud data set, a two-dimensional image, or the like. When the electronic device is a stand-alone device, the input means 1203 may be a communication network connector for receiving the input point cloud data set, two-dimensional image, etc. from the terminal device 101 and the server 103.
The output device 1204 may output various information to the outside, including the generated depth map and the like. The output device 1204 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, for simplicity, fig. 12 shows only some of the components of the electronic device 1200 that are relevant to the present disclosure; components such as buses and input/output interfaces are omitted. In addition, the electronic device 1200 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage medium
In addition to the methods and apparatus described above, embodiments of the present disclosure may also provide a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the depth map generation method of the various embodiments of the present disclosure described in the "exemplary methods" section above.
The computer program product may include program code for performing the operations of embodiments of the present disclosure, written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ and conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform the steps in the depth map generation method of the various embodiments of the present disclosure described in the "exemplary method" section above.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present disclosure have been described above in connection with specific embodiments, but the advantages, benefits, effects, and the like mentioned in this disclosure are merely examples and should not be taken as necessarily possessed by every embodiment of the present disclosure. Furthermore, the specific details disclosed herein are provided only for purposes of illustration and ease of understanding; the present disclosure is not limited to being practiced with these specific details.
Various modifications and alterations to this disclosure may be made by those skilled in the art without departing from the spirit and scope of the present application. Thus, the present disclosure is intended to cover such modifications and alterations insofar as they fall within the scope of the appended claims or their equivalents.

Claims (10)

1. A depth map generation method, comprising:
generating a first depth map based on a point cloud data set acquired for a target scene;
mapping the first depth map to a camera coordinate system of a target camera based on parameters of the target camera to obtain a second depth map;
determining, in the second depth map, a mapping point set consisting of mapping points corresponding to the point cloud data included in the point cloud data set;
determining invalid points from the mapping point set based on a coordinate order of the point cloud data in the point cloud data set, and deleting the invalid points from the mapping point set;
and determining a third depth map representing depth truth values under the camera coordinate system based on the mapping point set after the invalid points are deleted.
2. The method of claim 1, wherein the determining invalid points from the mapping point set based on the coordinate order of the point cloud data in the point cloud data set comprises:
dividing the point cloud data set into at least two point cloud data subsets;
for each of the at least two point cloud data subsets, determining a sequence number of each point cloud data in the point cloud data subset based on a preset arrangement direction;
determining, in the mapping point set, a mapping point subset consisting of the mapping points corresponding to each point cloud data in the point cloud data subset;
and determining, from the mapping point subset and according to the preset arrangement direction, mapping points whose corresponding point cloud data sequence numbers do not conform to the preset arrangement order as invalid points.
3. The method of claim 2, wherein the dividing the point cloud data set into at least two point cloud data subsets comprises:
dividing the space in which the point cloud data set is distributed into at least two areas according to a first dimension of the space;
and determining the point cloud data located in the same one of the at least two areas as one point cloud data subset, thereby obtaining the at least two point cloud data subsets.
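As an illustration of claim 3, the sketch below partitions a point cloud by treating the first dimension as the elevation angle of each point; the choice of dimension, the equal-width binning, and the bin count of 64 are all assumptions made for the example.

import numpy as np

def split_by_elevation(points, num_bins=64):
    # elevation angle of each Nx3 point above the sensor's x-y plane
    horizontal_range = np.linalg.norm(points[:, :2], axis=1)
    elevation = np.arctan2(points[:, 2], horizontal_range)
    # equal-width angle bins spanning the observed elevation range
    edges = np.linspace(elevation.min(), elevation.max(), num_bins + 1)
    bin_ids = np.clip(np.digitize(elevation, edges) - 1, 0, num_bins - 1)
    # points falling in the same area form one point cloud data subset
    return [np.flatnonzero(bin_ids == b) for b in range(num_bins)]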
4. The method of claim 2, wherein the determining a sequence number of each point cloud data in the point cloud data subset based on the preset arrangement direction comprises:
in response to determining that the point cloud data subset contains first point cloud data having corresponding laser beam serial numbers, determining the laser beam serial numbers as the sequence numbers of the first point cloud data, wherein the laser beam serial numbers are assigned according to the preset arrangement direction;
and in response to determining that the point cloud data subset contains second point cloud data having no corresponding laser beam serial number, determining the sequence number of the second point cloud data in the point cloud data subset based on the magnitude of the coordinate values of the second point cloud data in the point cloud data subset.
5. The method of claim 4, wherein the determining the sequence number of the second point cloud data in the point cloud data subset based on the magnitude of the coordinate values of the second point cloud data in the point cloud data subset comprises:
determining the preset arrangement direction based on a second dimension of the space in which the point cloud data set is distributed;
determining an angle of the second point cloud data in the point cloud data subset relative to the preset arrangement direction based on the coordinate values of the second point cloud data in the point cloud data subset;
and determining the sequence number of the second point cloud data in the point cloud data subset based on the angle and the preset angle resolution.
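A minimal sketch of the claim-5 fallback, assuming the angle in question is the horizontal azimuth measured against the +x axis and assuming an illustrative angular resolution of 0.2 degrees per sequence step:

import numpy as np

def azimuth_sequence_numbers(points, angular_resolution_deg=0.2):
    # horizontal angle of each Nx3 point, measured against the +x axis
    azimuth = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    azimuth = np.mod(azimuth, 360.0)     # fold into [0, 360)
    # one sequence-number step per angular-resolution increment
    return np.floor(azimuth / angular_resolution_deg).astype(int)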
6. The method of claim 2, wherein the determining, from the mapping point subset and according to the preset arrangement direction, mapping points whose corresponding point cloud data sequence numbers do not conform to the preset arrangement order as invalid points comprises:
traversing the mapping points in the mapping point subset according to the preset arrangement order of the sequence numbers of the corresponding point cloud data, and determining whether the coordinates corresponding to the currently traversed mapping point, when compared with the coordinates corresponding to the mapping points already traversed, meet a target extremum condition corresponding to the preset arrangement direction;
and if the target extremum condition is not met, determining the currently traversed mapping point to be an invalid point.
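The extremum check of claim 6 amounts to a single monotonic sweep. The sketch below assumes the mapping points of one subset are already sorted by sequence number and that the target extremum is a running maximum of the projected horizontal coordinate u; both readings are assumptions made for illustration, since the claim leaves the direction and the extremum condition open.

import numpy as np

def filter_by_coordinate_order(u_sorted):
    # u_sorted: projected horizontal pixel coordinates of one mapping point
    # subset, already sorted by the corresponding point cloud sequence numbers
    valid = np.ones(len(u_sorted), dtype=bool)
    running_max = -np.inf                # target extremum for this direction
    for i, u in enumerate(u_sorted):
        if u <= running_max:
            valid[i] = False             # fell behind an earlier point: occluded
        else:
            running_max = u
    return valid

A point whose projected coordinate falls back behind the running maximum lies behind an already-seen surface from the camera's viewpoint, which is why coordinate order alone suffices to flag it without any additional sensor.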
7. The method of any one of claims 1-6, wherein, before the generating of the first depth map based on the point cloud data set acquired for the target scene, the method further comprises:
acquiring an initial point cloud data set collected by a point cloud acquisition device for the target scene;
and deleting point cloud data corresponding to a target object from the initial point cloud data set based on predetermined spatial information of the target object in the target scene, to obtain the point cloud data set.
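A minimal sketch of the claim-7 pre-filter, assuming the predetermined spatial information of the target object (for example, the data-collection vehicle itself) takes the form of an axis-aligned bounding box in the point cloud frame; the box extents below are placeholders.

import numpy as np

def remove_target_object(points, box_min=(-2.5, -1.2, -2.0), box_max=(2.5, 1.2, 0.5)):
    # drop every point that falls inside the target object's bounding box
    lo, hi = np.asarray(box_min), np.asarray(box_max)
    inside = np.all((points >= lo) & (points <= hi), axis=1)
    return points[~inside]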
8. A depth map generation apparatus comprising:
a generating module, configured to generate a first depth map based on a point cloud data set acquired for a target scene;
a mapping module, configured to map the first depth map to a camera coordinate system of a target camera based on parameters of the target camera, to obtain a second depth map;
a first determining module, configured to determine, in the second depth map, a mapping point set that is formed by mapping points corresponding to point cloud data included in the point cloud data set;
a second determining module, configured to determine an invalid point from the mapping point set based on a coordinate order of the point cloud data in the point cloud data set, and delete the invalid point from the mapping point set;
and the third determining module is used for determining a third depth map representing a depth true value under the camera coordinate system based on the mapping point set after the invalid points are deleted.
9. A computer readable storage medium storing a computer program for execution by a processor to implement the method of any one of claims 1-7.
10. An electronic device, the electronic device comprising:
a processor;
a memory for storing executable instructions of the processor;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method of any one of claims 1-7.
CN202310585457.2A 2023-05-23 2023-05-23 Depth map generation method and device, computer readable storage medium and electronic equipment Pending CN116645406A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310585457.2A CN116645406A (en) 2023-05-23 2023-05-23 Depth map generation method and device, computer readable storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116645406A true CN116645406A (en) 2023-08-25

Family

ID=87618180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310585457.2A Pending CN116645406A (en) 2023-05-23 2023-05-23 Depth map generation method and device, computer readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116645406A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination