WO2020133206A1 - Radar simulation method and device (雷达仿真方法及装置) - Google Patents

Radar simulation method and device (雷达仿真方法及装置)

Info

Publication number
WO2020133206A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
depth
value
pixels
depth map
Prior art date
Application number
PCT/CN2018/124822
Other languages
English (en)
French (fr)
Inventor
黎晓键
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2018/124822
Priority to CN201880072069.1A
Publication of WO2020133206A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/40 Means for monitoring or calibrating
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/02 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B11/03 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring coordinates of points
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867 Combination of radar systems with cameras

Definitions

  • the invention relates to the technical field of radar, and in particular to a radar simulation method and device.
  • In the prior art, a radar sensor that emits electromagnetic waves as a cone-shaped beam, for example a millimeter wave radar, can be applied to scenarios with high reliability requirements because of its advantages such as reliability. Millimeter wave radar, as a sensor with accurate ranging, a long detection distance, and all-weather operation, has become an indispensable sensor in autonomous driving technology.
  • Embodiments of the present invention provide a radar simulation method and device, which are used to solve the problem that, in prior-art technologies relying on a radar sensor emitting a cone-shaped beam, cumbersome field testing and high development costs are difficult to avoid during development.
  • an embodiment of the present invention provides a radar simulation method, including:
  • determining, according to a depth map of a current frame, a plurality of pixels in the depth map, where the depth value of a pixel in the depth map represents the distance between the pixel and a camera, and the distances represented by the depth values of the plurality of pixels are less than the distances represented by the depth values of other pixels in the depth map; and
  • outputting a detection result of a radar sensor according to the depth value of each of the plurality of pixels;
  • where the depth map satisfies a field of view (FOV) condition of the camera, and the FOV condition of the camera corresponds to the detection range of the radar sensor.
  • an embodiment of the present invention provides a radar simulation device including: a processor and a memory;
  • the memory is used to store program codes
  • the processor calls the program code, and when the program code is executed, it is used to perform the following operations:
  • determining, according to a depth map of a current frame, a plurality of pixels in the depth map, where the depth value of a pixel in the depth map represents the distance between the pixel and a camera, and the distances represented by the depth values of the plurality of pixels are less than the distances represented by the depth values of other pixels in the depth map; and
  • outputting a detection result of a radar sensor according to the depth value of each of the plurality of pixels;
  • where the depth map satisfies a field of view (FOV) condition of the camera, and the FOV condition of the camera corresponds to the detection range of the radar sensor.
  • An embodiment of the present invention provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, the computer program includes at least one piece of code, and the at least one piece of code can be executed by a computer to control the computer to execute the radar simulation method according to any one of the first aspect.
  • an embodiment of the present invention provides a computer program, characterized in that, when the computer program is executed by a computer, it is used to implement the radar simulation method according to any one of the first aspect.
  • The radar simulation method and device provided in the embodiments of the present invention determine a plurality of pixels in the depth map according to the depth map of the current frame, and output the detection result of the radar sensor according to the depth value of each of the plurality of pixels.
  • In this way, the target points detected by a radar sensor that emits a cone-shaped beam can be obtained through simulation, and the detection result of the radar sensor can be further obtained through simulation, realizing simulation of a radar sensor that emits a cone-shaped beam. This makes it possible to avoid the cumbersome field testing and high development costs that result from relying on a real cone-beam radar sensor during development.
  • FIG. 1 is a schematic flowchart of a radar simulation method provided by an embodiment of the present invention
  • FIG. 2A is a schematic diagram of a detection range provided by an embodiment of the present invention.
  • FIG. 2B is a schematic diagram of a depth map provided by an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of a radar simulation method provided by another embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of a radar simulation method according to another embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of a radar simulation method according to another embodiment of the present invention.
  • FIG. 6 is a schematic diagram of an output detection structure provided by an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of a radar simulation method according to another embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a radar simulation device provided by an embodiment of the present invention.
  • the radar simulation method provided by the embodiment of the present invention can realize the simulation of a radar sensor whose emitted electromagnetic wave is a cone beam, and the detection result of the radar sensor that emits the cone beam can be obtained through simulation by means of software calculation.
  • The radar simulation method provided by the embodiments of the present invention can be applied to any development scenario that relies on a radar sensor emitting a cone-shaped beam, and can avoid dependence on a real cone-beam radar sensor during development, thereby solving the problem that field testing is cumbersome and development costs are high because the development process relies on a real cone-beam radar sensor.
  • the radar sensor emitting the cone beam may specifically be a millimeter wave radar sensor.
  • FIG. 1 is a schematic flowchart of a radar simulation method provided by an embodiment of the present invention.
  • the execution subject of this embodiment may be a radar simulation device for implementing radar simulation, and may specifically be a processor of the radar simulation device.
  • the method of this embodiment may include:
  • Step 101 Determine multiple pixels in the depth map according to the depth map of the current frame.
  • the depth value of the pixel in the depth map represents the distance between the pixel and the camera, and the distance represented by the depth value of the plurality of pixels is less than that of other pixels in the depth map The distance indicated by the depth value.
  • For example, assuming that the depth map includes 12 pixels, namely pixel 1 to pixel 12, and the distances represented by the depth values of pixel 1 to pixel 12 increase in sequence, the plurality of pixels determined according to the depth map may specifically be pixel 1 to pixel 3 of the 12 pixels.
  • the depth map satisfies the field of view (FOV, Field of View) condition of the camera, and the FOV condition of the camera corresponds to the detection range of the radar sensor.
  • Here, since the beam emitted by the radar sensor is a cone-shaped beam, its detection range is similar to the shooting range of the camera. Therefore, when the depth map satisfies the FOV condition corresponding to the detection range of the radar sensor, the distance to the camera represented by the depth value of a pixel in the depth map can simulate the distance to the radar sensor, so that the plurality of pixels closest to the radar sensor can be determined according to the depth values of the pixels in the depth map.
  • the multiple pixel points may be understood as multiple target points detected by the simulated radar sensor, and the multiple pixel points may correspond to the multiple target points one-to-one.
  • It should be understood that the FOV condition of the camera corresponding to the detection range of the radar sensor may mean that the two coincide or that there is a mapping relationship between them.
  • Optionally, the FOV condition of the camera corresponding to the detection range of the radar sensor may specifically mean that the FOV condition of the camera is exactly the same as the detection range of the radar sensor, or that the FOV condition of the camera is approximately the same as the detection range of the radar sensor.
  • For example, when a larger depth value indicates a shorter distance to the camera, the depth map corresponding to the detection range in front of the vehicle X1 in the scene shown in FIG. 2A may be as shown in FIG. 2B.
  • Optionally, the number of the plurality of pixels may be a preset number, for example, 64. Assuming that the depth map includes 128 pixels, pixel 1 to pixel 128, and the depth values of pixel 1 to pixel 128 decrease in sequence, then when the preset number is equal to 64 and a larger depth value indicates a shorter distance to the camera, the plurality of pixels may specifically be pixel 1 to pixel 64.
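  • As a minimal illustration of this pixel-selection step, the following Python sketch (an assumption-laden example, not the patented implementation) keeps a preset number of closest pixels from a depth map; it assumes the depth value directly represents the metric distance so that smaller values mean closer points, and the ordering would be inverted if a larger value indicates a closer point:

```python
import numpy as np

def select_nearest_pixels(depth_map, num_points=64):
    """Return (row, col, depth_value) for the num_points pixels whose
    depth values indicate the shortest distance to the camera."""
    rows, cols = depth_map.shape
    flat = depth_map.reshape(-1)
    # Indices of the num_points smallest distances (order not guaranteed).
    idx = np.argpartition(flat, num_points)[:num_points]
    ys, xs = np.unravel_index(idx, (rows, cols))
    return list(zip(ys.tolist(), xs.tolist(), flat[idx].tolist()))

# Example: a synthetic 64x64 depth map, keep the 64 closest pixels.
depth_map = np.random.uniform(1.0, 250.0, size=(64, 64)).astype(np.float32)
targets = select_nearest_pixels(depth_map, num_points=64)
```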
  • the present invention may not limit the manner of acquiring the depth map.
  • it can be obtained by camera shooting, or it can also be obtained by rendering.
  • It should be noted that the radar sensor may perform detection at a certain sampling frequency.
  • Specifically, the detection range of the radar sensor at the previous sampling time can correspond to the previous frame, and multiple pixels can be determined according to the depth map of the previous frame;
  • the detection range of the radar sensor at the current sampling time can correspond to the depth map of the current frame, and multiple pixels can be determined according to the depth map of the current frame;
  • the detection range of the radar sensor at the next sampling time can correspond to the depth map of the next frame, and multiple pixel points can be determined according to the depth map of the next frame.
  • the depth map of the current frame may be the same as or different from the depth map of the previous frame, and the depth map of the current frame may be the same as or different from the depth map of the next frame.
  • Step 102 Output the detection result of the radar sensor according to the depth value of each pixel in the plurality of pixels.
  • the depth value of a pixel point can represent the distance between the pixel point and the camera
  • Therefore, according to the depth value of each of the plurality of pixels, the distance between each pixel and the camera can be obtained.
  • Further, since the plurality of pixels are the target points detected by the simulated radar sensor, and the detection range of the radar sensor corresponds to the FOV of the camera, the distance between a pixel and the camera is the simulated distance between the target point detected by the radar sensor and the radar sensor.
  • Therefore, according to the depth values of the plurality of pixels, the output detection result of the radar sensor may include the distance between each pixel and the camera (which can be understood as the distance between each target point detected by the radar sensor and the radar sensor).
  • the detection result of the radar sensor may also include motion information of each pixel relative to the radar sensor.
  • the motion information of each pixel relative to the radar sensor may be the same, for example, all are preset motion information.
  • the movement information may include, for example, movement direction, movement speed and the like.
  • the depth value of the pixel can be directly used as the distance between the pixel and the radar sensor; or, the depth value of the pixel can be mathematically calculated to obtain the distance between the pixel and the radar sensor.
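  • As a sketch of how a stored depth value might be turned into a reported distance and assembled into a detection result, the following example assumes a simple linear mapping with a 250 m maximum range (both the mapping and the range are illustrative assumptions, not values required by the method); it also orders the detections from nearest to farthest, as discussed further below:

```python
def depth_value_to_distance(depth_value, max_value=255, max_range_m=250.0):
    """One possible mapping from a stored depth value to a metric distance."""
    return depth_value / max_value * max_range_m

def build_detection_result(selected_pixels):
    """selected_pixels: iterable of (row, col, depth_value) tuples.
    Returns simulated detections ordered from nearest to farthest."""
    detections = [
        {"pixel": (r, c), "distance_m": depth_value_to_distance(v)}
        for r, c, v in selected_pixels
    ]
    detections.sort(key=lambda d: d["distance_m"])
    return detections
```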
  • FIG. 3 is a schematic flowchart of a radar simulation method according to another embodiment of the present invention. Based on the embodiment shown in FIG. 1, this embodiment mainly describes an optional implementation manner of step 102. As shown in FIG. 3, the method of this embodiment may include:
  • Step 301 Determine multiple pixels in the depth map according to the depth map of the current frame.
  • step 301 is similar to step 101 and will not be repeated here.
  • Step 302 Output the detection result of the radar sensor according to the depth value of each pixel in the plurality of pixels and the motion information of each pixel relative to the camera.
  • In this step, since the plurality of pixels are the target points detected by the simulated radar sensor, and the detection range of the radar sensor corresponds to the FOV of the camera, the motion information of a pixel relative to the camera is the simulated motion information of the target point detected by the radar sensor relative to the radar sensor. Therefore, the output detection result of the radar sensor may include the distance between each pixel and the camera and the motion information of each pixel relative to the camera (which can be understood as the motion information of each target point relative to the radar sensor).
  • the correspondence between each pixel in the depth map and the motion information may be stored, and the motion information corresponding to one pixel is the motion information of the pixel relative to the camera. Further optionally, the correspondence between each pixel point and the motion information may be set in advance, or may be set by the user.
  • the motion information of the pixels relative to the camera may be determined according to the label map corresponding to the depth map.
  • the label map is used to indicate the object to which each pixel in the depth map belongs, and the object corresponds to the motion information.
  • the label map can indicate the object to which each pixel belongs by different color labels.
  • When the color labels corresponding to two pixels in the label map are both a first color, it can indicate that the two pixels belong to the same object, namely the object represented by the first color.
  • the color labels corresponding to all pixels belonging to the road railing X2 may be dark green
  • the color labels corresponding to all pixels belonging to the distant house X3 may be light green.
  • the method of this embodiment may further include the following steps A and B.
  • Step A According to the label map corresponding to the depth map, determine the object to which each pixel of the plurality of pixels belongs.
  • Here, assuming that the depth map includes the depth values of 100 pixels, pixel 1 to pixel 100, the label map includes the object to which each of the 100 pixels belongs.
  • Further, assuming that the plurality of pixels determined according to the depth map are pixel 10, pixel 20, pixel 30, and pixel 40, the object to which each of pixel 10, pixel 20, pixel 30, and pixel 40 belongs can be obtained according to the label map.
  • the object may specifically be any object that can be detected by the radar sensor, such as the ground, buildings on the ground, and so on.
  • Step B Determine the motion information of each pixel according to the object to which each pixel of the plurality of pixels belongs.
  • Here, since each object corresponds to motion information, the motion information of each pixel relative to the camera, that is, the motion information of each pixel relative to the radar sensor, can be obtained according to the object to which each pixel belongs.
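  • A minimal sketch of steps A and B follows; the object names, target identification numbers, and motion values are purely hypothetical placeholders used to show how a label map can map each selected pixel to an object and then to motion information:

```python
# Hypothetical per-object annotations (object -> motion info / target ID).
OBJECT_MOTION = {
    "road_railing": {"speed_mps": 0.0, "heading_deg": 0.0},
    "house":        {"speed_mps": 0.0, "heading_deg": 0.0},
    "vehicle":      {"speed_mps": 8.0, "heading_deg": 90.0},
}
OBJECT_TARGET_ID = {"road_railing": 1, "house": 2, "vehicle": 3}

def annotate_pixels(selected_pixels, label_map):
    """label_map[row][col] gives the object name for that pixel,
    standing in for the colour labels described above."""
    annotated = []
    for r, c, depth_value in selected_pixels:
        obj = label_map[r][c]
        annotated.append({
            "pixel": (r, c),
            "depth_value": depth_value,
            "object": obj,
            "target_id": OBJECT_TARGET_ID.get(obj),
            "motion": OBJECT_MOTION.get(obj),
        })
    return annotated
```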
  • Optionally, when conversion between the depth value and the distance is needed, step 302 may specifically include: determining the distance between each of the plurality of pixels and the camera according to the respective depth values of the plurality of pixels; and outputting the detection result of the radar sensor based on the distance between each of the plurality of pixels and the camera and the motion information of each pixel relative to the camera.
  • the detection result may further include identification information indicating the object to which each pixel belongs.
  • Further optionally, the objects may correspond one-to-one with target identification numbers; the method further includes: determining the target identification number of each pixel (which can be understood as the target identification number of each target point) according to the object to which each of the plurality of pixels belongs.
  • Correspondingly, outputting the detection result of the radar sensor according to the depth value of each of the plurality of pixels includes: outputting the detection result of the radar sensor according to the depth value of each of the plurality of pixels and the target identification number of each pixel.
  • the output detection result of the radar sensor may include the distance between each target point and the camera, and the target identification number of each target point.
  • Further optionally, outputting the detection result of the radar sensor according to the depth value of each of the plurality of pixels and the motion information of each pixel relative to the camera includes: outputting the detection result of the radar sensor according to the depth value of each of the plurality of pixels, the motion information of each pixel relative to the camera, and the target identification number of each pixel.
  • Here, the output detection result of the radar sensor may include the distance between each target point and the radar sensor, the motion information of each target point relative to the radar sensor, and the target identification number of each target point.
  • In this embodiment, a plurality of pixels in the depth map are determined according to the depth map of the current frame, and the detection result of the radar sensor is output according to the depth value of each of the plurality of pixels and the motion information of each pixel relative to the camera.
  • This realizes simulation of the motion information of the target points sampled by a real radar sensor, so that the simulated detection result of the radar sensor includes the distance between each target point and the radar sensor and the motion speed of each target point relative to the radar sensor.
  • FIG. 4 is a schematic flowchart of a radar simulation method according to another embodiment of the present invention. On the basis of the embodiments shown in FIG. 1 and FIG. 3, this embodiment mainly describes how the accuracy loss of the radar sensor is taken into account in the simulation and its effect on the detection result of the radar sensor. As shown in FIG. 4, the method of this embodiment may include:
  • Step 401 According to the correspondence between depth levels and depth value ranges, update the depth value of each pixel in the depth map of the current frame to the maximum value of the depth value range corresponding to the depth level to which the depth value belongs, to obtain an updated depth map.
  • Here, updating the depth value of each pixel in the depth map of the current frame to the maximum value of the depth value range corresponding to its depth level represents the loss of depth-value accuracy within a depth-level range: all depth values within the same depth-level range are updated to one fixed depth value of that range, namely the maximum value of the range. It can be understood that, according to the characteristics of the accuracy loss of the radar sensor, the depth values within the same depth-level range can alternatively be updated to another depth value of that range, such as the minimum value of the range.
  • Optionally, the depth value of a pixel in the depth map may specifically be any one of the 256 integers from 0 to 255.
  • It can be understood that the higher the detection accuracy of the radar sensor, the finer the granularity into which the depth levels and depth value ranges can be divided. For example, the depth values may be divided into 4 depth levels, depth level 1 to depth level 4, where:
  • depth level 1 corresponds to depth value range 0 to 63;
  • depth level 2 corresponds to depth value range 64 to 127;
  • depth level 3 corresponds to depth value range 128 to 192;
  • depth level 4 corresponds to depth value range 193 to 255.
  • Optionally, to facilitate calculation, before step 401 the method may further include: normalizing the depth value of each pixel in the depth map of the current frame.
  • Correspondingly, step 401 may specifically include: updating, according to the correspondence between depth levels and depth value ranges, the depth value of each pixel in the normalized depth map to the maximum value of the depth value range corresponding to the depth level to which it belongs, to obtain the updated depth map.
  • It can be understood that, in this case, the depth value ranges are also ranges after normalization.
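  • The following Python sketch illustrates this precision-loss step under the stated assumptions (normalized depth values and 4 equally sized depth levels; both the level count and the equal split are illustrative choices, not requirements of the method):

```python
import numpy as np

def quantize_depth(depth_map, num_levels=4):
    """Replace every normalized depth value (0.0-1.0) with the maximum
    of the depth-level range it falls into, mimicking the precision
    loss of the radar sensor."""
    normalized = depth_map.astype(np.float32)
    normalized /= max(float(normalized.max()), 1e-6)   # normalize to [0, 1]
    edges = np.linspace(0.0, 1.0, num_levels + 1)      # level boundaries
    level = np.clip(np.digitize(normalized, edges[1:-1]), 0, num_levels - 1)
    return edges[level + 1]                            # max value of each level
```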
  • Step 402 Determine a plurality of pixels in the depth map according to the updated depth map.
  • It should be noted that the specific manner of determining the plurality of pixels in the updated depth map according to the updated depth map in step 402 is similar to the specific manner of determining the plurality of pixels in the depth map according to the depth map in step 101, and is not repeated here.
  • Step 403 Output the detection result of the radar sensor according to the depth value of each pixel in the plurality of pixels.
  • step 403 is similar to step 102 or step 302 and will not be repeated here.
  • In this embodiment, the updated depth map is obtained by updating, according to the correspondence between depth levels and depth value ranges, the depth value of each pixel in the depth map of the current frame to the maximum value of the depth value range corresponding to its depth level. The plurality of pixels in the depth map are then determined based on the updated depth map, realizing simulation of the accuracy loss of the radar sensor.
  • FIG. 5 is a schematic flowchart of a radar simulation method according to another embodiment of the present invention.
  • On the basis of the foregoing embodiments, this embodiment mainly describes an optional implementation manner of obtaining the depth map of the current frame in the simulation.
  • the method of this embodiment may include:
  • Step 501 Obtain the original depth map of the current frame.
  • the original depth map of the current frame can be obtained by image rendering. Further optionally, the original depth map of the current frame in a certain stereo scene can be obtained through image rendering according to the motion of the radar sensor.
  • the movement of the radar sensor may specifically include the movement of the carrier carrying the radar sensor.
  • each frame in multiple frames may be used as the current frame in sequence at the target frequency, where the multiple frames are consecutive multiple frames related to a stereoscopic scene.
  • the target frequency may be equal to the sampling frequency of the real radar sensor to simulate the sampling frequency of the radar sensor.
  • the target frequency is 20 Hz.
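  • As a small illustration of driving the simulation at the sampling frequency, the sketch below iterates over consecutive rendered frames of one 3D scene and treats each as the current frame at an assumed 20 Hz target frequency; the callback name is a placeholder, not part of the described method:

```python
import time

def run_simulation(frames, process_frame, target_hz=20.0):
    """Use each rendered frame as the 'current frame' in turn, paced at
    the radar sensor's assumed sampling frequency."""
    period = 1.0 / target_hz
    for frame in frames:                 # consecutive frames of one 3D scene
        start = time.monotonic()
        process_frame(frame)             # e.g. render depth map, output detections
        # Sleep off the remainder of the sampling period, if any.
        time.sleep(max(0.0, period - (time.monotonic() - start)))
```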
  • Step 502 Determine the yaw angle and pitch angle of each pixel in the original depth map in the camera coordinate system of the camera according to the depth value and two-dimensional coordinates of each pixel in the original depth map and the parameters of the camera.
  • the position of each pixel in the camera coordinate system can be determined according to the camera parameters and the two-dimensional coordinates of each pixel in the original image.
  • the parameters of the camera may include internal parameters of the camera.
  • the yaw angle and pitch angle of each pixel point in the camera coordinate system can be determined according to the position of each pixel point in the original depth map under the camera coordinate system and the depth value of each pixel point.
  • the yaw angle and pitch angle of each pixel point in the camera coordinate system can be understood as the yaw angle and pitch angle of each pixel point relative to the radar sensor.
  • Step 503 According to the pitch angle and yaw angle of each pixel in the original depth map in the camera coordinate system, set the depth values of the pixels in the original depth map whose pitch angle and yaw angle do not satisfy the FOV condition to a preset value, to obtain the depth map of the current frame.
  • the preset value may be used to indicate that the distance to the camera is greater than the maximum detection distance of the radar sensor.
  • the pitch angle and yaw angle of a pixel point do not satisfy the FOV condition, it can indicate that it is outside the detection range of the radar sensor, and the radar sensor will not detect the point as the target point.
  • setting the depth value of a pixel to a preset value can be understood as excluding the pixel from the target point that can be detected by the radar sensor.
  • the FOV conditions may include horizontal FOV conditions and vertical FOV conditions.
  • the horizontal FOV condition can be used to express the restriction on the horizontal FOV for the radar sensor.
  • the vertical FOV condition can be used to express the restriction on the vertical FOV for the radar sensor.
  • the horizontal FOV condition satisfies the condition that the further the distance from the camera is, the smaller the horizontal FOV value is, and the closer to the camera is, the greater the horizontal FOV value is.
  • Further optionally, the horizontal FOV condition includes:
  • when the distance to the camera is less than or equal to a first distance threshold, the horizontal FOV value is equal to a first FOV value;
  • when the distance to the camera is greater than the first distance threshold and less than or equal to a second distance threshold, the horizontal FOV value is equal to a second FOV value;
  • when the distance to the camera is greater than the second distance threshold and less than or equal to a third distance threshold, the horizontal FOV value is equal to a third FOV value;
  • when the distance to the camera is greater than the third distance threshold, the horizontal FOV value is equal to a fourth FOV value;
  • where the second distance threshold is greater than the first distance threshold and less than the third distance threshold; the second FOV value is greater than the third FOV value and less than the first FOV value, and the third FOV value is greater than the fourth FOV value.
  • the first distance threshold is equal to 70m
  • the second distance threshold is equal to 160 meters
  • the third distance threshold is equal to 250 meters.
  • the first FOV value is equal to 60°
  • the second FOV value is equal to 8°
  • the third FOV value is equal to 4°
  • the fourth FOV value is equal to 0°.
  • the vertical FOV condition may satisfy the condition independent of the distance of the camera. Further optionally, the vertical FOV condition includes: the vertical FOV value is equal to 10°.
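  • A hedged sketch of steps 502 and 503 follows; it assumes a simple pinhole camera model with intrinsics fx, fy, cx, cy, a depth map that already stores metric distances, and the example thresholds and FOV values listed above (70/160/250 meters, 60°/8°/4°/0° horizontal and 10° vertical), none of which are mandated by the method:

```python
import numpy as np

def apply_fov_condition(depth, fx, fy, cx, cy, far_value=1e6):
    """Compute per-pixel yaw/pitch angles from a pinhole-camera model and
    set pixels outside the distance-dependent horizontal FOV or the fixed
    vertical FOV to far_value (the preset 'beyond maximum range' value)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    yaw = np.degrees(np.arctan2(u - cx, fx))       # horizontal angle per pixel
    pitch = np.degrees(np.arctan2(v - cy, fy))     # vertical angle per pixel

    # Distance-dependent horizontal FOV, using the example values above.
    horiz_fov = np.select(
        [depth <= 70.0, depth <= 160.0, depth <= 250.0],
        [60.0, 8.0, 4.0],
        default=0.0,
    )
    inside = (np.abs(yaw) <= horiz_fov / 2.0) & (np.abs(pitch) <= 10.0 / 2.0)
    out = depth.copy()
    out[~inside] = far_value                       # exclude from detectable targets
    return out
```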
  • Optionally, as shown in FIG. 6, when the original depth map is obtained through image rendering, the above depth map may be obtained through a depth-map rendering step, and the above label map may be obtained through a label-map rendering step.
  • the above steps 501 to 503 can be understood as steps for rendering a depth map.
  • Further, based on the depth map and the label map obtained by rendering, the detection result of the radar sensor may be output according to the depth map and the label map in the manner described in the foregoing embodiments. It should be noted that the depth map and the label map of the same frame may be rendered based on the same shooting range of the camera in the same 3D scene, and that shooting range is the detection range of the radar sensor.
  • In this embodiment, the original depth map of the current frame is obtained, and the yaw angle and pitch angle of each pixel in the original depth map in the camera coordinate system of the camera are determined according to the depth value and two-dimensional coordinates of each pixel in the original depth map and the parameters of the camera. Further, the depth values of the pixels in the original depth map that do not satisfy the FOV condition are set to the preset value to obtain the depth map of the current frame, thereby obtaining a depth map that satisfies the FOV condition of the camera, that is, the depth map corresponding to the detection range of the radar sensor.
  • FIG. 7 is a schematic flowchart of a radar simulation method according to another embodiment of the present invention. Based on the foregoing embodiment, this embodiment mainly describes the influence of ground clutter on the detection result of the radar sensor in the simulation. As shown in FIG. 7, the method of this embodiment may include:
  • Step 701 Determine multiple pixels in the depth map according to the depth map of the current frame.
  • step 701 is similar to step 101 and will not be repeated here.
  • Step 702 Determine that the ground point in the current frame interferes with the detection result of the current frame.
  • the ground point is used to simulate the influence of ground clutter on the detection result of the radar sensor, that is, the radar sensor recognizes the ground point as the target point due to the influence of the ground clutter. Since the influence of the ground clutter is not always present, the ground point of the current frame can be determined to interfere with the detection result of the current frame through step 702.
  • Considering that, for a real radar sensor, once a ground point has been identified as a target point it continues to be identified as a target point while it remains within the detection range of the radar sensor, optionally, to improve the realism of simulating the influence of ground clutter on the detection result, step 702 may specifically include: if the ground point in the previous frame of the current frame interferes with the detection result of the previous frame, the ground point in the current frame interferes with the detection result of the current frame.
  • Further optionally, when the detection result of the previous frame is not interfered with by the ground point, the detection result of the current frame may or may not be interfered with by the ground point; in this case, whether the detection result of the current frame is interfered with by the ground point can be determined according to a probability.
  • the method of this embodiment further includes: if the ground point in the previous frame does not interfere with the detection result of the previous frame, determining the ground point in the current frame to interfere with the current frame according to the target probability Detection results.
  • Optionally, the target probability may be a preset probability, or may be positively correlated with a target frame number. Further optionally, the larger the target frame number, the greater the target probability; the target frame number is the number of consecutive frames, up to and including the current frame, in which the ground point has not interfered with the detection result.
  • For example, assume 11 consecutive frames, frame 1 to frame 11, where the detection results of frames 1 to 3 are interfered with by a ground point and frame 4 does not include the ground point that interfered with the detection results in frames 1 to 3.
  • The target frame number corresponding to frame 4 is 1 and the target probability is probability 1; if the detection result of frame 4 is not interfered with by the ground point, the target frame number corresponding to frame 5 is 2 and the target probability is probability 2; if the detection result of frame 5 is not interfered with, the target frame number corresponding to frame 6 is 3 and the target probability is probability 3; and if the detection result of frame 6 is not interfered with, the target frame number corresponding to frame 7 is 4 and the target probability is probability 4, where probability 4 > probability 3 > probability 2 > probability 1.
  • Further, assume that the detection result of frame 7 is interfered with by the ground point, that frames 8 and 9 include the ground point that interfered with the detection result in frame 7, and that frame 10 does not include that ground point. Then the target frame number corresponding to frame 10 is 1 and the target probability is probability 5; if the detection result of frame 10 is not interfered with by the ground point, the target frame number corresponding to frame 11 is 2 and the target probability is probability 6, where probability 6 > probability 5.
  • the ground point may be a randomly selected point. It can be understood that, when the randomness of the ground clutter is not considered, the ground point may also be a preset point.
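  • The following Python sketch is one possible way to reproduce this frame-to-frame behaviour: interference by a ground point starts with a probability that grows with the number of consecutive clean frames, and persists while the point stays in range; the base probability and growth step are illustrative assumptions, not values taken from the method:

```python
import random

class GroundClutterModel:
    """Per-frame decision on whether a ground point interferes with the
    detection result, following the persistence/probability behaviour
    described above (probability schedule assumed)."""

    def __init__(self, base_probability=0.05, step=0.05):
        self.base = base_probability
        self.step = step
        self.clean_frames = 0        # consecutive frames without interference
        self.interfering = False

    def update(self, ground_point_in_range=True):
        if self.interfering and ground_point_in_range:
            return True              # interference persists, as in frames 8-9 above
        self.interfering = False
        self.clean_frames += 1
        target_probability = min(1.0, self.base + self.step * self.clean_frames)
        if random.random() < target_probability:
            self.interfering = True  # start interfering from this frame on
            self.clean_frames = 0
        return self.interfering
```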
  • Step 703 Determine the distance between the ground point and the camera.
  • In this step, it can be understood that the ground point is a pixel corresponding to the ground in the current frame; therefore, the distance between the ground point and the camera can be determined from the depth value of the ground point.
  • Step 704 Output the detection result of the radar sensor according to the depth value of each pixel in the plurality of pixels and the distance between the ground point and the camera.
  • In this step, according to the depth value of each of the plurality of pixels and the distance between the ground point and the camera, the output detection result of the radar sensor may include the distance between each pixel and the camera, and the distance between the ground point and the camera (which can be understood as the distance, relative to the radar sensor, of the target point that the radar sensor detects under the interference of ground clutter).
  • the detection result of the radar sensor may also be output according to the motion information of each pixel relative to the camera. That is, the output radar detection result may further include: motion information of each pixel point relative to the camera, and motion information of the ground point relative to the camera. Since the absolute speed of the ground point is 0, the motion information of the ground point relative to the camera can be obtained according to the motion state of the camera.
  • In this embodiment, it is determined that the ground point in the current frame interferes with the detection result of the current frame, the distance between the ground point and the camera is determined, and the detection result of the radar sensor is output according to the depth value of each of the plurality of pixels and the distance between the ground point and the camera.
  • The detection result of the radar sensor can thus include the influence of ground clutter, realizing simulation of ground clutter affecting the radar sensor and thereby improving the realism of the simulation.
  • In the above embodiments, optionally, outputting the detection result of the radar sensor according to the depth value of each of the plurality of pixels includes: outputting the detection results of the radar sensor in sequence, in order of the distances indicated by the depth values from smallest to largest.
  • a computer-readable storage medium is also provided in an embodiment of the present invention.
  • the computer-readable storage medium stores program instructions, and when the program is executed, it may include some or all of the radar simulation methods in the foregoing method embodiments step.
  • An embodiment of the present invention provides a computer program which, when executed by a computer, is used to implement the radar simulation method in any of the above method embodiments.
  • FIG. 8 is a schematic structural diagram of a radar simulation device provided by an embodiment of the present invention.
  • The radar simulation device 800 of this embodiment may include: a memory 801 and a processor 802; the memory 801 and the processor 802 may be connected by a bus.
  • The memory 801 may include a read-only memory and a random access memory, and provides instructions and data to the processor 802.
  • A part of the memory 801 may further include a non-volatile random access memory.
  • the memory 801 is used to store program codes.
  • The processor 802 calls the program code, and when the program code is executed, it is used to perform the following operations: determining, according to a depth map of a current frame, a plurality of pixels in the depth map, where the depth value of a pixel in the depth map represents the distance between the pixel and a camera, and the distances represented by the depth values of the plurality of pixels are less than the distances represented by the depth values of other pixels in the depth map; and outputting a detection result of a radar sensor according to the depth value of each of the plurality of pixels; where the depth map satisfies the FOV condition of the camera, and the FOV condition of the camera corresponds to the detection range of the radar sensor.
  • the processor 802 is configured to output the detection result of the radar sensor according to the depth value of each pixel in the plurality of pixels, specifically including:
  • the detection result of the radar sensor is output according to the depth value of each of the plurality of pixels and the motion information of each pixel relative to the camera.
  • the processor 802 is further used to:
  • determine, according to the label map corresponding to the depth map, the object to which each of the plurality of pixels belongs; the label map is used to indicate the object to which each pixel in the depth map belongs, and the object corresponds to motion information;
  • the motion information of each pixel is determined according to the object to which each pixel of the plurality of pixels belongs.
  • the objects correspond to the target identification numbers in one-to-one correspondence; the processor 802 is also used to:
  • determine the target identification number of each pixel according to the object to which each of the plurality of pixels belongs; and the outputting of the detection result of the radar sensor according to the depth value of each of the plurality of pixels includes:
  • the detection result of the radar sensor is output according to the depth value of each pixel in the plurality of pixels and the target identification number of each pixel.
  • In a possible implementation, the processor 802 being configured to output the detection result of the radar sensor according to the depth value of each of the plurality of pixels and the motion information of each pixel relative to the camera specifically includes:
  • determining the distance between each of the plurality of pixels and the camera according to the respective depth values of the plurality of pixels; and outputting the distance between each of the plurality of pixels and the camera and the motion information of each pixel relative to the camera as the detection result of the radar sensor.
  • In a possible implementation, the processor 802 is further configured to: update, according to the correspondence between depth levels and depth value ranges, the depth value of each pixel in the depth map of the current frame to the maximum value of the depth value range corresponding to the depth level to which it belongs, to obtain an updated depth map.
  • The processor 802 being configured to determine a plurality of pixels in the depth map according to the depth map of the current frame specifically includes: determining the plurality of pixels in the depth map according to the updated depth map.
  • the processor 802 is further used to:
  • the depth value of each pixel in the depth map of the current frame is normalized.
  • In a possible implementation, the processor 802 is further configured to: obtain an original depth map of the current frame; determine the yaw angle and pitch angle of each pixel in the original depth map in the camera coordinate system of the camera according to the depth value and two-dimensional coordinates of each pixel in the original depth map and the parameters of the camera; and
  • set, according to the pitch angle and yaw angle of each pixel in the original depth map in the camera coordinate system, the depth values of the pixels in the original depth map whose pitch angle and yaw angle do not satisfy the FOV condition to a preset value, to obtain the depth map of the current frame; the preset value is used to indicate that the distance to the camera is greater than the maximum detection distance of the radar sensor.
  • In a possible implementation, the processor 802 being configured to obtain the original depth map of the current frame specifically includes: obtaining the original depth map of the current frame through image rendering.
  • the processor 802 is further configured to sequentially use each frame in multiple frames as the current frame at a target frequency, where the multiple frames are consecutive multiple frames related to a stereoscopic scene .
  • the target frequency is 20 Hz.
  • In a possible implementation, the processor 802 is further configured to: determine that a ground point in the current frame interferes with the detection result of the current frame; and determine the distance between the ground point and the camera;
  • the processor 802 is configured to output the detection result of the radar sensor according to the depth value of each pixel in the plurality of pixels, specifically including:
  • the detection result of the radar sensor is output according to the depth value of each pixel in the plurality of pixels and the distance between the ground point and the camera.
  • In a possible implementation, the processor 802 being configured to determine that a ground point in the current frame interferes with the detection result of the current frame specifically includes: if the ground point in the previous frame of the current frame interferes with the detection result of the previous frame, the ground point in the current frame interferes with the detection result of the current frame.
  • In a possible implementation, the processor 802 is further configured to:
  • if the ground point in the previous frame does not interfere with the detection result of the previous frame, determine, according to a target probability, that the ground point in the current frame interferes with the detection result of the current frame.
  • the greater the number of target frames the greater the target probability.
  • the target frame number is the number of consecutive frames that continue to the current frame and the ground point does not interfere with the detection result.
  • the ground point is randomly selected.
  • the processor 802 is configured to output the detection result of the radar sensor according to the depth value of each pixel in the plurality of pixels, specifically including:
  • the detection results of the radar sensor are output in order according to the order of the distance indicated by the depth value from small to large.
  • In a possible implementation, the FOV condition includes a horizontal FOV condition and a vertical FOV condition.
  • In a possible implementation, the horizontal FOV condition includes:
  • when the distance to the camera is less than or equal to a first distance threshold, the horizontal FOV value is equal to a first FOV value;
  • when the distance to the camera is greater than the first distance threshold and less than or equal to a second distance threshold, the horizontal FOV value is equal to a second FOV value;
  • when the distance to the camera is greater than the second distance threshold and less than or equal to a third distance threshold, the horizontal FOV value is equal to a third FOV value;
  • when the distance to the camera is greater than the third distance threshold, the horizontal FOV value is equal to a fourth FOV value;
  • where the second distance threshold is greater than the first distance threshold and less than the third distance threshold; the second FOV value is greater than the third FOV value and less than the first FOV value, and the third FOV value is greater than the fourth FOV value.
  • the radar sensor is a millimeter wave radar sensor.
  • the radar simulation device provided in this embodiment can be used to execute the technical solutions of the above method embodiments of the present invention, and its implementation principles and technical effects are similar, and are not repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

A radar simulation method and device. The radar simulation method includes: determining, according to a depth map of a current frame, a plurality of pixels in the depth map (101), where the depth value of a pixel in the depth map represents the distance between the pixel and a camera, and the distances represented by the depth values of the plurality of pixels are less than the distances represented by the depth values of other pixels in the depth map; and outputting a detection result of a radar sensor according to the depth value of each of the plurality of pixels (102). Simulation of a radar sensor that emits a cone-shaped beam is thereby realized.

Description

雷达仿真方法及装置 技术领域
本发明涉及雷达技术领域,尤其涉及一种雷达仿真方法及装置。
背景技术
目前,随着自动化水平的不断提高,雷达传感器的应用越来越广泛。
现有技术中,对于发射的电磁波是锥状波束的雷达传感器,例如,毫米波雷达,由于其可靠等优点,可以应用于对可靠性要求较高的场景。例如,在自动驾驶技术日趋完善的今天,毫米波雷达作为一个测距精准、检测距离远、全天候工作的传感器,已经是自动驾驶技术中一个不可或缺的传感器了。
但是,目前依赖于发射锥状波束的雷达传感器的技术,开发过程中难以避免实地测试繁琐、开发成本大的问题。
发明内容
本发明实施例提供一种雷达仿真方法及装置,用于解决现有技术中依赖于发射锥状波束的雷达传感器的技术,开发过程中难以避免实地测试繁琐、开发成本大的问题。
第一方面,本发明实施例提供一种雷达仿真方法,包括:
根据当前帧的深度图,确定所述深度图中的多个像素点,所述深度图中像素点的深度值表示所述像素点与摄像机之间的距离,且所述多个像素点的深度值表示的距离,小于所述深度图中其他像素点的深度值表示的距离;
根据所述多个像素点中各像素点的深度值,输出雷达传感器的探测结果;
其中,所述深度图满足所述摄像机的视场角FOV条件,所述摄像机的FOV条件与所述雷达传感器的探测范围相对应。
第二方面,本发明实施例提供一种雷达仿真装置包括:处理器和存储器;
所述存储器,用于存储程序代码;
所述处理器,调用所述程序代码,当程序代码被执行时,用于执行以下 操作:
根据当前帧的深度图,确定所述深度图中的多个像素点,所述深度图中像素点的深度值表示所述像素点与摄像机之间的距离,且所述多个像素点的深度值表示的距离,小于所述深度图中其他像素点的深度值表示的距离;
根据所述多个像素点中各像素点的深度值,输出雷达传感器的探测结果;
其中,所述深度图满足所述摄像机的视场角FOV条件,所述摄像机的FOV条件与所述雷达传感器的探测范围相对应。
第三方面,本发明实施例提供一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,所述计算机程序包含至少一段代码,所述至少一段代码可由计算机执行,以控制所述计算机执行如权利要求第一方面任一项所述的雷达仿真方法。
第四方面,本发明实施例提供一种计算机程序,其特征在于,当所述计算机程序被计算机执行时,用于实现如第一方面任一项所述的雷达仿真方法。
本发明实施例提供的种雷达仿真方法及装置,通过根据当前帧的深度图,确定深度图中的多个像素点,并根据多个像素点中各像素点的深度值,输出雷达传感器的探测结果,使得能够通过仿真得到发射锥形波束的雷达传感器探测到的目标点,并进一步仿真得到雷达传感器的探测结果,实现了对于发射锥形波束的雷达传感器的仿真,从而使得可以避免开发过程中由于依赖真实的发射锥状波束的雷达传感器,而导致的实地测试繁琐、开发成本大等的问题。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本发明一实施例提供的雷达仿真方法的流程示意图;
图2A为本发明一实施例提供的探测范围的示意图;
图2B为本发明一实施例提供的深度图的示意图;
图3为本发明另一实施例提供的雷达仿真方法的流程示意图;
图4为本发明又一实施例提供的雷达仿真方法的流程示意图;
图5为本发明又一实施例提供的雷达仿真方法的流程示意图;
图6为本发明一实施例提供的输出探测结构的示意图;
图7为本发明又一实施例提供的雷达仿真方法的流程示意图;
图8本发明一实施例提供的雷达仿真装置的结构示意图。
具体实施方式
为使本发明实施例的目的、技术方案和优点更加清楚,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
本发明实施例提供的雷达仿真方法可以实现对发射的电磁波是锥状波束的雷达传感器的仿真,通过软件运算的方式,仿真得到发射锥状波束的雷达传感器的探测结果。本发明实施例提供的雷达仿真方法可以应用于任何依赖发射锥状波束的雷达传感器的开发场景,可以避免在开发过程中对于真实的发射锥状波束的雷达传感器的依赖,从而解决了开发过程中由于依赖真实的发射锥状波束的雷达传感器,而导致的实地测试繁琐、开发成本大的问题。
可选的,发射锥状波束的雷达传感器具体可以为毫米波雷达传感器。
下面结合附图,对本发明的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。
图1为本发明一实施例提供的雷达仿真方法的流程示意图,本实施例的执行主体可以为用于实现雷达仿真的雷达仿真装置,具体可以为雷达仿真装置的处理器。如图1所示,本实施例的方法可以包括:
步骤101,根据当前帧的深度图,确定所述深度图中的多个像素点。
本步骤中,所述深度图中像素点的深度值表示所述像素点与摄像机之间的距离,且所述多个像素点的深度值表示的距离,小于所述深度图中其他像素点的深度值表示的距离。例如,假设深度图中包括12个像素点,分别为像素点1至像素点12,且像素点1至像素点12的深度值所表示的距离越来越大,则根据所述深度图确定的所述多个像素点,具体可以为12个像素点中的 像素点1至像素点3。
其中,所述深度图满足所述摄像机的视场角(FOV,Field of View)条件,所述摄像机的FOV条件与所述雷达传感器的探测范围相对应。
这里,由于雷达传感器发射的波束是锥状波束,其探测范围与摄像机的拍摄范围是类似的,因此在深度图满足与雷达传感器的探测范围相对应的视场角条件时,可以通过所述深度图中像素点的深度值所表示的与摄像机之间的距离,模拟与雷达传感器之间的距离,从而根据深度图中像素点的深度值,可以确定距离雷达传感器最近的多个像素点。这里,该多个像素点可以理解为仿真得到的雷达传感器探测到的多个目标点,多个像素点可以与多个目标点一一对应。应理解,这里摄像机的FOV条件与所述雷达传感器的探测范围相对应可以是一致也可以是有映射关系。
可选的,所述摄像机的FOV条件与所述雷达传感器的探测范围相对应,具体可以为所述摄像机的FOV条件与所述雷达传感器的探测范围完全相同,或者所述摄像机的FOV条件与所述雷达传感器的探测范围近似相同。
例如,当深度值越大表示与摄像机之间的距离越近时,对于图2A所示的场景中车辆X1前方的探测范围对应的的深度图可以如图2B所示。
可选的,所述多个像素点的数量可以为预设数量,例如64。假设深度图包括像素点1至像素点128,128个像素点,且像素点1至像素点128,深度值依次减小,则当预设数量等于64且深度值越大表示与摄像机之间的距离越近时,所述多个像素点具体可以为像素点1至像素点64。
需要说明的是,对于深度图的获取方式,本发明可以不作限定。例如,可以通过相机拍摄得到,或者,也可以通过渲染得到。
需要说明的是,当雷达传感器可以以一定的采样频率进行探测。具体的,雷达传感器上一采样时刻的探测范围,可以对应上一帧,可以根据上一帧的深度图确定多个像素点;雷达传感器当前采样时刻的探测范围,可以对应当前帧的深度图,可以根据当前帧的深度图确定多个像素点;雷达传感器下一采样时刻的探测范围,可以对应下一帧的深度图,可以根据下一帧的深度图确定多个像素点。可以理解的是,当前帧的深度图可以与上一帧的深度图相同或者不同,当前帧的深度图可以与下一帧的深度图相同或者不同。
步骤102,根据所述多个像素点中各像素点的深度值,输出雷达传感器 的探测结果。
本步骤中,由于一个像素点的深度值可以表示该像素点与摄像机之间的距离,因此,根据多个像素点中各像素点的深度值,可以得到各像素点与雷摄像机之间的距离。进一步的,由于该多个像素点即为仿真得到的雷达传感器探测到的目标点,且雷达传感器的探测范围与摄像机的FOV相对应,因此该像素点与摄像机之间的距离,即为仿真得到的雷达传感器探测到的目标点与雷达传感器之间的距离。
因此,根据多个所述像素点中各像素点的深度值,输出的雷达传感器的探测结果可以包括各像素点与摄像机之间的距离(可以理解为雷达传感器探测到的各目标点与雷达传感器之间的距离)。
需要说明的是,除了距离之外,可选的,雷达传感器的探测结果还可以包括各像素点相对于雷达传感器的运动信息。进一步可选的,各像素点相对于雷达传感器的运动信息可以相同,例如都为预设运动信息。其中,所述运动信息例如可以包括运动方向、运动速度等。
可选的,可以直接将像素点的深度值作为像素点与雷达传感器之间的距离;或者,也可以对像素点的深度值进行数学运算,得到像素点与雷达传感器之间的距离。
本实施例中,通过根据当前帧的深度图,确定深度图中的多个像素点,并根据多个像素点中各像素点的深度值,输出雷达传感器的探测结果,使得能够通过仿真得到发射锥形波束的雷达传感器探测到的目标点,并进一步仿真得到雷达传感器的探测结果,实现了对于发射锥形波束的雷达传感器的仿真,从而使得可以避免开发过程中由于依赖真实的发射锥状波束的雷达传感器,而导致的实地测试繁琐、开发成本大等的问题。
图3为本发明另一实施例提供的雷达仿真方法的流程示意图,本实施例在图1所示实施例的基础上,主要描述了步骤102的一种可选的实现方式。如图3所示,本实施例的方法可以包括:
步骤301,根据当前帧的深度图,确定所述深度图中的多个像素点。
需要说明的是,步骤301与步骤101类似,在此不再赘述。
步骤302,根据所述多个像素点中各像素点的深度值,以及各像素点相对于所述摄像机的运动信息,输出雷达传感器的探测结果。
本步骤中,由于该多个像素点即为仿真得到的雷达传感器探测到的目标点,且雷达传感器的探测范围与摄像机的FOV相对应,因此该像素点相对于摄像机的运动信息,即为仿真得到的雷达传感器探测到的目标点相对于雷达传感器的运动信息。因此,根据多个所述像素点中各像素点的深度值,以及各像素点相对于摄像机的运动信息,输出的雷达传感器的探测结果可以包括各像素点与摄像机之间的距离(可以理解为雷达传感器探测到的各目标点与雷达传感器之间的距离),各像素点相对于摄像机的运动信息(可以理解为雷达传感器探测到的各目标点相对于雷达传感器的运动信息)。
可选的,可以存储有深度图中各像素点与运动信息的对应关系,一个像素点对应的运动信息即为该像素点相对于摄像机的运动信息。进一步可选的,各像素点与运动信息的对应关系可以预先设置,或者也可以由用户设置。
或者,可选的,可以根据与所述深度图对应的标签图,确定像素点相对于摄像机的运动信息。其中,所述标签图用于指示所述深度图中各像素点所属的对象,所述对象与运动信息对应。例如,标签图可以通过不同的颜色标签,指示各像素点所属的对象,当标签图中两个像素点对应的颜色标签均为第一颜色时,可以表示这两个像素点属于同一个对象,均为该第一颜色所代表的对象。例如,如图2B所示的深度图中,属于道路栏杆X2的所有像素点对应的颜色标签可以均为深绿色,属于远处房屋X3的所有像素点对应的颜色标签可以均为浅绿色。
进一步可选的,本实施例的方法还可以包括如下步骤A和步骤B。
步骤A,根据所述深度图对应的标签图,确定所述多个像素点中各像素点所属的对象。
这里,假设深度图包括像素点1至像素点100,100个像素中各像素点的深度值,则标签图中包括该100个像素点中各像素点的所属的对象。进一步的,假设根据深度图确定的多个像素点为像素点10、像素点20、像素点30和像素点40,则根据标签图可以得到像素点10、像素点20、像素点30和像素点40中各像素点所属的对象。
其中,对象具体可以为雷达传感器可以探测到的任意对象,例如地面、地面上的建筑物等。
步骤B,根据所述多个像素点中各像素点所属的对象,确定各像素点的 运动信息。
这里,由于对象与运动信息对应,因此根据各像素点所属的对象,可以得到各像素点相对于摄像机的运动信息,即各像素点相对于雷达传感器的运动信息。
可选的,当所述深度值与距离之间需要转换时,步骤302具体可以包括:根据所述多个像素点各自的深度值,确定所述多个像素点中各像素点与所述摄像机之间的距离;将所述多个像素点中各像素点与所述摄像机之间的距离,以及各像素点相对于所述摄像机的运动信息,输出雷达传感器的探测结果。
可选的,为了便于开发,上述探测结果中还可以包括用于指示各像素点所属的对象的识别信息。进一步可选的,所述对象可以与目标识别号一一对应;所述方法还包括:根据所述多个像素点中各像素点所属的对象,确定各像素点的目标识别号(可以理解为各目标点的目标识别号)。
相应的,上述根据所述多个像素点中各像素点的深度值,输出雷达传感器的探测结果,包括:根据所述多个像素点中各像素点的深度值,以及各像素点的目标识别号,输出雷达传感器的探测结果。这里,输出的所述雷达传感器的探测结果可以包括各目标点与摄像机之间的距离,以及各目标点的目标识别号。
进一步可选的,上述根据所述多个像素点中各像素点的深度值,以及各像素点相对于所述摄像机的运动信息,输出雷达传感器的探测结果,包括:
根据所述多个像素点中各像素点的深度值,各像素点相对于所述摄像机的运动信息以及各像素点的目标识别号,输出雷达传感器的探测结果。这里,输出的所述雷达传感器的探测结果可以包括各目标点与雷达传感器之间的距离,各目标点相对于雷达传感器的运动信息,以及各目标点的目标识别号。
本实施例中,通过根据当前帧的深度图,确定深度图中的多个像素点,根据多个像素点中各像素点的深度值,以及各像素点相对于摄像机的运动信息,输出雷达传感器的探测结果,实现了对于真实的雷达传感器采样到的目标点的运动信息的仿真,使得仿真得到的雷达传感器的探测结果中包括目标点与雷达传感器之间的距离,以及目标点相对于雷达传感器的运动速度。
图4为本发明又一实施例提供的雷达仿真方法的流程示意图,本实施例在图1、图3所示实施例的基础上,主要描述了在仿真中,考虑到雷达传感 器的精度损失对于雷达传感器的探测结果的影响。如图4所示,本实施例的方法可以包括:
步骤401,根据深度等级与深度值范围的对应关系,将当前帧的深度图中各像素点的深度值更新为所属深度等级对应的深度值范围的最大值,得到更新后的所述深度图。
这里,将当前帧的深度图中各像素点的深度值更新为所属深度等级对应的深度值范围的最大值,可以表示损失深度等级范围内的深度值的精度,将同一深度等级范围内的深度值均更新为该深度等级范围内的固定的深度值,即该深度等级范围内的最大值。可以理解的是,根据雷达传感器精度损失的特点,可替换的,可以将同一深度等级范围内的深度值均更新为该深度等级范围内的其他深度值,例如该深度等级范围内的最小值。
可选的,深度图中的像素点的深度值具体可以为0至255,256个整数中的任意一个。
可以理解的是,雷达传感器的探测精度越高,则深度等级与深度范围划分的粒度可以越细;例如,可以划分为深度等级1至深度等级4,4个深度等级,且深度等级1对应的深度值范围0至63,深度等级2对应的深度值范围64至127,深度等级3对应的深度值范围128至192,深度等级4对应的深度值范围193至255。雷达传感器的探测精度越低,则深度等级与深度范围划分的粒度可以越粗;例如,可以划分为深度等级1至深度等级6,6个深度等级,且深度等级1对应的深度值范围0至42,深度等级2对应的深度值范围43至85,深度等级3对应的深度值范围86至128,深度等级4对应的深度值范围129至171,深度等级5对应的深度值范围172至214,深度等级6对应的深度值范围215至255。
可选的,为了便于计算,步骤401之前还可以包括:对所述当前帧的深度图中各像素点的深度值进行归一化。相应的,步骤401具体可以包括根据深度等级与深度值范围的对应关系,将归一化后的深度图中各像素点的深度值更新为所属深度等级对应的深度值范围的最大值,得到更新后的所述深度图。可以理解的是,这里,深度值范围也是归一化之后的范围。
步骤402,根据更新后的所述深度图,确定所述深度图中的多个像素点。
需要说明的是,步骤402中根据更新后的深度图确定更新后的深度图中 的多个像素点的具体方式,与步骤101中根据深度图,确定深度图中的多个像素点的具体方式类似,在此不再赘述。
步骤403,根据所述多个像素点中各像素点的深度值,输出雷达传感器的探测结果。
需要说明的是,步骤403与步骤102或步骤302类似,在此不再赘述。
本实施例中,通过根据深度等级与深度值范围的对应关系,将当前帧的深度图中各像素点的深度值更新为所属深度等级对应的深度值范围的最大值,得到更新后的深度图,根据更新后的深度图,确定深度图中的多个像素点,实现了对于雷达传感器的精度损失的仿真。
图5为本发明又一实施例提供的雷达仿真方法的流程示意图,本实施例在前述实施例的基础上,主要描述了在仿真中,获得上述当前帧的深度图的一种可选的实现方式。如图5所示,本实施例的方法可以包括:
步骤501,获得当前帧的原始深度图。
本步骤中,可选的,可以通过图像渲染,得到当前帧的原始深度图。进一步可选的,可以根据雷达传感器的运动,通过图像渲染,得到在一定的立体场景中的当前帧的原始深度图。可选的,雷达传感器的运动,具体可以包括搭载雷达传感器的载体的运动。
在上述实施例中,可以以目标频率依次将多帧中的各帧作为所述当前帧,所述多帧为与一个立体场景相关的连续的多个帧。当通过图像渲染得到原始深度图时,具体的可以通过图像渲染,得到在该立体场景中的连续的多个帧各自的原始深度图。其中,所述目标频率可以与真实的雷达传感器的采样频率相等,以仿真雷达传感器的采样频率。可选的,所述目标频率为20赫兹。
步骤502,根据所述原始深度图中各像素点的深度值和二维坐标,以及所述摄像机的参数,确定所述原始深度图中各像素点在所述摄像机的摄像机坐标系下的偏航角和俯仰角。
本步骤中,具体的,可以根据摄像机的参数、原始图像中各像素点的二维坐标,确定各像素点在摄像机坐标系下的位置。其中,摄像机的参数可以包括摄像机的内参。进一步的,可以根据原始深度图中各像素点在摄像机坐标系下的位置,以及各像素点的深度值,确定各像素点在所述摄像机坐标系下的偏航角和俯仰角。这里,各像素点在所述摄像机坐标系下的偏航角和俯 仰角,可以理解为各像素点相对于雷达传感器的偏航角和俯仰角。
步骤503,根据所述原始深度图中各像素点在所述摄像机坐标系下的俯仰角和偏航角,将所述原始深度图中俯仰角和偏航角不满足所述FOV条件的像素点的深度值设置为预设值,得到所述当前帧的深度图。
本步骤中,所述预设值可以用于表示与所述摄像机的距离大于所述雷达传感器的最大探测距离。当一个像素点的俯仰角和偏航角不满足FOV条件时,可以表示处于雷达传感器的探测范围之外,雷达传感器不会将该点探测为目标点。这里,将一个像素点的深度值设置为预设值,可以理解为将该像素点排除在雷达传感器能够探测到的目标点之外。
可选的,本发明实施例中,当真实的雷达传感器对于水平FOV和垂直FOV均存在限制时,所述FOV条件可以包括水平FOV条件和垂直FOV条件。其中,水平FOV条件可以用于表示对于雷达传感器对于水平FOV的限制。垂直FOV条件可以用于表示对于雷达传感器对于垂直FOV的限制。
可选的,所述水平FOV条件满足与摄像机的距离越远则水平FOV值越小,与摄像机的距离越近则水平FOV值越大的条件。进一步可选的,所述水平FOV条件包括:
与所述摄像机之间的距离小于或等于第一距离阈值时,水平FOV值等于第一FOV值;
与所述摄像机之间的距离大于所述第一距离阈值且小于或等于第二距离阈值时,水平FOV值等于第二FOV值;
与所述摄像机之间的距离大于所述第二距离阈值且小于或等于第三距离阈值时,水平FOV值等于第三FOV值;
与所述摄像机之间的距离大于所述第三距离阈值时，水平FOV值等于第四FOV值；
其中,所述第二距离阈值大于所述第一距离阈值且小于所述第三距离阈值;所述第二FOV值大于所述第三FOV值且小于所述第一FOV值,所述第三FOV值大于所述第一FOV值。
进一步可选的,所述第一距离阈值等于70m,所述第二距离阈值等于160米,所述第三距离阈值等于250米。
进一步可选的,所述第一FOV值等于60°,所述第二FOV值等于8°, 所述第三FOV值等于4°,所述第四FOV值等于0°。
可选的,所述垂直FOV条件可以满足与摄像机的距离无关的条件。进一步可选的,所述垂直FOV条件包括:垂直FOV值等于10°。
需要说明的是,对于上述水平FOV条件和垂直FOV条件,本领域技术人员可以根据真实的雷达传感器的探测范围进行灵活设计,本发明对此可以不作限定。
可选的,如图6所示,当原始深度图通过图像渲染获得时,可以通过渲染深度图的步骤,得到上述深度图,并通过渲染标识图,得到上述标识图。其中,上述步骤501-步骤503,可以理解为渲染深度图的步骤。进一步的,可以根据渲染获得的深度图和标签图,采用前述实施例的相关方式,根据深度图和标签图,输出雷达传感器的探测结果。需要说明的是,上述渲染同一帧的深度图和标识图可以是基于同一个立体场景中,摄像机的同一个拍摄范围进行渲染,该拍摄范围即为雷达传感器的探测范围。
本实施例中,通过获得当前帧的原始深度图,根据原始深度图中各像素点的深度值和二维坐标,以及摄像机的参数,确定原始深度图中各像素点在摄像机的摄像机坐标系下的偏航角和俯仰角,进一步的,将原始深度图中俯仰角和偏航角不满足FOV条件的像素点的深度值设置为预设值,得到当前帧的深度图,从而得到了满足摄像机的FOV条件的深度图,即雷达传感器的探测范围对应的深度图。
图7为本发明又一实施例提供的雷达仿真方法的流程示意图,本实施例在前述实施例的基础上,主要描述了在仿真中,考虑到地杂波对于雷达传感器的探测结果的影响。如图7所示,本实施例的方法可以包括:
步骤701,根据当前帧的深度图,确定所述深度图中的多个像素点。
需要说明的是,步骤701与步骤101类似,在此不再赘述。
步骤702,确定所述当前帧中的地面点干扰所述当前帧的探测结果。
本步骤中,所述地面点用于模拟地杂波对雷达传感器的探测结果的影响,即由于地杂波的影响导致雷达传感器将地面点识别为目标点。由于地杂波的影响并不是一直存在的,因此可以通过步骤702确定当前帧的地面点干扰当前帧的探测结果。
考虑到在真实的雷达传感器中,一个地面点被识别为目标点后,当该地 面点在雷达传感器的探测范围中时,该地面点持续被识别为目标点,可选的,为了提高仿真地杂波对于探测结果影响的真实性,步骤702具体可以包括:若所述当前帧的上一帧中所述地面点干扰所述上一帧的探测结果,则所述当前帧中的所述地面点干扰所述当前帧的探测结果。
进一步可选的,当所述上一帧中的探测结果未受到地面点的干扰时,当前帧的探测结果可能受到地面点的干扰,也可能不受到地面点的干扰,此时,可以根据概率,确定当前帧的探测结果受到地面点的干扰。具体的,本实施例的方法还包括:若所述上一帧中的地面点未干扰所述上一帧的探测结果,则根据目标概率,确定当前帧中的地面点干扰所述当前帧的探测结果。
可选的,所述目标概率可以为预设的概率,或者也可以与目标帧数正相关。进一步可选的,目标帧数越多,则所述目标概率越大;所述目标帧数为持续到所述当前帧,地面点未干扰探测结果的连续的帧的数目。
例如,假设帧1至,11连续的11个帧,帧1至帧3的探测结果受到地面点的干扰,帧4中不包括帧1至帧3中干扰探测结果的地面点,帧4对应的目标帧数为1,目标概率为概率1且帧4的探测结果未受到地面点的干扰,帧5对应的目标帧数为2,目标概率为概率2且帧5的探测结果未受到地面点的干扰,帧6对应的目标帧数为3,目标概率为概率3且帧6的探测结果未受到地面点的干扰,则帧7对应的目标帧数为4,目标概率为概率4,且概率4>概率3>概率2>概率1。
进一步的,假设帧7的探测结果受到地面点的干扰,且帧8和帧9中包括帧7中干扰探测结果的地面点,帧10中不包括帧7中干扰探测结果的地面点,帧10对应的目标帧数为1,目标概率为概率5且帧10的探测结果未受到地面点的干扰,则帧11对应的目标帧数为2,目标概率为概率6,且概率6>概率5。
考虑到地杂波对于雷达传感器的影响时随机的,可选的,为了进一步提高仿真的真实性,所述地面点可以为随机选择得到的点。可以理解的是,在不考虑地杂波的随机性时,所述地面点也可以为预设的点。
步骤703,确定所述地面点与所述摄像机之间的距离。
本步骤中,可以理解的是,所述地面点为所述当前帧中,地面对应的像素点,因此,所述地面点与所述摄像机之间的距离,可以通过所述地面点的 深度值确定。
需要说明的是,步骤702和步骤703,与步骤701之间并没有先后顺序的限制。
步骤704,根据所述多个像素点中各像素点的深度值,以及所述地面点与所述摄像机之间的距离,输出雷达传感器的探测结果。
本步骤中,根据多个所述像素点中各像素点的深度值,以及所述地面点与所述摄像机之间的距离,输出的雷达传感器的探测结果可以包括各像素点与摄像机之间的距离,以及所述地面点与所述摄像机之间的距离(可以理解为雷达传感器受地杂波干扰探测到的目标点相对于雷达传感器的距离)。
可选的,步骤704中,也可以根据各像素点相对于摄像机的运动信息,输出雷达传感器的探测结果。即,输出的雷达探测结果中还可以包括:各像素点相对于摄像机的运动信息,以及地面点相对于摄像机的运动信息。由于地面点的绝对速度为0,因此可以根据摄像机的运动状态得到地面点相对于摄像机的运动信息。
本实施例中,通过确定当前帧中的地面点干扰当前帧的探测结果,确定地面点与摄像机之间的距离,根据多个像素点中各像素点的深度值,以及地面点与摄像机之间的距离,输出雷达传感器的探测结果,使得雷达传感器的探测结果中可以包括地杂波的影响,实现了地杂波影响雷达传感器的仿真,从而提高了仿真的真实性。
在上述实施例中,可选的,上述根据所述多个像素点中各像素点的深度值,输出雷达传感器的探测结果,包括:根据所述多个像素点中各像素点的深度值,按照深度值表示的距离由小至大的顺序,依次输出雷达传感器的探测结果。
本发明实施例中还提供了一种计算机可读存储介质,该计算机可读存储介质中存储有程序指令,所述程序执行时可包括如上述各方法实施例中的雷达仿真方法的部分或全部步骤。
本发明实施例提供一种计算机程序,当所述计算机程序被计算机执行时,用于实现上述任一方法实施例中的雷达仿真方法。
图8本发明一实施例提供的雷达仿真装置的结构示意图,如图8所示,本实施例的雷达仿真装置800可以包括:存储器801和处理器802;上述存 储器801和处理器802可以通过总线连接。存储器801可以包括只读存储器801和随机存取存储器801,并向处理器802提供指令和数据。存储器801的一部分还可以包括非易失性随机存取存储器801。
所述存储器801,用于存储程序代码。
所述处理器802,调用所述程序代码,当程序代码被执行时,用于执行以下操作:
在一种可能的实现中,所述处理器802用于根据所述多个像素点中各像素点的深度值,输出雷达传感器的探测结果,具体包括:
根据所述多个像素点中各像素点的深度值,以及各像素点相对于所述摄像机的运动信息,输出雷达传感器的探测结果。
在一种可能的实现中,所述处理器802还用于:
根据所述深度图对应的标签图,确定所述多个像素点中各像素点所属的对象;所述标签图用于指示所述深度图中各像素点所属的对象,所述对象与运动信息对应;
根据所述多个像素点中各像素点所属的对象,确定各像素点的运动信息。
在一种可能的实现中,所述对象与目标识别号一一对应;所述处理器802还用于:
根据所述多个像素点中各像素点所属的对象,确定各像素点的目标识别号;
所述根据所述多个像素点中各像素点的深度值,输出雷达传感器的探测结果,包括:
根据所述多个像素点中各像素点的深度值,以及各像素点的目标识别号,输出雷达传感器的探测结果。
在一种可能的实现中,所述处理器802用于根据所述多个像素点中各像素点的深度值,以及各像素点相对于所述摄像机的运动信息,输出雷达传感器的探测结果,具体包括:
根据所述多个像素点各自的深度值,确定所述多个像素点中各像素点与所述摄像机之间的距离;
将所述多个像素点中各像素点与所述摄像机之间的距离,以及各像素点相对于所述摄像机的运动信息,输出雷达传感器的探测结果。
在一种可能的实现中,所述处理器802还用于:
根据深度等级与深度值范围的对应关系,将所述当前帧的深度图中各像素点的深度值更新为所属深度等级对应的深度值范围的最大值,得到更新后的所述深度图;
所述处理器802用于根据当前帧的深度图,确定所述深度图中的多个像素点,具体包括:
根据更新后的所述深度图,确定所述深度图中的多个像素点。
在一种可能的实现中,所述处理器802还用于:
对所述当前帧的深度图中各像素点的深度值进行归一化。
在一种可能的实现中,所述处理器802还用于:
获得当前帧的原始深度图;
根据所述原始深度图中各像素点的深度值和二维坐标,以及所述摄像机的参数,确定所述原始深度图中各像素点在所述摄像机的摄像机坐标系下的偏航角和俯仰角;
根据所述原始深度图中各像素点在所述摄像机坐标系下的俯仰角和偏航角,将所述原始深度图中俯仰角和偏航角不满足所述FOV条件的像素点的深度值设置为预设值,得到所述当前帧的深度图;所述预设值用于表示与所述摄像机的距离大于所述雷达传感器的最大探测距离。
在一种可能的实现中,所述处理器802用于获得当前帧的原始深度图,具体包括:
通过图像渲染,得到当前帧的原始深度图。
在一种可能的实现中,所述处理器802还用于:以目标频率依次将多帧中的各帧作为所述当前帧,所述多帧为与一个立体场景相关的连续的多个帧。
在一种可能的实现中,所述目标频率为20赫兹。
在一种可能的实现中,所述处理器802还用于:
确定当前帧中的地面点干扰所述当前帧的探测结果;
确定所述地面点与所述摄像机之间的距离;
所述处理器802,用于根据所述多个像素点中各像素点的深度值,输出雷达传感器的探测结果,具体包括:
根据所述多个像素点中各像素点的深度值,以及所述地面点与所述摄像 机之间的距离,输出雷达传感器的探测结果。
在一种可能的实现中,所述处理器802用于确定当前帧中的地面点干扰所述当前帧的探测结果,具体包括:
若所述当前帧的上一帧中的所述地面点干扰所述上一帧的探测结果,则所述当前帧中的所述地面点干扰所述当前帧的探测结果。
在一种可能的实现中,所述处理器802还用于:
若所述上一帧中的地面点未干扰所述上一帧的探测结果,则根据目标概率,确定当前帧中的地面点干扰所述当前帧的探测结果。
在一种可能的实现中,目标帧数越多,则所述目标概率越大,所述目标帧数为持续到所述当前帧,地面点未干扰探测结果的连续的帧的数目。
在一种可能的实现中,所述地面点为随机选择得到。
在一种可能的实现中,所述处理器802用于根据所述多个像素点中各像素点的深度值,输出雷达传感器的探测结果,具体包括:
根据所述多个像素点中各像素点的深度值,按照深度值表示的距离由小至大的顺序,依次输出雷达传感器的探测结果。
在一种可能的实现中，所述FOV条件包括水平FOV条件和垂直FOV条件。
在一种可能的实现中,所述水平FOV条件包括:
与所述摄像机之间的距离小于或等于第一距离阈值时,水平FOV值等于第一FOV值;
与所述摄像机之间的距离大于所述第一距离阈值且小于或等于第二距离阈值时,水平FOV值等于第二FOV值;
与所述摄像机之间的距离大于所述第二距离阈值且小于或等于第三距离阈值时,水平FOV值等于第三FOV值;
与所述摄像机之间的距离大于所述第三距离阈值时，水平FOV值等于第四FOV值；
其中,所述第二距离阈值大于所述第一距离阈值且小于所述第三距离阈值;所述第二FOV值大于所述第三FOV值且小于所述第一FOV值,所述第三FOV值大于所述第一FOV值。
在一种可能的实现中,所述雷达传感器为毫米波雷达传感器。
本实施例提供的雷达仿真装置,可以用于执行本发明上述方法实施例的 技术方案,其实现原理和技术效果类似,此处不再赘述。
本领域普通技术人员可以理解:实现上述各方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成。前述的程序可以存储于一计算机可读取存储介质中。该程序在执行时,执行包括上述各方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上各实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述各实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。

Claims (42)

  1. 一种雷达仿真方法,其特征在于,包括:
    根据当前帧的深度图,确定所述深度图中的多个像素点,所述深度图中像素点的深度值表示所述像素点与摄像机之间的距离,且所述多个像素点的深度值表示的距离,小于所述深度图中其他像素点的深度值表示的距离;
    根据所述多个像素点中各像素点的深度值,输出雷达传感器的探测结果;
    其中,所述深度图满足所述摄像机的视场角FOV条件,所述摄像机的FOV条件与所述雷达传感器的探测范围相对应。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述多个像素点中各像素点的深度值,输出雷达传感器的探测结果,包括:
    根据所述多个像素点中各像素点的深度值,以及各像素点相对于所述摄像机的运动信息,输出雷达传感器的探测结果。
  3. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    根据所述深度图对应的标签图,确定所述多个像素点中各像素点所属的对象;所述标签图用于指示所述深度图中各像素点所属的对象,所述对象与运动信息对应;
    根据所述多个像素点中各像素点所属的对象,确定各像素点的运动信息。
  4. 根据权利要求3所述的方法,其特征在于,所述对象与目标识别号一一对应;所述方法还包括:
    根据所述多个像素点中各像素点所属的对象,确定各像素点的目标识别号;
    所述根据所述多个像素点中各像素点的深度值,输出雷达传感器的探测结果,包括:
    根据所述多个像素点中各像素点的深度值,以及各像素点的目标识别号,输出雷达传感器的探测结果。
  5. 根据权利要求2-4任一项所述的方法,其特征在于,所述根据所述多个像素点中各像素点的深度值,以及各像素点相对于所述摄像机的运动信息,输出雷达传感器的探测结果,包括:
    根据所述多个像素点各自的深度值,确定所述多个像素点中各像素点与所述摄像机之间的距离;
    将所述多个像素点中各像素点与所述摄像机之间的距离,以及各像素点相对于所述摄像机的运动信息,输出雷达传感器的探测结果。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,所述根据当前帧的深度图,确定所述深度图中的多个像素点之前,还包括:
    根据深度等级与深度值范围的对应关系,将所述当前帧的深度图中各像素点的深度值更新为所属深度等级对应的深度值范围的最大值,得到更新后的所述深度图;
    所述根据当前帧的深度图,确定所述深度图中的多个像素点,包括:
    根据更新后的所述深度图,确定所述深度图中的多个像素点。
  7. 根据权利要求6所述的方法,其特征在于,所述根据深度等级与深度值范围的对应关系,将所述当前帧的深度图中各像素点的深度值更新为所属深度等级对应的深度值范围的最大值,得到更新后的所述深度图之前,还包括:
    对所述当前帧的深度图中各像素点的深度值进行归一化。
  8. 根据权利要求1-7任一项所述的方法,其特征在于,所述根据当前帧的深度图,确定所述深度图中的多个像素点之前,还包括:
    获得当前帧的原始深度图;
    根据所述原始深度图中各像素点的深度值和二维坐标,以及所述摄像机的参数,确定所述原始深度图中各像素点在所述摄像机的摄像机坐标系下的偏航角和俯仰角;
    根据所述原始深度图中各像素点在所述摄像机坐标系下的俯仰角和偏航角,将所述原始深度图中俯仰角和偏航角不满足所述FOV条件的像素点的深度值设置为预设值,得到所述当前帧的深度图;所述预设值用于表示与所述摄像机的距离大于所述雷达传感器的最大探测距离。
  9. 根据权利要求8所述的方法,其特征在于,所述获得当前帧的原始深度图,包括:
    通过图像渲染,得到当前帧的原始深度图。
  10. The method according to any one of claims 1 to 9, wherein each of multiple frames is taken in turn as the current frame at a target frequency, and the multiple frames are consecutive frames related to one three-dimensional scene.
  11. The method according to claim 10, wherein the target frequency is 20 Hz.
  12. The method according to claim 10 or 11, further comprising:
    determining that a ground point in the current frame interferes with the detection result of the current frame;
    determining a distance between the ground point and the camera;
    wherein the outputting a detection result of a radar sensor according to the depth value of each of the plurality of pixels comprises:
    outputting the detection result of the radar sensor according to the depth value of each of the plurality of pixels and the distance between the ground point and the camera.
  13. The method according to claim 12, wherein the determining that a ground point in the current frame interferes with the detection result of the current frame comprises:
    if the ground point in the previous frame of the current frame interferes with the detection result of the previous frame, determining that the ground point in the current frame interferes with the detection result of the current frame.
  14. The method according to claim 13, further comprising:
    if the ground point in the previous frame does not interfere with the detection result of the previous frame, determining, according to a target probability, that the ground point in the current frame interferes with the detection result of the current frame.
  15. The method according to claim 14, wherein the larger the target number of frames, the larger the target probability, and the target number of frames is the number of consecutive frames, up to the current frame, in which the ground point has not interfered with the detection result.
  16. The method according to any one of claims 12 to 15, wherein the ground point is selected at random.
  17. The method according to any one of claims 1 to 16, wherein the outputting a detection result of a radar sensor according to the depth value of each of the plurality of pixels comprises:
    outputting the detection results of the radar sensor one by one, in order of increasing distance indicated by the depth values of the plurality of pixels.
  18. The method according to any one of claims 1 to 17, wherein the FOV condition comprises a horizontal FOV condition and a vertical FOV condition.
  19. The method according to claim 18, wherein the horizontal FOV condition comprises:
    when the distance to the camera is less than or equal to a first distance threshold, the horizontal FOV value is equal to a first FOV value;
    when the distance to the camera is greater than the first distance threshold and less than or equal to a second distance threshold, the horizontal FOV value is equal to a second FOV value;
    when the distance to the camera is greater than the second distance threshold and less than or equal to a third distance threshold, the horizontal FOV value is equal to a third FOV value;
    when the distance to the camera is greater than the third distance threshold, the horizontal FOV value is equal to a fourth FOV value;
    wherein the second distance threshold is greater than the first distance threshold and less than the third distance threshold; the second FOV value is greater than the third FOV value and less than the first FOV value, and the third FOV value is greater than the fourth FOV value.
  20. The method according to any one of claims 1 to 19, wherein the radar sensor is a millimeter wave radar sensor.
  21. A radar simulation apparatus, comprising a processor and a memory;
    wherein the memory is configured to store program code;
    and the processor is configured to invoke the program code and, when the program code is executed, to perform the following operations:
    determining a plurality of pixels in a depth map of a current frame according to the depth map, wherein a depth value of a pixel in the depth map indicates a distance between the pixel and a camera, and the distances indicated by the depth values of the plurality of pixels are smaller than the distances indicated by the depth values of the other pixels in the depth map;
    outputting a detection result of a radar sensor according to the depth value of each of the plurality of pixels;
    wherein the depth map satisfies a field of view (FOV) condition of the camera, and the FOV condition of the camera corresponds to a detection range of the radar sensor.
  22. The apparatus according to claim 21, wherein the processor being configured to output a detection result of a radar sensor according to the depth value of each of the plurality of pixels specifically comprises:
    outputting the detection result of the radar sensor according to the depth value of each of the plurality of pixels and motion information of each pixel relative to the camera.
  23. The apparatus according to claim 22, wherein the processor is further configured to:
    determine, according to a label map corresponding to the depth map, an object to which each of the plurality of pixels belongs, wherein the label map indicates the object to which each pixel in the depth map belongs, and the object corresponds to motion information;
    determine the motion information of each pixel according to the object to which each of the plurality of pixels belongs.
  24. The apparatus according to claim 23, wherein the objects are in one-to-one correspondence with target identification numbers, and the processor is further configured to:
    determine a target identification number of each pixel according to the object to which each of the plurality of pixels belongs;
    wherein the outputting a detection result of a radar sensor according to the depth value of each of the plurality of pixels comprises:
    outputting the detection result of the radar sensor according to the depth value of each of the plurality of pixels and the target identification number of each pixel.
  25. The apparatus according to any one of claims 22 to 24, wherein the processor being configured to output the detection result of the radar sensor according to the depth value of each of the plurality of pixels and the motion information of each pixel relative to the camera specifically comprises:
    determining, according to the respective depth values of the plurality of pixels, a distance between each of the plurality of pixels and the camera;
    outputting the distance between each of the plurality of pixels and the camera and the motion information of each pixel relative to the camera as the detection result of the radar sensor.
  26. The apparatus according to any one of claims 21 to 25, wherein the processor is further configured to:
    update, according to a correspondence between depth levels and depth value ranges, the depth value of each pixel in the depth map of the current frame to the maximum value of the depth value range corresponding to the depth level to which the pixel belongs, to obtain an updated depth map;
    wherein the processor being configured to determine a plurality of pixels in the depth map according to the depth map of the current frame specifically comprises:
    determining the plurality of pixels in the depth map according to the updated depth map.
  27. The apparatus according to claim 26, wherein the processor is further configured to:
    normalize the depth value of each pixel in the depth map of the current frame.
  28. The apparatus according to any one of claims 21 to 27, wherein the processor is further configured to:
    obtain an original depth map of the current frame;
    determine, according to the depth value and the two-dimensional coordinates of each pixel in the original depth map and parameters of the camera, a yaw angle and a pitch angle of each pixel in the original depth map in a camera coordinate system of the camera;
    set, according to the pitch angle and the yaw angle of each pixel in the original depth map in the camera coordinate system, the depth value of each pixel in the original depth map whose pitch angle and yaw angle do not satisfy the FOV condition to a preset value, to obtain the depth map of the current frame, wherein the preset value indicates a distance to the camera that is greater than a maximum detection distance of the radar sensor.
  29. The apparatus according to claim 28, wherein the processor being configured to obtain an original depth map of the current frame specifically comprises:
    obtaining the original depth map of the current frame through image rendering.
  30. The apparatus according to any one of claims 21 to 29, wherein the processor is further configured to: take, at a target frequency, each of multiple frames in turn as the current frame, wherein the multiple frames are consecutive frames related to one three-dimensional scene.
  31. The apparatus according to claim 30, wherein the target frequency is 20 Hz.
  32. The apparatus according to claim 30 or 31, wherein the processor is further configured to:
    determine that a ground point in the current frame interferes with the detection result of the current frame;
    determine a distance between the ground point and the camera;
    wherein the processor being configured to output the detection result of the radar sensor according to the depth value of each of the plurality of pixels specifically comprises:
    outputting the detection result of the radar sensor according to the depth value of each of the plurality of pixels and the distance between the ground point and the camera.
  33. The apparatus according to claim 32, wherein the processor being configured to determine that a ground point in the current frame interferes with the detection result of the current frame specifically comprises:
    if the ground point in the previous frame of the current frame interferes with the detection result of the previous frame, determining that the ground point in the current frame interferes with the detection result of the current frame.
  34. The apparatus according to claim 33, wherein the processor is further configured to:
    if the ground point in the previous frame does not interfere with the detection result of the previous frame, determine, according to a target probability, that the ground point in the current frame interferes with the detection result of the current frame.
  35. The apparatus according to claim 34, wherein the larger the target number of frames, the larger the target probability, and the target number of frames is the number of consecutive frames, up to the current frame, in which the ground point has not interfered with the detection result.
  36. The apparatus according to any one of claims 32 to 35, wherein the ground point is selected at random.
  37. The apparatus according to any one of claims 21 to 36, wherein the processor being configured to output a detection result of a radar sensor according to the depth value of each of the plurality of pixels specifically comprises:
    outputting the detection results of the radar sensor one by one, in order of increasing distance indicated by the depth values of the plurality of pixels.
  38. The apparatus according to any one of claims 21 to 37, wherein the FOV condition comprises a horizontal FOV condition and a vertical FOV condition.
  39. The apparatus according to claim 38, wherein the horizontal FOV condition comprises:
    when the distance to the camera is less than or equal to a first distance threshold, the horizontal FOV value is equal to a first FOV value;
    when the distance to the camera is greater than the first distance threshold and less than or equal to a second distance threshold, the horizontal FOV value is equal to a second FOV value;
    when the distance to the camera is greater than the second distance threshold and less than or equal to a third distance threshold, the horizontal FOV value is equal to a third FOV value;
    when the distance to the camera is greater than the third distance threshold, the horizontal FOV value is equal to a fourth FOV value;
    wherein the second distance threshold is greater than the first distance threshold and less than the third distance threshold; the second FOV value is greater than the third FOV value and less than the first FOV value, and the third FOV value is greater than the fourth FOV value.
  40. The apparatus according to any one of claims 21 to 39, wherein the radar sensor is a millimeter wave radar sensor.
  41. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, the computer program contains at least one piece of code, and the at least one piece of code is executable by a computer to control the computer to perform the radar simulation method according to any one of claims 1 to 20.
  42. A computer program which, when executed by a computer, implements the radar simulation method according to any one of claims 1 to 20.
PCT/CN2018/124822 2018-12-28 2018-12-28 Radar simulation method and device WO2020133206A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/124822 WO2020133206A1 (zh) 2018-12-28 2018-12-28 Radar simulation method and device
CN201880072069.1A CN111316119A (zh) 2018-12-28 2018-12-28 Radar simulation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/124822 WO2020133206A1 (zh) 2018-12-28 2018-12-28 Radar simulation method and device

Publications (1)

Publication Number Publication Date
WO2020133206A1 true WO2020133206A1 (zh) 2020-07-02

Family

ID=71127383

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/124822 WO2020133206A1 (zh) 2018-12-28 2018-12-28 Radar simulation method and device

Country Status (2)

Country Link
CN (1) CN111316119A (zh)
WO (1) WO2020133206A1 (zh)


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8330804B2 (en) * 2010-05-12 2012-12-11 Microsoft Corporation Scanned-beam depth mapping to 2D image
CN103456038A (zh) * 2013-08-19 2013-12-18 Huazhong University of Science and Technology Three-dimensional scene reconstruction method for underground environments
US10061029B2 (en) * 2015-01-06 2018-08-28 Samsung Electronics Co., Ltd. Correction of depth images from T-O-F 3D camera with electronic-rolling-shutter for light modulation changes taking place during light integration
US10282591B2 (en) * 2015-08-24 2019-05-07 Qualcomm Incorporated Systems and methods for depth map sampling
CN107766847B (zh) * 2017-11-21 2020-10-30 Hisense Group Co., Ltd. Lane line detection method and device
CN107966693B (zh) * 2017-12-05 2021-08-13 成都合纵连横数字科技有限公司 Vehicle-mounted lidar simulation method based on depth rendering
CN108280401B (zh) * 2017-12-27 2020-04-07 达闼科技(北京)有限公司 Road surface detection method and device, cloud server, and computer program product
CN108564615B (zh) * 2018-04-20 2022-04-29 驭势(上海)汽车科技有限公司 Method, device, system, and storage medium for simulating lidar detection

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010031068A1 (en) * 2000-04-14 2001-10-18 Akihiro Ohta Target detection system using radar and image processing
US20020118352A1 (en) * 2001-02-23 2002-08-29 Japan Atomic Energy Research Institute Fast gate scanning three-dimensional laser radar apparatus
CN102168954A (zh) * 2011-01-14 2011-08-31 Zhejiang University Method for measuring depth, depth field, and object size based on a monocular camera
CN104965202A (zh) * 2015-06-18 2015-10-07 Chery Automobile Co., Ltd. Obstacle detection method and device
CN105261039A (zh) * 2015-10-14 2016-01-20 Shandong University Adaptive target tracking algorithm based on depth images
CN105607635A (zh) * 2016-01-05 2016-05-25 东莞市松迪智能机器人科技有限公司 Panoramic optical vision navigation control system for an automated guided vehicle, and omnidirectional automated guided vehicle

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113820694A (zh) * 2021-11-24 2021-12-21 Tencent Technology (Shenzhen) Co., Ltd. Simulated ranging method, related apparatus, device, and storage medium
CN113820694B (zh) * 2021-11-24 2022-03-01 Tencent Technology (Shenzhen) Co., Ltd. Simulated ranging method, related apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN111316119A (zh) 2020-06-19

Similar Documents

Publication Publication Date Title
US10970864B2 (en) Method and apparatus for recovering point cloud data
US11455565B2 (en) Augmenting real sensor recordings with simulated sensor data
US11487988B2 (en) Augmenting real sensor recordings with simulated sensor data
CN111324115B (zh) Obstacle position detection fusion method and apparatus, electronic device, and storage medium
US11313951B2 (en) Ground detection method, electronic device, and vehicle
CN111563450B (zh) Data processing method, apparatus, device, and storage medium
WO2020133230A1 (zh) Radar simulation method, apparatus, and system
US10140722B2 (en) Distance measurement apparatus, distance measurement method, and non-transitory computer-readable storage medium
CN111339876B (zh) Method and apparatus for identifying the type of each region in a scene
US20210124960A1 (en) Object recognition method and object recognition device performing the same
CN113203409A (zh) Navigation map construction method for a mobile robot in a complex indoor environment
US11092690B1 (en) Predicting lidar data using machine learning
EP4206723A1 (en) Ranging method and device, storage medium, and lidar
CN111354022A (zh) Target tracking method and system based on kernelized correlation filtering
CN115147333A (zh) Target detection method and device
CN115984637A (zh) Temporally fused point cloud 3D target detection method, system, terminal, and medium
CN113820694B (zh) Simulated ranging method, related apparatus, device, and storage medium
WO2020133206A1 (zh) Radar simulation method and device
CN112612714B (zh) Security testing method and device for infrared target detectors
JP2018116004A (ja) Data compression device, control method, program, and storage medium
CN109035390B (zh) Lidar-based modeling method and device
CN113920273B (zh) Image processing method and apparatus, electronic device, and storage medium
CN116184426A (zh) Direct time-of-flight ranging method and apparatus, electronic device, and readable storage medium
CN115407302A (zh) Lidar pose estimation method and apparatus, and electronic device
CN116047537B (zh) Road information generation method and system based on lidar

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18944233

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18944233

Country of ref document: EP

Kind code of ref document: A1