WO2022217988A1 - Sensor configuration scheme determination method and apparatus, computer device, storage medium, and program - Google Patents

Sensor configuration scheme determination method and apparatus, computer device, storage medium, and program

Info

Publication number
WO2022217988A1
Authority
WO
WIPO (PCT)
Prior art keywords
sensor configuration
configuration scheme
sensor
screened
measurement data
Application number
PCT/CN2022/071455
Other languages
English (en)
French (fr)
Inventor
马涛
刘知正
李怡康
Original Assignee
上海商汤智能科技有限公司
Application filed by 上海商汤智能科技有限公司
Publication of WO2022217988A1

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F 30/00 Computer-aided design [CAD] > G06F 30/20 Design optimisation, verification or simulation
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 1/00 General purpose image data processing > G06T 1/0007 Image acquisition
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 7/00 Image analysis > G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/10 Image acquisition modality > G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/10 Image acquisition modality > G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/10 Image acquisition modality > G06T 2207/10044 Radar image

Definitions

  • The present disclosure relates to the technical field of intelligent driving, and in particular to a method, apparatus, computer device, storage medium, and program for determining a sensor configuration scheme.
  • During driving, an autonomous vehicle relies mainly on the various sensors installed on it, such as radar and cameras, for vehicle control; for example, the detection results of the sensors are used to identify obstacles.
  • The selection and installation positions of the sensors directly affect the accuracy of obstacle recognition. In the related art, the positions and models of the sensors installed on an autonomous vehicle are generally determined by relying on human experience and rules; this approach requires repeated adjustment and makes sensor deployment inefficient.
  • Embodiments of the present disclosure provide at least a method, apparatus, computer device, storage medium, and program for determining a sensor configuration scheme.
  • Embodiments of the present disclosure provide a method for determining a sensor configuration scheme, including: acquiring a plurality of sensor configuration schemes to be screened; determining simulated measurement data corresponding to each sensor configuration scheme to be screened, where the simulated measurement data corresponding to a sensor configuration scheme to be screened is the data of a target object measured by the sensors in that scheme; determining, based on the simulated measurement data corresponding to each sensor configuration scheme to be screened, the conditional entropy of that scheme, where the conditional entropy of a sensor configuration scheme to be screened is used to characterize the stability of the measurement results of the sensors in that scheme under the simulated measurement data corresponding to the scheme; and determining a target sensor configuration scheme from the plurality of sensor configuration schemes to be screened based on the determined conditional entropies. In this way, each sensor configuration scheme can be judged by a quantitative indicator, and the optimal sensor configuration scheme can then be selected by means of the conditional entropy of each scheme.
  • Embodiments of the present disclosure also provide a device for determining a sensor configuration scheme, including:
  • an acquisition module, configured to acquire a plurality of sensor configuration schemes to be screened;
  • a first determination module, configured to determine the simulated measurement data corresponding to each sensor configuration scheme to be screened, where the simulated measurement data corresponding to a sensor configuration scheme to be screened is the data of a target object measured by the sensors in that scheme;
  • a second determination module, configured to determine, based on the simulated measurement data corresponding to each sensor configuration scheme to be screened, the conditional entropy of that scheme, where the conditional entropy of a sensor configuration scheme to be screened is used to characterize the stability of the measurement results of the sensors in that scheme under the simulated measurement data corresponding to the scheme;
  • a selection module, configured to determine a target sensor configuration scheme from the plurality of sensor configuration schemes to be screened based on the determined conditional entropy of each sensor configuration scheme to be screened.
  • An embodiment of the present disclosure further provides a computer device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate with each other through the bus, and when the machine-readable instructions are executed by the processor, the steps of the method for determining a sensor configuration scheme described in any of the foregoing embodiments are performed.
  • Embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the method for determining a sensor configuration scheme described in any of the foregoing embodiments are performed.
  • Embodiments of the present disclosure also provide a computer program that includes computer-readable code; when the computer-readable code is run in an electronic device, a processor of the electronic device performs the steps of the method for determining a sensor configuration scheme described in any of the foregoing embodiments.
  • FIG. 1 shows a schematic flowchart of a method for determining a sensor configuration scheme provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of a system architecture of a method for determining a sensor configuration scheme provided by an embodiment of the present disclosure
  • FIG. 3 shows a schematic flowchart of a method for obtaining a plurality of sensor configuration schemes to be screened in the method for determining a sensor configuration scheme provided by an embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of target object voxelization provided by an embodiment of the present disclosure
  • FIG. 5 shows a schematic structural diagram of an apparatus for determining a sensor configuration scheme provided by an embodiment of the present disclosure
  • FIG. 6 shows a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
  • In the related art, a sensor configuration scheme is generally determined based on human experience or rules; the rules may be, for example, to minimize blind spots and to increase the perception range.
  • However, experience and rules cannot be converted into concrete data, so the individual sensor configuration schemes cannot be judged intuitively, which makes sensor deployment inefficient.
  • The execution subject of the method for determining a sensor configuration scheme provided by the embodiments of the present disclosure is generally a computer device with a certain computing capability. The computer device includes, for example, a terminal device, a server, or another processing device; the terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, and so on.
  • In some possible implementations, the method for determining a sensor configuration scheme may be implemented by a processor invoking computer-readable instructions stored in a memory.
  • FIG. 1 is a flowchart of a method for determining a sensor configuration scheme provided by an embodiment of the present disclosure; the method includes steps 101 to 104:
  • Step 101: Acquire a plurality of sensor configuration schemes to be screened.
  • Step 102: Determine the simulated measurement data corresponding to each sensor configuration scheme to be screened; the simulated measurement data corresponding to a sensor configuration scheme to be screened is the data of a target object measured by the sensors in that scheme.
  • Step 103: Determine, based on the simulated measurement data corresponding to each sensor configuration scheme to be screened, the conditional entropy of that scheme; the conditional entropy of a sensor configuration scheme to be screened is used to characterize the stability of the measurement results of the sensors in that scheme under the simulated measurement data corresponding to the scheme.
  • Step 104: Determine a target sensor configuration scheme from the plurality of sensor configuration schemes to be screened based on the determined conditional entropy of each sensor configuration scheme to be screened.
  • In this way, a plurality of sensor configuration schemes to be screened can be acquired; different schemes correspond to different simulated measurement data, and the conditional entropy of each scheme can then be determined from its simulated measurement data.
  • Conditional entropy can be understood as the stability of one variable under the condition that another random variable is known. Applied to this solution, the conditional entropy of a sensor configuration scheme is the stability of the measurement results of the sensors in that scheme under the simulated measurement data corresponding to the scheme.
  • Since the simulated measurement data indirectly characterizes the sensor configuration scheme, the conditional entropy of a scheme can also be understood as the stability of the sensor measurement results under different sensor configuration schemes. In this way, each sensor configuration scheme can be judged by a quantitative indicator, and the optimal scheme can then be selected by means of the conditional entropy of each scheme; a minimal selection sketch is given below.
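  • For example, a minimal sketch of the selection loop of steps 101 to 104; the helper names simulate_measurements and conditional_entropy stand in for the steps detailed later in this document and are not functions defined by the disclosure:

```python
# Minimal sketch: score every candidate scheme by its conditional entropy and
# keep the scheme with the smallest value.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class SensorConfig:
    """One candidate scheme: per-sensor installation info plus intrinsics."""
    name: str
    sensors: list  # e.g. [{"type": "lidar", "position": (x, y, z), ...}, ...]

def select_target_config(
    candidates: Sequence[SensorConfig],
    simulate_measurements: Callable[[SensorConfig], list],
    conditional_entropy: Callable[[list], float],
) -> SensorConfig:
    scored = []
    for cfg in candidates:                                       # step 101: candidates
        measurements = simulate_measurements(cfg)                # step 102: simulated data
        scored.append((conditional_entropy(measurements), cfg))  # step 103: entropy
    return min(scored, key=lambda pair: pair[0])[1]              # step 104: smallest entropy wins
```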
  • FIG. 2 shows a schematic diagram of a system architecture to which the method for determining a sensor configuration scheme of an embodiment of the present disclosure can be applied. As shown in FIG. 2, the system architecture includes a plurality of acquisition terminals 201 for sensor configuration schemes to be screened, a network 202, and a control terminal 203.
  • To support an exemplary application, the acquisition terminals 201 and the control terminal 203 establish a communication connection through the network 202, and the acquisition terminals 201 report the plurality of sensor configuration schemes to be screened to the control terminal 203 through the network 202.
  • In response to the plurality of sensor configuration schemes to be screened, the control terminal 203 determines the simulated measurement data corresponding to each scheme; next, based on the simulated measurement data, it determines the conditional entropy of the scheme corresponding to each set of simulated measurement data; then, based on the determined conditional entropy of each scheme to be screened, it determines the target sensor configuration scheme from the plurality of schemes to be screened. Finally, the control terminal 203 uploads the target sensor configuration scheme to the network 202 and sends it to the acquisition terminals 201 through the network 202.
  • As an example, the acquisition terminals 201 may include an image acquisition device, and the control terminal 203 may include a vision processing device or a remote server with visual information processing capability. The network 202 may use a wired or a wireless connection. When the control terminal 203 is a vision processing device, the acquisition terminals 201 may communicate with it through a wired connection, for example exchanging data through a bus; when the control terminal 203 is a remote server, the acquisition terminals 201 may exchange data with the remote server through a wireless network.
  • Alternatively, in some scenarios, the acquisition terminals 201 may be vision processing devices with a video capture module, or hosts with a camera. In that case, the method of the embodiments of the present disclosure may be executed by the acquisition terminals 201 themselves, and the system architecture need not include the network 202 and the control terminal 203.
  • For step 101: the sensor configuration scheme may be a sensor deployment scheme in an automatic driving device, including sensor installation information and sensor internal parameter information.
  • The sensor installation information includes the installation position of each sensor in a predefined perception space (for example, its three-dimensional coordinates in the perception space) and its installation orientation (for example, a rotation matrix); the perception space is the range of the area around the automatic driving device that needs to be perceived.
  • In practice, only objects around the automatic driving device affect its driving, and the conditional entropy is used to reflect the stability of the sensors' measurement results under the simulated measurement data; therefore, when determining the conditional entropy, only the conditional entropy within the perception space needs to be determined.
  • When setting the perception space, the center of the automatic driving device can be taken as the intersection of the body diagonals, and the perception space corresponding to the automatic driving device can be set according to a preset length, width, and height. Automatic driving devices of different sizes have different perception requirements: a low car does not need to perceive objects in higher space (such as signs), whereas a taller vehicle does; a device with an automatic parking function needs to pay as much attention as possible to objects in the space behind the vehicle, whereas a device without automatic parking should pay as much attention as possible to objects in front of and on both sides of the vehicle. Therefore, to meet the perception requirements of different automatic driving devices and to reduce the amount of computation when selecting a sensor configuration scheme, perception spaces of different sizes can be set for automatic driving devices of different sizes. For example, the length, width, and height of the perception space may be proportional to the length, width, and height of the automatic driving device.
  • Alternatively, a perception space of the same size may be set for all automatic driving devices, with different weights assigned to different positions in the perception space; a weight represents how important it is for the automatic driving device to detect an object appearing at that position. For example, for an automatic driving device without an automatic parking function, the weight of the area behind the vehicle may be set lower. A small sketch of such a weighted perception space follows.
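  • A small illustrative sketch of a perception space sized from the vehicle and carrying per-cell weights; the proportionality factor, cell size, and weight values are assumptions for illustration, not values given by the disclosure:

```python
import numpy as np

def make_perception_space(vehicle_lwh, scale=4.0, cell=0.5, auto_parking=True):
    """Grid of cell centres around the vehicle plus a weight per cell."""
    L, W, H = (scale * d for d in vehicle_lwh)        # space proportional to the vehicle size
    xs = np.arange(-L / 2, L / 2, cell)
    ys = np.arange(-W / 2, W / 2, cell)
    zs = np.arange(0.0, H, cell)
    centres = np.array([(x, y, z) for x in xs for y in ys for z in zs])
    weights = np.ones(len(centres))
    if not auto_parking:
        weights[centres[:, 0] < -vehicle_lwh[0] / 2] = 0.3   # de-emphasise the area behind the car
    return centres, weights

centres, weights = make_perception_space((4.5, 1.8, 1.5), auto_parking=False)
```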
  • When the sensor includes a lidar, the sensor internal parameter information may include the vertical angular resolution and the horizontal angular resolution; when the sensor includes an image acquisition device, the sensor internal parameter information may include the internal parameter matrix of the image acquisition device. The image acquisition device may be, for example, a camera.
  • Different types of sensors correspond to different kinds of simulated measurement data, so that sensor configuration schemes can be determined for various types of sensors through their respective simulated measurement data.
  • In some embodiments of the present disclosure, acquiring the plurality of sensor configuration schemes to be screened may follow the method shown in FIG. 3, which includes the following steps:
  • Step 301: Acquire the initial installation positions of a plurality of sensors.
  • Step 302: Offset the initial installation position of each sensor according to a set offset to obtain a plurality of installation positions to be screened.
  • Step 303: Combine the installation positions to be screened of the different sensors to obtain the plurality of sensor configuration schemes to be screened.
  • Here, the initial installation position may refer to an approximate installation position. For example, a lidar needs to be installed on the roof of an autonomous vehicle, but the precise optimal installation position cannot be determined in advance; any position on the roof can therefore be set as the initial installation position, a position search is performed through step 302 to obtain a plurality of installation positions to be screened, and the optimal installation position is then determined.
  • The offset may refer to an offset step size, and different types of sensors may use different offsets. When there is only one offset, each offset may be applied from the initial installation position, in a different direction each time, and each offset of the initial installation position yields one installation position to be screened.
  • In some embodiments of the present disclosure, when there is only one offset, the (N+1)-th offset may instead be performed on the basis of the installation position obtained by the N-th offset: the first offset is based on the initial installation position, the second offset is based on the installation position obtained by the first offset, and so on, until a preset number of offsets has been completed; N is a positive integer greater than or equal to 1.
  • When there are multiple offsets, they may first be sorted in descending order; the initial installation position is then offset based on the largest offset (for example, a preset number of times) to obtain multiple intermediate offset installation positions. These intermediate positions are then used as initial installation positions and offset using the second offset in the sorted result, and so on, until an offset has been performed based on every offset. In this way, if there are m offsets and each is applied the same number of times n, n*m installation positions to be screened are finally obtained.
  • To speed up the computation, after offsetting based on any one offset, the multiple sensor configuration schemes obtained from that round of offsets can be determined, and the installation position of the scheme with the smallest conditional entropy among them is used as the new initial installation position; the above process is then repeated until the sensor configuration schemes corresponding to the smallest offset are obtained.
  • In that case, acquiring the plurality of sensor configuration schemes to be screened in step 101 may be acquiring the sensor configuration schemes corresponding to the smallest offset.
  • In some embodiments of the present disclosure, the multiple installation positions to be screened of the different sensors may also be combined and then further combined with at least one kind of preconfigured sensor internal parameter information to obtain the plurality of sensor configuration schemes to be screened.
  • The preconfigured sensor internal parameter information may refer to the internal parameters of different sensor models; for example, for lidar there may be 64-beam radars and 32-beam radars, and sensors of different models correspond to different internal parameter information.
  • When combining the installation positions to be screened of different sensors, the different sensors include sensors of different categories, such as radars and cameras, and also sensors of the same category with different internal parameter information, such as cameras with different intrinsics or radars with different intrinsics.
  • In this way, an automatic search over sensor configuration schemes can be realized, and the optimal configuration scheme can then be selected through the conditional entropies corresponding to the different schemes found by the search; a candidate-generation sketch is given below.
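  • A candidate-generation sketch for steps 301 to 303 under simple assumptions (one offset step per sensor, a fixed set of offset directions); the names and numeric values are illustrative:

```python
import itertools
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

def candidate_positions(initial: Vec3, step: float, times: int) -> List[Vec3]:
    """Step 302: offset the initial installation position along +/- x, y, z."""
    directions = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    positions = [initial]
    for n in range(1, times + 1):
        for dx, dy, dz in directions:
            positions.append((initial[0] + n * step * dx,
                              initial[1] + n * step * dy,
                              initial[2] + n * step * dz))
    return positions

def combine_schemes(per_sensor: Dict[str, List[Vec3]]) -> List[Dict[str, Vec3]]:
    """Step 303: one scheme per combination of candidate positions across sensors."""
    names = list(per_sensor)
    combos = itertools.product(*(per_sensor[n] for n in names))
    return [dict(zip(names, combo)) for combo in combos]

schemes = combine_schemes({
    "roof_lidar": candidate_positions((0.0, 0.0, 1.8), step=0.1, times=2),
    "front_camera": candidate_positions((1.5, 0.0, 1.2), step=0.05, times=2),
})
```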
  • For step 102: the simulated measurement data is the data of the target object measured by the sensors in the sensor configuration scheme.
  • In a case where the sensor includes a lidar, the simulated measurement data includes the number of point cloud points reflected by the target object; in a case where the sensor includes an image acquisition device, the simulated measurement data includes the area occupied by the target object in the image captured by the image acquisition device.
  • Here, the target object may refer to a preset object that needs to be perceived in the perception space, and may include, for example, vehicles and pedestrians.
  • To make it easier to count the simulated measurement data, in some embodiments of the present disclosure the target object in the predefined perception space may first be voxelized to obtain multiple voxels corresponding to the target object; then, for each sensor configuration scheme to be screened, the simulated measurement data corresponding to that scheme is determined based on the scheme and the position coordinates of the multiple voxels of the target object in the perception space.
  • Voxelizing the target object can be understood as dividing the surface of the target object into cubes of a preset size; an exemplary voxelization process is shown in FIG. 4.
  • After the target object is voxelized, the simulated measurement data can be determined directly from the position coordinates of the target object's voxels, which speeds up the computation of the simulated measurement data. A minimal voxelization sketch follows.
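  • A minimal voxelization sketch, assuming the target surface is given as sample points: snap the points to a regular grid of cubes of size voxel_size and keep the unique cube centres. This is one simple way to realize the step above, not necessarily the exact procedure of the disclosure:

```python
import numpy as np

def voxelize_surface(surface_points: np.ndarray, voxel_size: float) -> np.ndarray:
    """surface_points: (N, 3) samples on the target object -> (M, 3) voxel centres."""
    indices = np.floor(surface_points / voxel_size).astype(np.int64)
    unique_idx = np.unique(indices, axis=0)          # one entry per occupied cube
    return (unique_idx + 0.5) * voxel_size

# Example: a coarse box-shaped "vehicle" sampled at random points.
rng = np.random.default_rng(0)
pts = rng.uniform([-2.0, -1.0, 0.0], [2.0, 1.0, 1.5], size=(5000, 3))
voxels = voxelize_surface(pts, voxel_size=0.2)
```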
  • In a case where the sensor configuration scheme includes the installation position of a lidar together with the lidar's vertical and horizontal angular resolutions, determining the simulated measurement data corresponding to the scheme based on the scheme and the position coordinates of the multiple voxels of the target object in the perception space may include the following steps:
  • Step a: Based on the installation position, vertical angular resolution, and horizontal angular resolution of the lidar, determine the rotation matrix corresponding to each laser beam of the lidar; the rotation matrix is used to represent the emission direction of that laser beam.
  • For example, the emission direction of a laser beam can be represented by a rotation matrix and calculated as: V_G = R^{-1} · [sin(θ)cos(φ), sin(θ)sin(φ), cos(θ)]^T (1), where V_G denotes the rotation matrix of the laser beam, R the rotation matrix of the lidar, θ the vertical detection angle, and φ the horizontal detection angle; the horizontal detection angle is computed from the horizontal angular resolution, and the vertical detection angle from the vertical angular resolution.
  • For example, if the lidar's horizontal detection range is -90° to 90° with a horizontal angular resolution of 10°, and its vertical detection range is 0° to -60° with a vertical angular resolution of 10°, then 18 × 6 = 108 laser beams are emitted in each scan, and the horizontal and vertical detection angles of each beam can be determined from the two angular resolutions; substituting each beam's angles into formula (1) gives that beam's rotation matrix. A small numeric sketch follows.
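  • A numeric sketch of formula (1), computing one direction vector per beam; the angle ranges reuse the example above, and everything else (identity mounting, variable names) is an assumption:

```python
import numpy as np

def beam_directions(R: np.ndarray,
                    h_range=(-90.0, 90.0), h_res=10.0,
                    v_range=(-60.0, 0.0), v_res=10.0) -> np.ndarray:
    """Return an (N, 3) array of unit direction vectors, one per laser beam."""
    phis = np.deg2rad(np.arange(h_range[0], h_range[1], h_res))    # horizontal detection angles
    thetas = np.deg2rad(np.arange(v_range[0], v_range[1], v_res))  # vertical detection angles
    R_inv = np.linalg.inv(R)                                       # R^{-1} in formula (1)
    dirs = []
    for theta in thetas:
        for phi in phis:
            v = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
            dirs.append(R_inv @ v)
    return np.asarray(dirs)

dirs = beam_directions(np.eye(3))   # 18 x 6 = 108 beams for an identity-mounted lidar
```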
  • Step b: Based on the rotation matrix corresponding to each laser beam and the position coordinates of the multiple voxels of the target object in the perception space, determine the number of point cloud points reflected by the target object.
  • Here, since the position coordinates of the lidar are known and the orientation of each laser beam can be computed from formula (1), each laser beam can be regarded as a ray whose origin is the position of the lidar and whose direction is the direction of the beam. Alternatively, since the detection distance is also one of the lidar's internal parameters, each laser beam can be regarded as a directed line segment whose origin is the position of the lidar, whose length is the detection distance, and whose direction is the direction of the beam.
  • When determining the number of point cloud points falling on the target object, the distance between the position coordinates of each voxel and each laser beam can be calculated (for example, the point-to-line distance or the point-to-ray distance); if the distance is smaller than a preset distance, it is determined that the laser beam falls on that voxel and the voxel has a reflected point cloud point. When the target object is close to the lidar, several laser beams may fall on the same voxel, so a single voxel may have multiple reflected point cloud points.
  • In practice, to speed up the computation of the simulated measurement data, the preset target voxels to be counted under a given relative positional relationship between the target object and the lidar can be determined first, and only the distances between those target voxels and the laser beams are calculated. For example, if the target object is a target vehicle that is longitudinally parallel to the lidar, the lidar can only detect the rear of the vehicle, so when determining the simulated measurement data only the distances between the voxels at the rear of the vehicle and the laser beams need to be determined. A counting sketch is given below.
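  • A counting sketch for step b: each beam is treated as a ray from the lidar position, and a beam contributes one reflected point when its distance to some voxel is below a threshold. The threshold value is an assumption; the text only says "less than a preset distance":

```python
import numpy as np

def point_to_ray_distance(points: np.ndarray, origin: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Distance from each point to the ray origin + t*direction, t >= 0."""
    d = direction / np.linalg.norm(direction)
    rel = points - origin
    t = np.clip(rel @ d, 0.0, None)          # projection onto the ray, clamped behind the origin
    closest = origin + t[:, None] * d
    return np.linalg.norm(points - closest, axis=1)

def count_reflected_points(voxel_centers: np.ndarray, lidar_origin: np.ndarray,
                           beam_dirs: np.ndarray, hit_dist: float = 0.1) -> int:
    """Number of simulated point-cloud points reflected by the target voxels."""
    hits = 0
    for d in beam_dirs:
        dist = point_to_ray_distance(voxel_centers, lidar_origin, d)
        hits += int(np.any(dist < hit_dist))  # one return per beam that lands on a voxel
    return hits
```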
  • In a case where the sensor configuration scheme includes the installation information of an image acquisition device and the internal parameter matrix of the image acquisition device, determining the simulated measurement data corresponding to the scheme based on the scheme and the position coordinates of the multiple voxels of the target object in the perception space includes the following steps:
  • Step a: Based on the installation information of the image acquisition device and the internal parameter matrix, convert the position coordinates of the multiple voxels of the target object in the perception space into the image coordinate system of the image acquisition device to obtain the target pixels corresponding to the multiple voxels. For example, the conversion can be performed as: p_C ≡ K · (R · p_G + t) (2), where p_C denotes the two-dimensional coordinates of a voxel in the image coordinate system, p_G the three-dimensional coordinates of the voxel in the perception space, K the internal parameter matrix of the image acquisition device, R the installation position of the image acquisition device (that is, its three-dimensional coordinates in the perception space), and t the rotation matrix of the image acquisition device.
  • Step b: Take the area of the region formed by the target pixels as the area occupied by the target object in the image captured by the image acquisition device.
  • In practice, to speed up the computation of the simulated measurement data, the preset key voxels under a given relative positional relationship between the target object and the image acquisition device can be determined first; these key voxels are converted into the image coordinate system using formula (2), the converted target pixels are connected to one another, and the area of the connected region is taken as the area occupied by the target object in the captured image.
  • For example, if the target object is a target vehicle that is longitudinally parallel to the image acquisition device, the image acquisition device can only photograph the rear of the vehicle. When determining the simulated measurement data, the voxels corresponding to the four vertices of the rear of the vehicle can therefore be taken as the target voxels and converted into the image coordinate system to obtain four target pixels; the region formed by connecting the four target pixels is the area occupied by the target vehicle in the image captured by the image acquisition device. A projection sketch follows.
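  • A projection sketch for formula (2), followed by a shoelace-area measurement of the polygon spanned by the projected key voxels; the shoelace formula and the example numbers are illustrative choices, not specified by the disclosure:

```python
import numpy as np

def project_voxels(voxels: np.ndarray, K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Formula (2): (M, 3) perception-space coordinates -> (M, 2) pixel coordinates."""
    cam = (R @ voxels.T).T + t          # into the camera frame (R, t: camera extrinsics)
    uvw = (K @ cam.T).T                 # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]     # divide by depth

def polygon_area(pixels: np.ndarray) -> float:
    """Shoelace area of the polygon whose ordered vertices are the projected key voxels."""
    x, y = pixels[:, 0], pixels[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Example: four rear-corner voxels of a vehicle 10 m ahead of a camera looking along +z.
K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
rear_corners = np.array([[-0.9, -0.6, 10.0], [0.9, -0.6, 10.0], [0.9, 0.6, 10.0], [-0.9, 0.6, 10.0]])
area = polygon_area(project_voxels(rear_corners, K, np.eye(3), np.zeros(3)))
```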
  • For step 103: conditional entropy can be understood as the stability, or certainty, of one variable under the condition that another random variable is known. The conditional entropy described in the embodiments of the present disclosure is the conditional entropy of a particular scenario and may also be called perception entropy.
  • The conditional entropy H(U|V) (formula (3)) represents the stability of the value of variable U given variable V: the larger H(U|V), the lower the stability of U; the smaller H(U|V), the higher the stability of U. In some embodiments, formula (3) can be expressed in the following form:
  • H(U|V) = -∫_v ∫_u p(u|v) ln(p(u|v)) du p(v) dv = E_{v~p_V} H(U|v)    (4)
  • that is, H(U|V) can be expressed as the expected value of H(U|v) when v follows the distribution p_V.
  • In the embodiments of the present disclosure, the detection result of the sensors is the target object detected by the sensors, and what affects the detection result is the selection of the sensors and their installation positions, that is, the sensor configuration scheme. When the sensor configuration scheme is determined, the simulated measurement data is also uniquely determined, so the simulated measurement data indirectly characterizes the sensor configuration scheme; it is therefore the simulated measurement data that affects the sensor detection result.
  • On this basis, the conditional entropy used in the embodiments of the present disclosure is H(S|M, q) (formula (5)), where q denotes the parameters of the sensor configuration scheme, including the sensor internal parameter information and the sensor installation information; m denotes the simulated measurement data corresponding to a voxel; M denotes the distribution of the simulated measurement data corresponding to the target object; and S denotes the probability distribution, in the perception space, of the target object measured by the sensors.
  • Since the simulated measurement data m of a voxel can be computed through formulas (1) and (2), one may set m = f(s, q), where s denotes the coordinates of the voxel in the perception space; because, while the autonomous vehicle is driving, the target object changes mainly in the x and y directions and the value in the z direction is fixed, s may include only (s_x, s_y). Formula (5) is then equivalent to formula (6), which expresses the conditional entropy as an expectation over s when s follows the distribution p_S.
  • The smaller the conditional entropy, the more certain the position of the distribution S once the simulated measurement data m is determined; S characterizes the positions where the target object may be distributed, so a smaller conditional entropy means a more certain target position and a higher probability of detecting the target object.
  • Computing formula (6) requires the prior distribution of the voxels s, which can be determined by collecting statistics over a large number of data sets, that is, the distribution of the positions at which perceived target objects appear in the perception space.
  • In practice, after the simulated measurement data is obtained, the voxels s detected by the sensor can be assumed to follow a Gaussian distribution, which can be expressed as:
  • p((S_x, S_y) | m, q) = N(μ = (s_x, s_y), Σ = σ²I)    (7)
  • Combining formula (7), formula (6) can be expressed as:
  • H(S|m, q) = 2 ln(σ) + 1 + ln(2π)    (8)
  • where σ denotes the standard deviation of the Gaussian distribution. Since σ represents the uncertainty of the target estimate, it is closely tied to detection performance: in general, the higher the average precision (AP) of target object detection, the smaller σ; when AP reaches its maximum value of 1, the uncertainty σ approaches its minimum, and when AP equals 0, the uncertainty approaches infinity. The relationship between AP and the uncertainty is given by formula (9).
  • The value of AP depends on the detection accuracy of the 3D detection algorithm. By collecting statistics on AP and the simulated measurement data, the relationship between AP and the simulated measurement data m can be set as AP ≈ a ln(m) + b (formula (10)), where a and b are preset linear transformation coefficients; different types of sensors have different linear transformation coefficients, for example a_1, b_1 for lidar and a_2, b_2 for an image acquisition device.
  • Combining formulas (8), (9), and (10) gives the conditional entropy of a single sensor in the scheme (formula (11)); in practice, to keep the data stable, the value of AP is generally kept within the range [0.001, 0.999]. A per-sensor entropy sketch follows.
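  • A per-sensor entropy sketch following formulas (7), (8), and (10). Formula (9), which maps AP to σ, is not reproduced in this text, so sigma_from_ap below is an assumed stand-in with the limiting behaviour the text describes (σ small as AP approaches 1, σ unbounded as AP approaches 0), not the patented form; the coefficient values are also illustrative:

```python
import math

def ap_from_measurement(m: float, a: float, b: float) -> float:
    """Formula (10): AP ~ a*ln(m) + b, clipped to [0.001, 0.999] as in the text."""
    return min(max(a * math.log(m) + b, 0.001), 0.999)

def sigma_from_ap(ap: float) -> float:
    """Assumed stand-in for formula (9): monotone, sigma -> 0 as AP -> 1, -> inf as AP -> 0."""
    return (1.0 - ap) / ap

def sensor_conditional_entropy(m: float, a: float, b: float) -> float:
    """Formula (8), H = 2*ln(sigma) + 1 + ln(2*pi), with sigma derived from m."""
    sigma = sigma_from_ap(ap_from_measurement(m, a, b))
    return 2.0 * math.log(sigma) + 1.0 + math.log(2.0 * math.pi)

h = sensor_conditional_entropy(m=120.0, a=0.12, b=0.2)   # illustrative lidar coefficients a1, b1
```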
  • In some embodiments of the present disclosure, a sensor configuration scheme may contain multiple sensors, that is, the scheme includes the installation information and the internal parameter information of multiple sensors. When the conditional entropy of each scheme is determined from the simulated measurement data corresponding to the scheme, different fusion methods may be used depending on the types of sensors in the scheme.
  • For example, if a sensor configuration scheme to be screened is a configuration scheme for multiple lidars, that is, the scheme includes only lidars (and no image acquisition device), the target simulated measurement data corresponding to the scheme can be determined from the simulated measurement data of each lidar in the scheme, and the conditional entropy of the scheme can then be determined from the target simulated measurement data. The target simulated measurement data can be computed with formula (12), which directly sums the per-sensor data; m_i denotes the simulated measurement data of the i-th sensor, i runs over all sensors, and m_fused denotes the target simulated measurement data.
  • Because this fusion method directly sums the simulated measurement data of the individual sensors, it cannot be applied when the scheme is a configuration scheme for multiple image acquisition devices: different image acquisition devices are deployed at different positions, so the images they capture are necessarily different and cannot simply be added. Likewise, if the scheme contains at least one image acquisition device and at least one lidar, the simulated measurement data of the image acquisition device and of the lidar are of different data types and cannot be added directly.
  • The reason the simulated measurement data of different lidars can be added directly is that lidar simulated measurement data are point cloud data, which reflect the imaging of the object; after superposition the point cloud on the target object becomes denser, so the detection result for the target object can be more accurate. After the target simulated measurement data m_fused is determined, it can be substituted into formula (11) to determine the conditional entropy of the sensor configuration scheme.
  • In some other embodiments of the present disclosure, when the sensor configuration scheme is a configuration scheme for multiple image acquisition devices, or for at least one image acquisition device and at least one lidar, or for multiple lidars, the procedure can be: first, based on the simulated measurement data of each sensor in the scheme, determine the standard deviation of the Gaussian distribution followed by the object detected by that sensor; then fuse the standard deviations of the multiple sensors under the scheme to obtain a target standard deviation; and finally determine the conditional entropy of the scheme from the target standard deviation.
  • Here, the Gaussian distribution followed by the object detected by a sensor can be understood as the distribution of the voxels s detected by the sensor after the sensor has been installed on the autonomous vehicle according to the sensor configuration scheme.
  • For example, when determining the standard deviation for a given sensor, the AP of that sensor can first be determined from its simulated measurement data (substituting into formula (10)), and the standard deviation of the Gaussian distribution followed by the object detected by that sensor can then be determined from that AP (substituting into formula (9)).
  • The standard deviations of the multiple sensors can be fused using formula (13), where σ_i denotes the standard deviation of the i-th sensor, i runs over all sensors, and σ_fused denotes the target standard deviation; the target standard deviation σ_fused can then be substituted into formula (8) to obtain the conditional entropy of the sensor configuration scheme.
  • Different types of sensors can thus use different data fusion methods, and the conditional entropy of the scheme is determined from the fused data (the target simulated measurement data or the target standard deviation); this makes it possible to determine the installation information and internal parameter information for multiple sensors of multiple types. A fusion sketch follows.
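  • A fusion sketch for multi-sensor schemes. Summing the per-lidar measurements follows the description of formula (12); formula (13), which fuses the per-sensor standard deviations, is not reproduced in this text, so fuse_sigmas below uses inverse-variance fusion as an assumed placeholder:

```python
import math
from typing import Sequence

def fuse_lidar_measurements(point_counts: Sequence[float]) -> float:
    """Formula (12): lidar point clouds can be superimposed, so the counts add."""
    return float(sum(point_counts))

def fuse_sigmas(sigmas: Sequence[float]) -> float:
    """Assumed stand-in for formula (13): fused sigma smaller than any single sensor's."""
    return sum(1.0 / (s * s) for s in sigmas) ** -0.5

def scheme_entropy_from_sigma(sigma_fused: float) -> float:
    """Formula (8) applied to the fused standard deviation."""
    return 2.0 * math.log(sigma_fused) + 1.0 + math.log(2.0 * math.pi)

# Lidar-only scheme: feed fuse_lidar_measurements(...) into the per-sensor entropy of
# formula (11); mixed camera/lidar scheme: fuse the sigmas, then apply formula (8).
```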
  • For step 104: the conditional entropy of a sensor configuration scheme is used to characterize the stability of the measurement results of the sensors in that scheme under the simulated measurement data corresponding to the scheme; the higher the conditional entropy, the lower the stability, and the lower the conditional entropy, the higher the stability.
  • When determining the target sensor configuration scheme from the plurality of schemes to be screened based on their conditional entropies, the scheme to be screened with the smallest conditional entropy can, for example, be taken as the target sensor configuration scheme.
  • It will be understood that the order in which the steps are written does not imply a strict execution order, nor does it constitute any limitation on the implementation process; the execution order of the steps should be determined by their functions and possible internal logic.
  • Based on the same concept, the embodiments of the present disclosure also provide an apparatus for determining a sensor configuration scheme corresponding to the method above. Since the principle by which the apparatus solves the problem is similar to that of the method, the implementation of the apparatus can refer to the implementation of the method.
  • The apparatus includes an acquisition module 501, a first determination module 502, a second determination module 503, and a selection module 504, where:
  • the acquisition module 501 is configured to acquire a plurality of sensor configuration schemes to be screened;
  • the first determination module 502 is configured to determine the simulated measurement data corresponding to each sensor configuration scheme to be screened, where the simulated measurement data corresponding to a scheme to be screened is the data of the target object measured by the sensors in that scheme;
  • the second determination module 503 is configured to determine, based on the simulated measurement data corresponding to each sensor configuration scheme to be screened, the conditional entropy of that scheme, where the conditional entropy of a scheme to be screened is used to characterize the stability of the measurement results of the sensors in that scheme under the simulated measurement data corresponding to the scheme;
  • the selection module 504 is configured to determine a target sensor configuration scheme from the plurality of sensor configuration schemes to be screened based on the determined conditional entropy of each scheme to be screened.
  • In a possible implementation, the sensor configuration scheme is a sensor deployment scheme in an automatic driving device and includes sensor installation information and sensor internal parameter information; the sensor installation information includes the installation positions and installation orientations of a plurality of sensors in a predefined perception space, where the perception space is the range of the area around the automatic driving device that needs to be perceived.
  • In a possible implementation, when acquiring the plurality of sensor configuration schemes to be screened, the acquisition module 501 is configured to combine the installation positions to be screened of the different sensors to obtain the plurality of sensor configuration schemes to be screened.
  • In a possible implementation, when the sensor includes an image acquisition device, the simulated measurement data includes the area occupied by the target object in the image captured by the image acquisition device; when the sensor includes a lidar, the simulated measurement data includes the number of point cloud points reflected by the target object.
  • In a possible implementation, when determining the simulated measurement data corresponding to each sensor configuration scheme to be screened, the first determination module 502 is configured to: voxelize the target object in the predefined perception space to obtain multiple voxels corresponding to the target object; and, for each sensor configuration scheme to be screened, determine the simulated measurement data corresponding to that scheme based on the scheme and the position coordinates of the multiple voxels of the target object in the perception space.
  • In a possible implementation, in a case where the sensor configuration scheme includes the installation position of a lidar and the lidar's vertical and horizontal angular resolutions, when determining the simulated measurement data corresponding to the scheme based on the scheme and the position coordinates of the multiple voxels of the target object in the perception space, the first determination module 502 is configured to determine the number of point cloud points reflected by the target object based on the rotation matrices corresponding to the laser beams and the position coordinates of the multiple voxels of the target object in the perception space.
  • In a possible implementation, in a case where the sensor configuration scheme includes the installation information of an image acquisition device and the internal parameter matrix of the image acquisition device, when determining the simulated measurement data corresponding to the scheme based on the scheme and the position coordinates of the multiple voxels of the target object in the perception space, the first determination module 502 is configured to: convert the position coordinates of the multiple voxels of the target object in the perception space into the image coordinate system of the image acquisition device to obtain the target pixels corresponding to the multiple voxels; and take the area of the region formed by the target pixels as the area occupied by the target object in the image captured by the image acquisition device.
  • In a possible implementation, the second determination module 503 is configured to determine the conditional entropy of each sensor configuration scheme as follows: in a case where a sensor configuration scheme to be screened includes only lidars, determine the target simulated measurement data corresponding to the scheme based on the simulated measurement data of each lidar in the scheme, and determine the conditional entropy of the scheme based on the target simulated measurement data.
  • In a possible implementation, the second determination module 503 is configured to determine the conditional entropy of each sensor configuration scheme as follows: for any sensor configuration scheme to be screened, determine, based on the simulated measurement data of each sensor in the scheme, the standard deviation of the Gaussian distribution followed by the object detected by that sensor, and determine the conditional entropy of the scheme based on the fused standard deviation.
  • FIG. 6 shows a schematic structural diagram of a computer device 600 provided by an embodiment of the present disclosure, which includes a processor 601, a memory 602, and a bus 603.
  • The memory 602 is configured to store execution instructions and includes an internal memory 6021 and an external memory 6022; the internal memory 6021 is configured to temporarily store operation data of the processor 601 and data exchanged with the external memory 6022, such as a hard disk, and the processor 601 exchanges data with the external memory 6022 through the internal memory 6021.
  • The processor 601 communicates with the memory 602 through the bus 603, so that the processor 601 executes the following instructions: acquire a plurality of sensor configuration schemes to be screened; determine the simulated measurement data corresponding to each sensor configuration scheme to be screened, where the simulated measurement data corresponding to a scheme to be screened is the data of the target object measured by the sensors in that scheme; determine, based on the simulated measurement data corresponding to each scheme to be screened, the conditional entropy of that scheme, where the conditional entropy of a scheme to be screened is used to characterize the stability of the measurement results of the sensors in that scheme under the simulated measurement data corresponding to the scheme; and determine a target sensor configuration scheme from the plurality of sensor configuration schemes to be screened based on the determined conditional entropy of each scheme to be screened.
  • Embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the method for determining a sensor configuration scheme described in the foregoing method embodiments are performed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
  • The embodiments of the present disclosure further provide a computer program product that carries program code; the instructions included in the program code can be configured to perform the steps of the method for determining a sensor configuration scheme described in the foregoing method embodiments.
  • The above computer program product can be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (SDK).
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • The technical solutions of the embodiments of the present disclosure, in essence, or the part that contributes to the prior art, or parts of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • The embodiments of the present application provide a method, apparatus, computer device, storage medium, and program for determining a sensor configuration scheme, including: acquiring a plurality of sensor configuration schemes to be screened; determining the simulated measurement data corresponding to each sensor configuration scheme to be screened, where the simulated measurement data corresponding to a scheme to be screened is the data of the target object measured by the sensors in that scheme; determining, based on the simulated measurement data corresponding to each scheme to be screened, the conditional entropy of that scheme, where the conditional entropy of a scheme to be screened is used to characterize the stability of the measurement results of the sensors in that scheme under the simulated measurement data corresponding to the scheme; and determining a target sensor configuration scheme from the plurality of sensor configuration schemes to be screened based on the determined conditional entropy of each scheme to be screened.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

Embodiments of the present disclosure provide a method, apparatus, computer device, storage medium, and program for determining a sensor configuration scheme, including: acquiring a plurality of sensor configuration schemes to be screened; determining the simulated measurement data corresponding to each sensor configuration scheme to be screened, where the simulated measurement data corresponding to a scheme to be screened is the data of a target object measured by the sensors in that scheme; determining, based on the simulated measurement data corresponding to each scheme to be screened, the conditional entropy of that scheme, where the conditional entropy of a scheme to be screened is used to characterize the stability of the measurement results of the sensors in that scheme under the simulated measurement data corresponding to the scheme; and determining a target sensor configuration scheme from the plurality of sensor configuration schemes to be screened based on the determined conditional entropy of each scheme to be screened.

Description

Sensor configuration scheme determination method and apparatus, computer device, storage medium, and program
Cross-Reference to Related Application
This application claims priority to Chinese patent application No. 202110395399.8, filed on April 13, 2021 by 上海商汤临港智能科技有限公司 and entitled "传感器配置方案确定方法、装置、计算机设备及存储介质" (Sensor configuration scheme determination method and apparatus, computer device and storage medium), which is incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of intelligent driving, and in particular to a method, apparatus, computer device, storage medium, and program for determining a sensor configuration scheme.
Background
During driving, an autonomous vehicle relies mainly on the various sensors installed on it (such as radar and cameras) for vehicle control, for example using the detection results of the sensors to identify obstacles.
The selection and installation positions of the sensors directly affect the accuracy of obstacle recognition. In the related art, the positions and models of the sensors installed on an autonomous vehicle are generally determined mainly by relying on human experience and rules; however, this approach requires repeated adjustment and makes sensor deployment inefficient.
Summary
Embodiments of the present disclosure provide at least a method, apparatus, computer device, storage medium, and program for determining a sensor configuration scheme.
Embodiments of the present disclosure provide a method for determining a sensor configuration scheme, including:
acquiring a plurality of sensor configuration schemes to be screened;
determining simulated measurement data corresponding to each sensor configuration scheme to be screened, where the simulated measurement data corresponding to a sensor configuration scheme to be screened is the data of a target object measured by the sensors in that scheme;
determining, based on the simulated measurement data corresponding to each sensor configuration scheme to be screened, the conditional entropy of that scheme, where the conditional entropy of a sensor configuration scheme to be screened is used to characterize the stability of the measurement results of the sensors in that scheme under the simulated measurement data corresponding to the scheme; and
determining a target sensor configuration scheme from the plurality of sensor configuration schemes to be screened based on the determined conditional entropy of each scheme to be screened. In this way, each sensor configuration scheme can be judged by a quantitative indicator, and the optimal sensor configuration scheme can then be selected by means of the conditional entropy of each scheme.
Embodiments of the present disclosure also provide an apparatus for determining a sensor configuration scheme, including:
an acquisition module, configured to acquire a plurality of sensor configuration schemes to be screened;
a first determination module, configured to determine the simulated measurement data corresponding to each sensor configuration scheme to be screened, where the simulated measurement data corresponding to a sensor configuration scheme to be screened is the data of a target object measured by the sensors in that scheme;
a second determination module, configured to determine, based on the simulated measurement data corresponding to each sensor configuration scheme to be screened, the conditional entropy of that scheme, where the conditional entropy of a sensor configuration scheme to be screened is used to characterize the stability of the measurement results of the sensors in that scheme under the simulated measurement data corresponding to the scheme; and
a selection module, configured to determine a target sensor configuration scheme from the plurality of sensor configuration schemes to be screened based on the determined conditional entropy of each scheme to be screened.
Embodiments of the present disclosure further provide a computer device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate through the bus, and when the machine-readable instructions are executed by the processor, the steps of the method for determining a sensor configuration scheme described in any of the above embodiments are performed.
Embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the method for determining a sensor configuration scheme described in any of the above embodiments are performed.
Embodiments of the present disclosure further provide a computer program including computer-readable code; when the computer-readable code is run in an electronic device, a processor of the electronic device performs the steps of the method for determining a sensor configuration scheme described in any of the above embodiments.
For descriptions of the effects of the above apparatus, computer device, computer-readable storage medium, and program, reference is made to the description of the above method for determining a sensor configuration scheme.
To make the above objects, features, and advantages of the present disclosure clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly introduced below. The drawings here are incorporated into and constitute a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings show only some embodiments of the present disclosure and should therefore not be regarded as limiting the scope; a person of ordinary skill in the art can obtain other related drawings from these drawings without creative effort.
FIG. 1 shows a schematic flowchart of a method for determining a sensor configuration scheme provided by an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of a system architecture of the method for determining a sensor configuration scheme provided by an embodiment of the present disclosure;
FIG. 3 shows a schematic flowchart of a method for acquiring a plurality of sensor configuration schemes to be screened in the method for determining a sensor configuration scheme provided by an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of voxelization of a target object provided by an embodiment of the present disclosure;
FIG. 5 shows a schematic structural diagram of an apparatus for determining a sensor configuration scheme provided by an embodiment of the present disclosure;
FIG. 6 shows a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present disclosure. The components of the embodiments of the present disclosure generally described and illustrated in the drawings here can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the drawings is not intended to limit the claimed scope of the present disclosure but merely represents selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the scope of protection of the present disclosure.
It has been found through research that, in the related art, a sensor configuration scheme is generally determined based on human experience or rules; the rules may be, for example, to minimize blind spots and to increase the perception range. However, experience and rules cannot be converted into concrete data, so the individual sensor configuration schemes cannot be judged intuitively, which makes sensor deployment inefficient.
The defects of the above solutions are results obtained by the inventor after practice and careful study; therefore, the process of discovering the above problems and the solutions proposed below in the present disclosure for the above problems should all be regarded as the inventor's contribution to the present disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings.
To facilitate understanding of the present embodiments, a method for determining a sensor configuration scheme disclosed in the embodiments of the present disclosure is first introduced in detail. The execution subject of the method is generally a computer device with a certain computing capability, which includes, for example, a terminal device, a server, or another processing device; the terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, and so on. In some possible implementations, the method for determining a sensor configuration scheme may be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to FIG. 1, which is a flowchart of a method for determining a sensor configuration scheme provided by an embodiment of the present disclosure, the method includes steps 101 to 104:
Step 101: Acquire a plurality of sensor configuration schemes to be screened.
Step 102: Determine the simulated measurement data corresponding to each sensor configuration scheme to be screened; the simulated measurement data corresponding to a sensor configuration scheme to be screened is the data of a target object measured by the sensors in that scheme.
Step 103: Determine, based on the simulated measurement data corresponding to each sensor configuration scheme to be screened, the conditional entropy of that scheme; the conditional entropy of a sensor configuration scheme to be screened is used to characterize the stability of the measurement results of the sensors in that scheme under the simulated measurement data corresponding to the scheme.
Step 104: Determine a target sensor configuration scheme from the plurality of sensor configuration schemes to be screened based on the determined conditional entropy of each scheme to be screened.
In the method for determining a sensor configuration scheme provided by the embodiments of the present disclosure, a plurality of sensor configuration schemes to be screened can be acquired; different schemes correspond to different simulated measurement data, and the conditional entropy of each scheme can then be determined from the simulated measurement data corresponding to that scheme. Conditional entropy can be understood as the stability of one variable under the condition that another random variable is known; applied to this solution, the conditional entropy of a sensor configuration scheme is the stability of the measurement results of the sensors in that scheme under the simulated measurement data corresponding to the scheme. Since the simulated measurement data indirectly characterizes the sensor configuration scheme, the conditional entropy of a scheme can also be understood as the stability of the sensor measurement results under different sensor configuration schemes. In this way, each sensor configuration scheme can be judged by a quantitative indicator, and the optimal sensor configuration scheme can then be selected by means of the conditional entropy of each scheme.
FIG. 2 shows a schematic diagram of a system architecture to which the method for determining a sensor configuration scheme of an embodiment of the present disclosure can be applied. As shown in FIG. 2, the system architecture includes a plurality of acquisition terminals 201 for sensor configuration schemes to be screened, a network 202, and a control terminal 203. To support an exemplary application, the acquisition terminals 201 and the control terminal 203 establish a communication connection through the network 202, and the acquisition terminals 201 report the plurality of sensor configuration schemes to be screened to the control terminal 203 through the network 202. In response, the control terminal 203 determines the simulated measurement data corresponding to each sensor configuration scheme to be screened; next, based on the simulated measurement data, it determines the conditional entropy of the scheme corresponding to each set of simulated measurement data; then, based on the determined conditional entropy of each scheme to be screened, it determines the target sensor configuration scheme from the plurality of schemes to be screened. Finally, the control terminal 203 uploads the target sensor configuration scheme to the network 202 and sends it to the acquisition terminals 201 through the network 202.
As an example, the acquisition terminals 201 may include an image acquisition device, and the control terminal 203 may include a vision processing device or a remote server with visual information processing capability. The network 202 may use a wired or a wireless connection. When the control terminal 203 is a vision processing device, the acquisition terminals 201 may communicate with the vision processing device through a wired connection, for example exchanging data through a bus; when the control terminal 203 is a remote server, the acquisition terminals 201 may exchange data with the remote server through a wireless network.
Alternatively, in some scenarios, the acquisition terminals 201 may be vision processing devices with a video capture module, or hosts with a camera. In that case, the method of the embodiments of the present disclosure may be executed by the acquisition terminals 201 themselves, and the system architecture need not include the network 202 and the control terminal 203.
The above steps are described in detail below.
For step 101:
The sensor configuration scheme may be a sensor deployment scheme in an automatic driving device, including sensor installation information and sensor internal parameter information.
The sensor installation information includes the installation position of each sensor in a predefined perception space (for example, its three-dimensional coordinates in the perception space) and its installation orientation (for example, a rotation matrix); the perception space is the range of the area around the automatic driving device that needs to be perceived.
In practice, only objects around the automatic driving device affect its driving, and the conditional entropy is used to reflect the stability of the sensors' measurement results under the simulated measurement data; therefore, when determining the conditional entropy, only the conditional entropy within the perception space needs to be determined.
When setting the perception space, the center of the automatic driving device can be taken as the intersection of the body diagonals, and the perception space corresponding to the automatic driving device can be set according to a preset length, width, and height. Automatic driving devices of different sizes have different perception requirements: a low car does not need to perceive objects in higher space (such as signs), whereas a taller vehicle does; a device with an automatic parking function needs to pay as much attention as possible to objects in the space behind the vehicle, whereas a device without automatic parking should pay as much attention as possible to objects in front of and on both sides of the vehicle. Therefore, to meet the perception requirements of different automatic driving devices and to reduce the amount of computation when selecting a sensor configuration scheme, perception spaces of different sizes can be set for automatic driving devices of different sizes. For example, the length, width, and height of the perception space may be proportional to the length, width, and height of the automatic driving device.
Alternatively, a perception space of the same size may be set for all automatic driving devices, with different weights set for different positions in the perception space; a weight represents how important it is for the automatic driving device to detect an object appearing at that position. For example, for an automatic driving device without an automatic parking function, the weight of the area behind the vehicle may be set lower.
When the sensor includes a lidar, the sensor internal parameter information may include the vertical angular resolution and the horizontal angular resolution; when the sensor includes an image acquisition device, the sensor internal parameter information may include the internal parameter matrix of the image acquisition device, and the image acquisition device may be, for example, a camera.
Different types of sensors correspond to different kinds of simulated measurement data, so that sensor configuration schemes can be determined for various types of sensors through their respective simulated measurement data.
In some embodiments of the present disclosure, the plurality of sensor configuration schemes to be screened can be acquired by the method shown in FIG. 3, which includes the following steps:
Step 301: Acquire the initial installation positions of a plurality of sensors.
Step 302: Offset the initial installation position of each sensor according to a set offset to obtain a plurality of installation positions to be screened.
Step 303: Combine the installation positions to be screened of the different sensors to obtain the plurality of sensor configuration schemes to be screened.
Here, the initial installation position may refer to a roughly specified installation position. For example, a lidar needs to be installed on the roof of an autonomous vehicle, but the precise optimal installation position cannot be determined in advance; any position on the roof can therefore be set as the initial installation position, a position search is performed through step 302 to obtain a plurality of installation positions to be screened, and the optimal installation position is then determined.
Here, the offset may refer to an offset step size, and different types of sensors may use different offsets. When there is only one offset, each offset may be applied from the initial installation position, in a different direction each time; each offset of the initial installation position yields one installation position to be screened.
In some embodiments of the present disclosure, when there is only one offset, the (N+1)-th offset may instead be performed on the basis of the installation position obtained by the N-th offset: the first offset is based on the initial installation position, the second offset is based on the installation position obtained by the first offset, and so on, until a preset number of offsets has been completed; N is a positive integer greater than or equal to 1.
When there are multiple offsets, they may first be sorted in descending order; the initial installation position is then offset based on the largest offset (for example, a preset number of times) to obtain multiple intermediate offset installation positions. The intermediate offset installation positions are then used as initial installation positions and offset using the second offset in the sorted result, and so on, until an offset has been performed based on every offset.
In this way, if there are m offsets and the number of offsets applied each time is the same, n, then n*m installation positions to be screened are finally obtained.
To increase the computation speed, after offsetting based on any one offset, the multiple sensor configuration schemes obtained from that round of offsets can be determined, and the installation position of the scheme with the smallest conditional entropy among them is taken as the initial installation position; the above process is then performed again until the sensor configuration schemes corresponding to the smallest offset are obtained. In this case, acquiring the plurality of sensor configuration schemes to be screened in step 101 may be acquiring the sensor configuration schemes corresponding to the smallest offset.
As for combining the installation positions to be screened of different sensors: for example, if there are a sensors and each sensor has b installation positions to be screened, there are b*a combinations, that is, b*a sensor configuration schemes to be screened.
In some embodiments of the present disclosure, the installation positions to be screened of the different sensors may also be combined first and then combined with at least one kind of preconfigured sensor internal parameter information to obtain the plurality of sensor configuration schemes to be screened.
The preconfigured sensor internal parameter information may refer to the internal parameter information of sensors of different models; for example, for lidar there may be 64-beam radars and 32-beam radars, and sensors of different models correspond to different internal parameter information.
When combining the installation positions to be screened of different sensors, the different sensors include sensors of different categories, such as radars and cameras, and also sensors of the same category with different internal parameter information, such as cameras with different intrinsics or radars with different intrinsics.
In this way, an automatic search over sensor configuration schemes can be realized, and the optimal configuration scheme can then be selected through the conditional entropies corresponding to the different schemes found by the search.
For step 102:
The simulated measurement data is the data of the target object measured by the sensors in the sensor configuration scheme. In a case where the sensor includes a lidar, the simulated measurement data includes the number of point cloud points reflected by the target object; in a case where the sensor includes an image acquisition device, the simulated measurement data includes the area occupied by the target object in the image captured by the image acquisition device.
Here, the target object may refer to a preset object that needs to be perceived in the perception space, and may include, for example, vehicles and pedestrians.
To facilitate counting of the simulated measurement data, in some embodiments of the present disclosure, when determining the simulated measurement data corresponding to each of the plurality of sensor configuration schemes to be screened, the target object in the predefined perception space may first be voxelized to obtain multiple voxels corresponding to the target object; then, for each sensor configuration scheme to be screened, the simulated measurement data corresponding to that scheme is determined based on the scheme and the position coordinates of the multiple voxels of the target object in the perception space.
Here, voxelizing the target object can be understood as dividing the surface of the target object into cubes of a preset size; an exemplary voxelization process is shown in FIG. 4.
After the target object is voxelized, the simulated measurement data can be determined directly from the position coordinates of the target object's voxels, which speeds up the computation of the simulated measurement data.
In a case where the sensor configuration scheme includes the installation position of a lidar together with the lidar's vertical and horizontal angular resolutions, that is, where the sensors in the scheme include a lidar, determining the simulated measurement data corresponding to the scheme based on the scheme and the position coordinates of the multiple voxels of the target object in the perception space may include the following steps:
Step a: Based on the installation position, vertical angular resolution, and horizontal angular resolution of the lidar, determine the rotation matrix corresponding to each laser beam of the lidar; the rotation matrix is used to represent the emission direction of that laser beam.
For example, the emission direction of a laser beam can be represented by a rotation matrix and calculated by the following formula:
V_G = R^{-1} · [sin(θ)cos(φ), sin(θ)sin(φ), cos(θ)]^T    (1)
where V_G denotes the rotation matrix of the laser beam, R the rotation matrix of the lidar, θ the vertical detection angle, and φ the horizontal detection angle; the horizontal detection angle is computed from the horizontal angular resolution, and the vertical detection angle is computed from the vertical angular resolution.
For example, if the lidar's horizontal detection angle range is -90° to 90° with a horizontal angular resolution of 10°, and its vertical detection angle range is 0° to -60° with a vertical angular resolution of 10°, it can be determined that the lidar emits 18 × 6 = 108 laser beams in each scan, and the horizontal and vertical detection angles of each beam can be determined from the horizontal and vertical angular resolutions. The horizontal and vertical detection angles of each beam are then substituted into formula (1) to obtain the rotation matrix of that beam.
Step b: Based on the rotation matrix corresponding to each laser beam and the position coordinates of the multiple voxels of the target object in the perception space, determine the number of point cloud points reflected by the target object.
Here, since the position coordinates of the lidar are known and the orientation of each laser beam can be computed through formula (1), each laser beam can be regarded as a ray whose origin is the position of the lidar and whose direction is the direction of the beam. Alternatively, since the detection distance is also one of the lidar's internal parameters, each laser beam can be regarded as a directed line segment whose origin is the position of the lidar, whose length is the detection distance, and whose direction is the direction of the beam.
When determining, based on the rotation matrices of the laser beams and the position coordinates of the multiple voxels of the target object in the perception space, the number of point cloud points falling on the target object, the distance between the position coordinates of each voxel and each laser beam can be computed (for example, the point-to-line distance or the point-to-ray distance); if the distance is smaller than a preset distance, it is determined that the laser beam falls on that voxel and the voxel has a reflected point cloud point.
When the distance between the target object and the lidar is small, several laser beams may fall on the same voxel, so a single voxel may have multiple reflected point cloud points.
In practice, to increase the computation speed of the simulated measurement data, the preset target voxels to be counted under a given relative positional relationship between the target object and the lidar can be determined according to that relative positional relationship, and the distances between the target voxels and the laser beams are then computed. For example, if the target object is a target vehicle whose relative positional relationship with the lidar is longitudinally parallel, the lidar can only detect the rear of the vehicle during detection, so when determining the simulated measurement data only the distances between the voxels at the rear of the vehicle and the laser beams need to be determined.
在所述传感器配置方案包括图像采集装置的安装信息和图像采集装置的内参矩阵的情况下,在基于所述传感器配置方案以及所述目标物体对应的多个体素在所述感知空间中的位置坐标,确定该传感器配置方案对应的仿真测量数据时,包括以下几个步骤:
步骤a、基于所述图像采集装置的安装信息和所述内参矩阵,将所述目标物体对应的多个体素在所述感知空间中的位置坐标转换到所述图像采集装置对应的图像坐标系下,得到所述多个体素对应的目标像素点。
示例性的,在将多个体素在感知空间中的位置坐标转换到图像坐标系下时,可以通过如下 公式转换:
p C≡K·(R·p G+t)    (2);
其中,p C表示体素对应的在图像坐标系下的二维坐标,p G表示体素在感知空间中的三维坐标,K表示图像采集装置的内参矩阵,R表示图像采集装置的安装位置(即在感知空间中的三维坐标),t表示图像采集装置的旋转矩阵。
Step b: take the area of the position region formed by the target pixels as the area occupied by the target object in the image captured by the image acquisition device.
In practical applications, to speed up the computation of the simulation measurement data, the key voxels under a given relative position relationship may first be determined according to the relative position relationship between the target object and the image acquisition device; the key voxels are then transformed into the image coordinate system based on the above formula (2), the transformed target pixels are connected to each other, and the area of the region formed by the connection is taken as the area occupied by the target object in the image captured by the image acquisition device.
For example, if the target object is a target vehicle and the relative position relationship between the target vehicle and the image acquisition device is longitudinally parallel, the image acquisition device can only capture the rear of the vehicle during detection. Therefore, when determining the simulation measurement data, the voxels corresponding to the four corner points of the vehicle rear may be determined as the target voxels and transformed into the image coordinate system to obtain four target pixels; the four target pixels are connected, and the area of the region they enclose is the area occupied by the target vehicle in the image captured by the image acquisition device. A sketch of this projection and area computation is given below.
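The following sketch projects the key voxels with formula (2) using a standard pinhole model and computes the enclosed image area with the shoelace formula; treating the projected coordinates as homogeneous and dividing by depth is an assumption, since the original text does not spell out the normalization.

```python
import numpy as np

def projected_area(voxels: np.ndarray,    # (M, 3) key voxel coordinates p_G, ordered along the contour
                   K: np.ndarray,          # (3, 3) intrinsic parameter matrix
                   R: np.ndarray,          # (3, 3) rotation matrix of the image acquisition device
                   t: np.ndarray) -> float:  # (3,) translation of the image acquisition device
    cam = (R @ voxels.T).T + t                  # R · p_G + t, the bracketed part of formula (2)
    hom = (K @ cam.T).T                         # apply K to obtain homogeneous pixel coordinates
    px = hom[:, :2] / hom[:, 2:3]               # perspective division to pixel coordinates
    x, y = px[:, 0], px[:, 1]
    # shoelace formula over the polygon formed by connecting the target pixels in order
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
```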
With respect to step 103:
The conditional entropy can be understood as the stability, or certainty, of one variable under the condition of another known random variable. The conditional entropy described in the embodiments of the present disclosure may be the conditional entropy in a particular scenario, and may also be called perception entropy.
The defining formula of the conditional entropy may be as follows:
$H(U|V) = -\int_{v}\int_{u} p(u,v)\,\ln\big(p(u|v)\big)\,du\,dv$      (3);
Here, the conditional entropy H(U|V) represents the stability of the value of the variable U given the variable V. The larger the value of the conditional entropy H(U|V), the lower the stability of the variable U; the smaller the value of the conditional entropy H(U|V), the higher the stability of the variable U.
In some embodiments, formula (3) can be expressed in the following form:
$H(U|V) = -\int_{v}\int_{u} p(u|v)\,\ln\big(p(u|v)\big)\,du\;p(v)\,dv = \mathbb{E}_{v \sim p_V}\big[H(U|v)\big]$      (4);
Here, the conditional entropy H(U|V) can be expressed as the expectation, over the variable v following the distribution p_V, of the entropy of the variable U given that the variable V takes the value v.
In the embodiments of the present disclosure, the sensor detection result is the target object detected by the sensor. What affects the sensor detection result is the selection of the sensor and the installation position of the sensor, that is, the sensor configuration scheme. When the sensor configuration scheme is determined, the simulation measurement data is also uniquely determined; that is, the simulation measurement data can indirectly characterize the sensor configuration scheme, so what affects the sensor detection result is the simulation measurement data.
Based on this, the conditional entropy formula in the embodiments of the present disclosure can be expressed as follows:
$H(S|M,q) = -\int_{m}\int_{s} p(s|m,q)\,\ln\big(p(s|m,q)\big)\,ds\;p(m)\,dm = \mathbb{E}_{m \sim p_M}\big[H(S|m,q)\big]$      (5);
where q denotes the parameters of the sensor configuration scheme, including the sensor intrinsic parameter information and the sensor installation information; m denotes the simulation measurement data corresponding to a voxel; M denotes the distribution of the simulation measurement data corresponding to the target object; and S denotes the probability distribution, in the perception space, of the target object measured by the sensors.
Since the simulation measurement data m corresponding to a voxel can be calculated by formulas (1) and (2), the simulation measurement data can be written as m = f(s, q), where s denotes the coordinates of the voxel in the perception space. Here, since the variation of the target object during the driving of the autonomous vehicle is mainly in the x and y directions while the value in the z direction is fixed, s may include only (s_x, s_y).
Therefore, the above formula (5) is equivalent to the following formula:
$H(S|M,q) = \mathbb{E}_{s \sim p_s}\Big[H\big(S \,\big|\, m=f(s,q),\ q\big)\Big]$      (6);
Here, the conditional entropy can be expressed as an expectation taken over s following the distribution p_s.
The smaller the value of the conditional entropy, the more certain the position of the probability distribution S is once the simulation measurement data m is determined. The probability distribution S characterizes the possible positions of the target object; that is, the smaller the value of the conditional entropy, the more certain the position of the target object, and the higher the probability of detecting the target object.
Computing formula (6) requires the prior distribution of the voxels s, which can be determined by collecting statistics over a large data set, that is, by counting the distribution of the positions at which the perceived target objects appear in the perception space.
In practical applications, after the simulation measurement data is obtained, the voxels s detected by the sensor may be assumed to follow a Gaussian distribution, which can be expressed by the following formula:
$p\big((S_x, S_y)\,\big|\,m, q\big) = \mathcal{N}\big(\mu = (s_x, s_y),\ \Sigma = \sigma^{2} I\big)$      (7);
Therefore, combining formula (7), formula (6) can be expressed by the following formula:
$H(S|m,q) = 2\ln(\sigma) + 1 + \ln(2\pi)$      (8);
where σ denotes the standard deviation of the Gaussian distribution. A short derivation of formula (8) is given below.
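As a brief check of formula (8), the following derivation applies the standard differential entropy of a multivariate Gaussian, $H = \tfrac{1}{2}\ln\big((2\pi e)^{d}\,|\Sigma|\big)$, to the two-dimensional case with $\Sigma = \sigma^{2} I$ (so $d=2$ and $|\Sigma| = \sigma^{4}$); it is included here only as supporting reasoning:

$H(S|m,q) = \tfrac{1}{2}\ln\big((2\pi e)^{2}\,\sigma^{4}\big) = \ln(2\pi e) + 2\ln(\sigma) = 2\ln(\sigma) + 1 + \ln(2\pi)$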
Since σ represents the uncertainty of the target estimation, it is closely related to the detection performance for the target object. In general, the higher the average precision (AP) of the target object detection, the smaller σ. When AP reaches its maximum value of 1, the uncertainty σ approaches its minimum; when AP equals 0, the uncertainty approaches infinity. Therefore, the relationship between AP and the uncertainty can be as shown in formula (9):
$\sigma = \dfrac{1 - AP}{AP}$      (9);
The value of AP depends on the detection accuracy of the 3D detection algorithm. By collecting statistics on and analyzing AP together with the simulation measurement data, the relationship between AP and the simulation measurement data m can be set as shown in formula (10):
$AP \approx a\,\ln(m) + b$      (10);
Here, a and b are preset linear transformation coefficients, and different types of sensors correspond to different linear transformation coefficients; for example, the linear transformation coefficients corresponding to a lidar may be a_1 and b_1, and those corresponding to an image acquisition device may be a_2 and b_2.
Combining formulas (8), (9) and (10), the conditional entropy of a sensor can be expressed by the following formula:
$H(S|m,q) = 2\ln\!\left(\dfrac{1 - AP}{AP}\right) + 1 + \ln(2\pi), \quad AP \approx a\,\ln(m) + b$      (11);
It should be noted that the above formula (11) refers to the conditional entropy corresponding to a single sensor in the sensor configuration scheme. In practical applications, to ensure the stability of the data, the value of AP is generally kept within the range [0.001, 0.999]. A sketch of the per-sensor computation is given below.
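The snippet below sketches the per-sensor conditional entropy of formulas (8) to (11), assuming the reconstructed form σ = (1 − AP)/AP for formula (9), whose exact expression is not recoverable from the published text; the coefficient values would be the preset a and b for the sensor type.

```python
import math

def sensor_conditional_entropy(m: float, a: float, b: float) -> float:
    """m: simulation measurement data (point count for a lidar, image area for a camera).
    a, b: preset linear transformation coefficients of formula (10)."""
    ap = a * math.log(m) + b                      # formula (10): AP ≈ a·ln(m) + b
    ap = min(max(ap, 0.001), 0.999)               # keep AP within [0.001, 0.999] for stability
    sigma = (1.0 - ap) / ap                       # formula (9), reconstructed form (assumption)
    return 2.0 * math.log(sigma) + 1.0 + math.log(2.0 * math.pi)   # formula (8)
```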
In some embodiments of the present disclosure, there may be multiple sensors in a sensor configuration scheme, that is, the sensor configuration scheme includes the installation information and the sensor intrinsic parameter information of multiple sensors. When determining the conditional entropy of a sensor configuration scheme based on the simulation measurement data corresponding to the sensor configuration scheme, different fusion methods may be executed depending on the types of the sensors in the sensor configuration scheme.
For example, if a sensor configuration scheme to be screened is a configuration scheme for multiple lidars, that is, the sensor configuration scheme to be screened includes only lidars (i.e., the sensor configuration scheme does not include an image acquisition device), the target simulation measurement data corresponding to the sensor configuration scheme may be determined based on the simulation measurement data of each lidar in the sensor configuration scheme, and the conditional entropy of the sensor configuration scheme is then determined based on the target simulation measurement data.
For example, when determining the target simulation measurement data corresponding to the sensor configuration scheme based on the simulation measurement data corresponding to each lidar, the calculation may be performed by the following formula:
$m_{fused} = \sum_{i} m_i$      (12);
where m_i denotes the simulation measurement data of the i-th sensor, i ranges over all the sensors, and m_fused denotes the target simulation measurement data.
Since this fusion method directly sums the simulation measurement data corresponding to each sensor, if the sensor configuration scheme is a configuration scheme for multiple image acquisition devices, the images captured by different image acquisition devices are necessarily different because their deployment positions differ, so the images captured by different image acquisition devices cannot be added directly.
If the sensor configuration scheme is a configuration scheme for at least one image acquisition device and at least one lidar, the simulation measurement data of the image acquisition device and the simulation measurement data of the lidar are of different data types and therefore also cannot be added directly.
The reason why the simulation measurement data of different lidars can be added directly is that the simulation measurement data of a lidar is point cloud data, which reflects the imaging of the object; after superposition, the point cloud on the target object becomes denser, so the detection result of the target object can be more accurate.
After the target simulation measurement data m_fused is determined based on formula (12), it can be substituted into formula (11) to determine the conditional entropy of the sensor configuration scheme.
In some other embodiments of the present disclosure, in the case where the sensor configuration scheme is a configuration scheme for multiple image acquisition devices, or for at least one image acquisition device and at least one lidar, or for multiple lidars, the standard deviation of the Gaussian distribution followed by the objects detected by any one sensor may first be determined based on the simulation measurement data of that sensor in the sensor configuration scheme; the standard deviations corresponding to the multiple sensors under the sensor configuration scheme are then fused to obtain a target standard deviation; and the conditional entropy of the sensor configuration scheme is determined based on the target standard deviation.
Here, the Gaussian distribution followed by the objects detected by a sensor can be understood as the distribution of the voxels s detected by the sensor after the sensor is installed on the autonomous vehicle according to the sensor configuration scheme.
For example, when determining, based on the simulation measurement data of any one sensor corresponding to the sensor configuration scheme, the standard deviation of the Gaussian distribution followed by the objects detected by that sensor, the AP corresponding to the sensor may first be determined based on its simulation measurement data (by substitution into formula (10)), and the standard deviation of the Gaussian distribution followed by the objects detected by the sensor is then determined based on the AP corresponding to the sensor (by substitution into formula (9)).
For example, when fusing the standard deviations corresponding to the multiple sensors to obtain the target standard deviation, the fusion may be performed by the following formula:
$\dfrac{1}{\sigma_{fused}^{2}} = \sum_{i} \dfrac{1}{\sigma_i^{2}}$      (13);
where σ_i denotes the standard deviation of the i-th sensor, i ranges over all the sensors, and σ_fused denotes the target standard deviation.
When determining the conditional entropy of the sensor configuration scheme based on the target standard deviation, the target standard deviation σ_fused can be substituted into formula (8) to obtain the conditional entropy of the sensor configuration scheme.
Different data fusion methods can be adopted for different types of sensors, and the conditional entropy of the sensor configuration scheme is then determined based on the fused data (here, the target simulation measurement data or the target standard deviation); in this way, the installation information and the intrinsic parameter information of multiple sensors of multiple types can be determined. A sketch of the scheme-level entropy computation with both fusion routes is given below.
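The following sketch combines the two fusion routes described above, assuming the reconstructed forms of formulas (9) and (13); the inverse-variance fusion of the standard deviations is an assumption consistent with fusing independent Gaussian estimates, not a form stated verbatim in the published text, and the helper names are illustrative.

```python
import math

def _entropy_from_ap(ap: float) -> float:
    ap = min(max(ap, 0.001), 0.999)                        # keep AP within [0.001, 0.999]
    sigma = (1.0 - ap) / ap                                # formula (9), assumed form
    return 2.0 * math.log(sigma) + 1.0 + math.log(2.0 * math.pi)   # formula (8)

def scheme_conditional_entropy(measurements, coeffs, lidar_only: bool) -> float:
    """measurements: per-sensor simulation measurement data m_i.
    coeffs: per-sensor (a, b) linear transformation coefficients of formula (10)."""
    if lidar_only:
        m_fused = sum(measurements)                        # formula (12): sum the lidar point counts
        a, b = coeffs[0]                                   # lidars of one type share coefficients
        return _entropy_from_ap(a * math.log(m_fused) + b) # substitute m_fused into formula (11)
    # mixed or camera-only schemes: fuse the per-sensor standard deviations instead
    sigmas = []
    for m, (a, b) in zip(measurements, coeffs):
        ap = min(max(a * math.log(m) + b, 0.001), 0.999)   # formula (10) with clipping
        sigmas.append((1.0 - ap) / ap)                     # formula (9), assumed form
    sigma_fused = sum(1.0 / s ** 2 for s in sigmas) ** -0.5   # formula (13), assumed inverse-variance fusion
    return 2.0 * math.log(sigma_fused) + 1.0 + math.log(2.0 * math.pi)   # formula (8)
```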
With respect to step 104:
The conditional entropy of a sensor configuration scheme is used to characterize the stability of the measurement results of the sensors in the sensor configuration scheme under the simulation measurement data corresponding to the sensor configuration scheme: the higher the conditional entropy, the worse the stability, and the lower the conditional entropy, the higher the stability. When determining the target sensor configuration scheme from the multiple sensor configuration schemes to be screened based on the determined conditional entropies of the sensor configuration schemes to be screened, for example, the sensor configuration scheme to be screened with the smallest conditional entropy may be taken as the target sensor configuration scheme, as in the sketch below.
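A minimal selection sketch: given a mapping from each scheme to be screened to its conditional entropy, the target scheme is simply the one with the smallest entropy; the dictionary representation of the schemes is an assumption.

```python
def select_target_scheme(entropy_by_scheme: dict):
    """entropy_by_scheme: {scheme identifier: conditional entropy of that scheme}."""
    # the scheme with the smallest conditional entropy is the most stable one
    return min(entropy_by_scheme, key=entropy_by_scheme.get)
```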
Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, the embodiments of the present disclosure further provide a sensor configuration scheme determination apparatus corresponding to the sensor configuration scheme determination method. Since the principle by which the apparatus in the embodiments of the present disclosure solves the problem is similar to that of the above sensor configuration scheme determination method of the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method.
Referring to Figure 5, which is a schematic architecture diagram of a sensor configuration scheme determination apparatus provided by an embodiment of the present disclosure, the apparatus includes: an acquisition module 501, a first determination module 502, a second determination module 503 and a selection module 504; wherein,
the acquisition module 501 is configured to acquire multiple sensor configuration schemes to be screened;
the first determination module 502 is configured to determine the simulation measurement data corresponding to each sensor configuration scheme to be screened, where the simulation measurement data corresponding to a sensor configuration scheme to be screened is the data of the target object measured by the sensors in the sensor configuration scheme;
the second determination module 503 is configured to determine the conditional entropy of each sensor configuration scheme to be screened based on the simulation measurement data corresponding to the sensor configuration scheme, where the conditional entropy of a sensor configuration scheme to be screened is configured to characterize the stability of the measurement results of the sensors in the sensor configuration scheme under the simulation measurement data corresponding to the sensor configuration scheme;
the selection module 504 is configured to determine a target sensor configuration scheme from the multiple sensor configuration schemes to be screened based on the determined conditional entropies of the sensor configuration schemes to be screened.
In some embodiments of the present disclosure, the sensor configuration scheme is a deployment scheme of the sensors in an autonomous driving apparatus;
the sensor configuration scheme includes sensor installation information and sensor intrinsic parameter information;
the sensor installation information includes the installation positions and installation orientations of multiple sensors in a pre-defined perception space, where the perception space is the area around the autonomous driving apparatus that needs to be perceived.
In some embodiments of the present disclosure, when acquiring the multiple sensor configuration schemes to be screened, the acquisition module 501 is configured to:
acquire the initial installation positions of multiple sensors;
offset the initial installation position of each sensor according to a set offset to obtain multiple installation positions to be screened;
combine the multiple installation positions to be screened of different sensors to obtain the multiple sensor configuration schemes to be screened.
In some embodiments of the present disclosure, in the case where the sensors include an image acquisition device, the simulation measurement data includes the area occupied by the target object in the image captured by the image acquisition device.
In some embodiments of the present disclosure, in the case where the sensors include a lidar, the simulation measurement data includes the number of point cloud points reflected by the target object.
In some embodiments of the present disclosure, when determining the simulation measurement data corresponding to each sensor configuration scheme to be screened, the first determination module 502 is configured to:
voxelize the target object in the pre-defined perception space to obtain multiple voxels corresponding to the target object;
for each sensor configuration scheme to be screened, determine the simulation measurement data corresponding to the sensor configuration scheme based on the sensor configuration scheme and the position coordinates of the multiple voxels corresponding to the target object in the perception space.
In some embodiments of the present disclosure, in the case where the sensor configuration scheme includes the installation position of a lidar as well as the vertical angular resolution and the horizontal angular resolution of the lidar, when determining the simulation measurement data corresponding to the sensor configuration scheme based on the sensor configuration scheme and the position coordinates of the multiple voxels corresponding to the target object in the perception space, the first determination module 502 is configured to:
determine, based on the installation position, the vertical angular resolution and the horizontal angular resolution of the lidar, the rotation matrix corresponding to any laser beam of the lidar, where the rotation matrix is configured to represent the emission direction of the laser beam;
determine, based on the rotation matrix corresponding to the laser beam and the position coordinates of the multiple voxels corresponding to the target object in the perception space, the number of point cloud points reflected by the target object.
In some embodiments of the present disclosure, in the case where the sensor configuration scheme includes the installation information of an image acquisition device and the intrinsic parameter matrix of the image acquisition device, when determining the simulation measurement data corresponding to the sensor configuration scheme based on the sensor configuration scheme and the position coordinates of the multiple voxels corresponding to the target object in the perception space, the first determination module 502 is configured to:
transform, based on the installation information of the image acquisition device and the intrinsic parameter matrix, the position coordinates of the multiple voxels corresponding to the target object in the perception space into the image coordinate system corresponding to the image acquisition device, to obtain the target pixels corresponding to the multiple voxels;
take the area of the position region formed by the target pixels as the area occupied by the target object in the image captured by the image acquisition device.
In some embodiments of the present disclosure, the second determination module 503 is configured to determine the conditional entropy of each sensor configuration scheme using the following method:
in the case where a sensor configuration scheme to be screened includes only lidars, determine the target simulation measurement data corresponding to the sensor configuration scheme based on the simulation measurement data of each lidar in the sensor configuration scheme;
determine the conditional entropy of the sensor configuration scheme based on the target simulation measurement data.
In some embodiments of the present disclosure, the second determination module 503 is configured to determine the conditional entropy of each sensor configuration scheme using the following method:
for any sensor configuration scheme to be screened, determine, based on the simulation measurement data of any one sensor in the sensor configuration scheme, the standard deviation of the Gaussian distribution followed by the objects detected by that sensor;
fuse the standard deviations corresponding to the multiple sensors in the sensor configuration scheme to obtain a target standard deviation;
determine the conditional entropy of the sensor configuration scheme based on the target standard deviation.
For descriptions of the processing flow of each module in the apparatus and the interaction flow between the modules, reference may be made to the relevant descriptions in the above method embodiments, which are not detailed here.
Based on the same technical concept, an embodiment of the present disclosure further provides a computer device. Referring to Figure 6, which is a schematic structural diagram of a computer device 600 provided by an embodiment of the present disclosure, the computer device includes a processor 601, a memory 602 and a bus 603. The memory 602 is configured to store execution instructions and includes an internal memory 6021 and an external memory 6022; the internal memory 6021, also called internal storage, is configured to temporarily store the operational data in the processor 601 and the data exchanged with the external memory 6022 such as a hard disk. The processor 601 exchanges data with the external memory 6022 through the internal memory 6021. When the computer device 600 runs, the processor 601 communicates with the memory 602 through the bus 603, so that the processor 601 executes the following instructions:
acquiring multiple sensor configuration schemes to be screened;
determining the simulation measurement data corresponding to each sensor configuration scheme to be screened, where the simulation measurement data corresponding to a sensor configuration scheme to be screened is the data of the target object measured by the sensors in the sensor configuration scheme;
determining the conditional entropy of each sensor configuration scheme to be screened based on the simulation measurement data corresponding to the sensor configuration scheme, where the conditional entropy of a sensor configuration scheme to be screened is used to characterize the stability of the measurement results of the sensors in the sensor configuration scheme under the simulation measurement data corresponding to the sensor configuration scheme;
determining a target sensor configuration scheme from the multiple sensor configuration schemes to be screened based on the determined conditional entropies of the sensor configuration schemes to be screened.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the sensor configuration scheme determination method described in the above method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program product carrying program code; the instructions included in the program code may be configured to execute the steps of the sensor configuration scheme determination method described in the above method embodiments, to which reference may be made.
The above computer program product may be implemented by hardware, software or a combination thereof. In an optional embodiment, the computer program product may be embodied as a computer storage medium; in another optional embodiment, the computer program product may be embodied as a software product, such as a Software Development Kit (SDK).
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the working processes of the system and apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments. In the several embodiments provided by the present disclosure, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is merely a division by logical function, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present disclosure, in essence, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person familiar with the technical field can still, within the technical scope disclosed by the present disclosure, modify the technical solutions recorded in the foregoing embodiments, easily conceive of changes, or make equivalent replacements of some of the technical features; these modifications, changes or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Industrial Applicability
The embodiments of the present application provide a sensor configuration scheme determination method and apparatus, a computer device and a storage medium, including: acquiring multiple sensor configuration schemes to be screened; determining the simulation measurement data corresponding to each sensor configuration scheme to be screened, where the simulation measurement data corresponding to a sensor configuration scheme to be screened is the data of the target object measured by the sensors in the sensor configuration scheme; determining the conditional entropy of each sensor configuration scheme to be screened based on the simulation measurement data corresponding to the sensor configuration scheme, where the conditional entropy of a sensor configuration scheme to be screened is used to characterize the stability of the measurement results of the sensors in the sensor configuration scheme under the simulation measurement data corresponding to the sensor configuration scheme; and determining a target sensor configuration scheme from the multiple sensor configuration schemes to be screened based on the determined conditional entropies of the sensor configuration schemes to be screened.

Claims (14)

  1. A sensor configuration scheme determination method, the method being executed by an electronic device and comprising:
    acquiring multiple sensor configuration schemes to be screened;
    determining simulation measurement data corresponding to each sensor configuration scheme to be screened, wherein the simulation measurement data corresponding to a sensor configuration scheme to be screened is data of a target object measured by sensors in the sensor configuration scheme;
    determining a conditional entropy of each sensor configuration scheme to be screened based on the simulation measurement data corresponding to the sensor configuration scheme, wherein the conditional entropy of a sensor configuration scheme to be screened is used to characterize stability of measurement results of the sensors in the sensor configuration scheme under the simulation measurement data corresponding to the sensor configuration scheme;
    determining a target sensor configuration scheme from the multiple sensor configuration schemes to be screened based on the determined conditional entropies of the sensor configuration schemes to be screened.
  2. The method according to claim 1, wherein the sensor configuration scheme is a deployment scheme of sensors in an autonomous driving apparatus;
    the sensor configuration scheme comprises sensor installation information and sensor intrinsic parameter information;
    the sensor installation information comprises installation positions and installation orientations of multiple sensors in a pre-defined perception space, wherein the perception space is an area around the autonomous driving apparatus that needs to be perceived.
  3. The method according to claim 2, wherein the acquiring multiple sensor configuration schemes to be screened comprises:
    acquiring initial installation positions of multiple sensors;
    offsetting the initial installation position of each sensor according to a set offset to obtain multiple installation positions to be screened;
    combining the multiple installation positions to be screened of different sensors to obtain the multiple sensor configuration schemes to be screened.
  4. The method according to any one of claims 1 to 3, wherein, in a case where the sensors comprise an image acquisition device, the simulation measurement data comprises an area occupied by the target object in an image captured by the image acquisition device.
  5. The method according to any one of claims 1 to 3, wherein, in a case where the sensors comprise a lidar, the simulation measurement data comprises a number of point cloud points reflected by the target object.
  6. The method according to any one of claims 1 to 5, wherein the determining simulation measurement data corresponding to each sensor configuration scheme to be screened comprises:
    voxelizing the target object in the pre-defined perception space to obtain multiple voxels corresponding to the target object;
    for each sensor configuration scheme to be screened, determining the simulation measurement data corresponding to the sensor configuration scheme based on the sensor configuration scheme and position coordinates of the multiple voxels corresponding to the target object in the perception space.
  7. The method according to claim 6, wherein, in a case where the sensor configuration scheme comprises an installation position of a lidar as well as a vertical angular resolution and a horizontal angular resolution of the lidar, the determining the simulation measurement data corresponding to the sensor configuration scheme based on the sensor configuration scheme and the position coordinates of the multiple voxels corresponding to the target object in the perception space comprises:
    determining, based on the installation position, the vertical angular resolution and the horizontal angular resolution of the lidar, a rotation matrix corresponding to any laser beam of the lidar, wherein the rotation matrix is used to represent an emission direction of the laser beam;
    determining, based on the rotation matrix corresponding to the laser beam and the position coordinates of the multiple voxels corresponding to the target object in the perception space, the number of point cloud points reflected by the target object.
  8. The method according to claim 6, wherein, in a case where the sensor configuration scheme comprises installation information of an image acquisition device and an intrinsic parameter matrix of the image acquisition device, the determining the simulation measurement data corresponding to the sensor configuration scheme based on the sensor configuration scheme and the position coordinates of the multiple voxels corresponding to the target object in the perception space comprises:
    transforming, based on the installation information of the image acquisition device and the intrinsic parameter matrix, the position coordinates of the multiple voxels corresponding to the target object in the perception space into an image coordinate system corresponding to the image acquisition device, to obtain target pixels corresponding to the multiple voxels;
    taking an area of a position region formed by the target pixels as the area occupied by the target object in the image captured by the image acquisition device.
  9. The method according to any one of claims 1 to 8, wherein the conditional entropy of each sensor configuration scheme is determined using the following method:
    in a case where a sensor configuration scheme to be screened includes only lidars, determining target simulation measurement data corresponding to the sensor configuration scheme based on the simulation measurement data of each lidar in the sensor configuration scheme;
    determining the conditional entropy of the sensor configuration scheme based on the target simulation measurement data.
  10. The method according to any one of claims 1 to 8, wherein the conditional entropy of each sensor configuration scheme is determined using the following method:
    for any sensor configuration scheme to be screened, determining, based on the simulation measurement data of any one sensor in the sensor configuration scheme, a standard deviation of a Gaussian distribution followed by objects detected by the sensor;
    fusing the standard deviations corresponding to multiple sensors in the sensor configuration scheme to obtain a target standard deviation;
    determining the conditional entropy of the sensor configuration scheme based on the target standard deviation.
  11. A sensor configuration scheme determination apparatus, comprising:
    an acquisition module configured to acquire multiple sensor configuration schemes to be screened;
    a first determination module configured to determine simulation measurement data corresponding to each sensor configuration scheme to be screened, wherein the simulation measurement data corresponding to a sensor configuration scheme to be screened is data of a target object measured by sensors in the sensor configuration scheme;
    a second determination module configured to determine a conditional entropy of each sensor configuration scheme to be screened based on the simulation measurement data corresponding to the sensor configuration scheme, wherein the conditional entropy of a sensor configuration scheme to be screened is used to characterize stability of measurement results of the sensors in the sensor configuration scheme under the simulation measurement data corresponding to the sensor configuration scheme;
    a selection module configured to determine a target sensor configuration scheme from the multiple sensor configuration schemes to be screened based on the determined conditional entropies of the sensor configuration schemes to be screened.
  12. A computer device, comprising a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the sensor configuration scheme determination method according to any one of claims 1 to 10 are executed.
  13. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the steps of the sensor configuration scheme determination method according to any one of claims 1 to 10 are executed.
  14. A computer program, wherein the computer program comprises computer-readable code, and in a case where the computer-readable code runs in an electronic device, a processor of the electronic device executes steps for implementing the sensor configuration scheme determination method according to any one of claims 1 to 10.
PCT/CN2022/071455 2021-04-13 2022-01-11 传感器配置方案确定方法、装置、计算机设备、存储介质及程序 WO2022217988A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110395399.8 2021-04-13
CN202110395399.8A CN113111513B (zh) 2021-04-13 2021-04-13 传感器配置方案确定方法、装置、计算机设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022217988A1 true WO2022217988A1 (zh) 2022-10-20

Family

ID=76716288

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/071455 WO2022217988A1 (zh) 2021-04-13 2022-01-11 传感器配置方案确定方法、装置、计算机设备、存储介质及程序

Country Status (2)

Country Link
CN (1) CN113111513B (zh)
WO (1) WO2022217988A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111513B (zh) * 2021-04-13 2024-04-12 上海商汤临港智能科技有限公司 传感器配置方案确定方法、装置、计算机设备及存储介质
CN114485398B (zh) * 2022-01-17 2023-03-28 武汉精立电子技术有限公司 光学检测方案生成方法、存储介质、电子设备及***
CN116167252B (zh) * 2023-04-25 2024-01-30 小米汽车科技有限公司 雷达配置信息的确定方法、装置、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650150A (zh) * 2016-12-30 2017-05-10 浙江工业职业技术学院 一种基于柔度应变熵的传感器布置方法
CN108304605A (zh) * 2017-11-09 2018-07-20 清华大学 汽车驾驶辅助***传感器优选配置方法
US10451416B1 (en) * 2016-06-20 2019-10-22 Bentley Systems, Incorporated Optimizing sensor placement for structural health monitoring based on information entropy or total modal energy
CN111324945A (zh) * 2020-01-20 2020-06-23 北京百度网讯科技有限公司 传感器方案确定方法、装置、设备及存储介质
CN113111513A (zh) * 2021-04-13 2021-07-13 上海商汤临港智能科技有限公司 传感器配置方案确定方法、装置、计算机设备及存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190004160A1 (en) * 2017-06-30 2019-01-03 Delphi Technologies, Inc. Lidar sensor alignment system
CN112464421B (zh) * 2020-11-23 2022-07-05 长江水利委员会长江科学院 基于联合信息熵的供水管网漏损识别传感器优化布置方法
CN112596050B (zh) * 2020-12-09 2024-04-12 上海商汤临港智能科技有限公司 一种车辆及车载传感器***、以及行车数据采集方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10451416B1 (en) * 2016-06-20 2019-10-22 Bentley Systems, Incorporated Optimizing sensor placement for structural health monitoring based on information entropy or total modal energy
CN106650150A (zh) * 2016-12-30 2017-05-10 浙江工业职业技术学院 一种基于柔度应变熵的传感器布置方法
CN108304605A (zh) * 2017-11-09 2018-07-20 清华大学 汽车驾驶辅助***传感器优选配置方法
CN111324945A (zh) * 2020-01-20 2020-06-23 北京百度网讯科技有限公司 传感器方案确定方法、装置、设备及存储介质
CN113111513A (zh) * 2021-04-13 2021-07-13 上海商汤临港智能科技有限公司 传感器配置方案确定方法、装置、计算机设备及存储介质

Also Published As

Publication number Publication date
CN113111513B (zh) 2024-04-12
CN113111513A (zh) 2021-07-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22787220

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE