WO2021092771A1 - Target detection method and apparatus, device, and storage medium - Google Patents

Target detection method and apparatus, device, and storage medium

Info

Publication number
WO2021092771A1
WO2021092771A1 · PCT/CN2019/117639 · CN2019117639W
Authority
WO
WIPO (PCT)
Prior art keywords
candidate
plane
sample
data
point cloud
Prior art date
Application number
PCT/CN2019/117639
Other languages
English (en)
French (fr)
Inventor
Zhang Hongwei (张洪伟)
Original Assignee
Oppo Guangdong Mobile Communications Co., Ltd. (Oppo广东移动通信有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo Guangdong Mobile Communications Co., Ltd.
Priority to PCT/CN2019/117639 priority Critical patent/WO2021092771A1
Priority to CN201980100517.9A priority patent/CN114424240A
Publication of WO2021092771A1 publication Critical patent/WO2021092771A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • This application relates to the technical field of plane detection and, in particular but without limitation, to a target detection method, apparatus, device, and storage medium.
  • An exemplary embodiment of the present application provides a target detection method, apparatus, device, and storage medium, in order to solve at least one problem in the related art.
  • An exemplary embodiment of the present application provides a target detection method, including: obtaining point cloud data of the object to be detected; generating multiple candidate planes based on the coordinate values of the point cloud data; determining, from the multiple candidate planes, a target plane that satisfies a preset condition; and detecting the object to be detected according to the point cloud data covered by the target plane.
  • the generating multiple candidate planes based on the coordinate values of the point cloud data includes: performing noise reduction processing on the point cloud data to obtain noise reduction data; sampling the noise reduction data according to preset sampling conditions to obtain sample data; and generating the multiple candidate planes according to the coordinate values of the sample data.
  • the sampling the noise reduction data according to preset sampling conditions to obtain sample data includes: dividing the noise reduction data into multiple sets of sample data, wherein the number of data points in each set of sample data is greater than or equal to a preset number.
  • the generating the multiple candidate planes according to the coordinate values of the sample data includes: generating corresponding candidate planes according to the coordinate values of each set of sample data in the multiple sets of sample data, to obtain the multiple candidate planes.
  • the generating the multiple candidate planes according to the coordinate values of the sample data includes: generating, according to the coordinate values of each set of sample data in the multiple sets of sample data, candidate planes until a preset number threshold is met.
  • the generating corresponding candidate planes according to the coordinate values of each set of sample data in the multiple sets of sample data to obtain the multiple candidate planes includes: generating a corresponding sample plane according to the coordinate values of the i-th group of sample data, where i is an integer greater than or equal to 1; determining the first coordinate value of the intersection between the optical axis of the collection device and the sample plane; and, if the first coordinate value meets a preset feasible region condition, determining that the sample plane is a candidate plane.
  • the generating corresponding candidate planes according to the coordinate values of each set of sample data in the multiple sets of sample data to obtain the multiple candidate planes includes: enclosing the point cloud data covered by multiple sample planes to obtain multiple bounding boxes meeting a specific shape; determining the second coordinate value of the intersection of the optical axis of the collection device and the central axis of each bounding box to obtain a second coordinate value set; selecting, from the second coordinate value set, a candidate bounding box corresponding to a second coordinate value that satisfies the preset feasible region condition; and, if the attribute information of the candidate bounding box meets a corresponding preset condition, determining that the sample plane corresponding to the candidate bounding box is the candidate plane.
  • the determining that the sample plane corresponding to the candidate bounding box is a candidate plane includes: determining that the sample plane corresponding to the candidate bounding box is a candidate plane if the size of the candidate bounding box meets the size threshold, and/or the center point coordinates of the candidate bounding box are within a preset measurement range, and/or the second coordinate value is within the preset measurement range.
  • the determining a target plane that satisfies a preset condition from the multiple candidate planes includes: determining the number of points included within a preset range of each candidate plane to obtain multiple point counts, and determining the candidate plane corresponding to the largest point count as the target plane.
  • An exemplary embodiment of the present application provides a target detection device.
  • the device includes: a first acquisition module, a first generation module, a first determination module, and a first detection module, wherein:
  • the first obtaining module is used to obtain point cloud data of the object to be detected
  • the first generating module is configured to generate multiple candidate planes based on the coordinate values of the point cloud data
  • the first determining module is configured to determine a target plane that meets a preset condition from the multiple candidate planes
  • the first detection module is configured to detect the object to be detected according to the point cloud data covered by the target plane.
  • the first generating module includes:
  • the first noise reduction sub-module is configured to perform noise reduction processing on the point cloud data to obtain noise reduction data
  • the first sampling sub-module is used to sample the noise reduction data according to preset sampling conditions to obtain sample data;
  • the first generation sub-module is configured to generate the multiple candidate planes according to the coordinate values of the sample data.
  • the first sampling sub-module includes:
  • the first dividing unit is configured to divide the noise reduction data into multiple groups of sample data; wherein the number of data in each group of sample data is greater than or equal to a preset number.
  • the first generating module includes:
  • the first generating sub-module is configured to generate corresponding candidate planes according to the coordinate values of each set of sample data in the multiple sets of sample data, to obtain the multiple candidate planes.
  • the first generating module includes:
  • the second generation sub-module is configured to generate candidate planes that meet the number threshold according to the coordinate values of each set of sample data in the multiple sets of sample data.
  • the first generating submodule includes:
  • the first generating unit is configured to generate corresponding sample planes according to the coordinate values of the i-th group of sample data; where i is an integer greater than or equal to 1;
  • a first determining unit configured to determine the first coordinate value of the intersection between the optical axis of the collection device and the sample plane
  • the second determining unit is configured to determine that the sample plane is a candidate plane if the first coordinate value meets a preset feasible region condition.
  • the first generating submodule includes:
  • the first enclosing unit is used to enclose the point cloud data covered by multiple sample planes to obtain multiple bounding boxes meeting a specific shape
  • the third determining unit is configured to determine the second coordinate value of the intersection of the optical axis of the collection device and the central axis of each bounding box to obtain a second coordinate value set;
  • a first selection unit configured to select, from the second coordinate value set, a candidate bounding box corresponding to a second coordinate value that satisfies the preset feasible region condition
  • the fourth determining unit is configured to determine that the sample plane corresponding to the candidate bounding box is a candidate plane if the attribute information of the candidate bounding box meets a corresponding preset condition.
  • the fourth determining unit includes:
  • the first determining subunit is used to determine that the sample plane corresponding to the candidate bounding box is a candidate plane if the size of the candidate bounding box meets the size threshold, and/or the center point coordinates of the candidate bounding box are within a preset measurement range, and/or the second coordinate value is within the preset measurement range.
  • the first determining module includes:
  • the first sub-determining module is used to determine the number of points included in the preset range of each candidate plane, and obtain multiple point values
  • the second sub-determination module is used to determine the candidate plane corresponding to the largest number of points as the target plane.
  • An exemplary embodiment of the present application provides a target detection device, including a memory and a processor, where the memory stores a computer program that can run on the processor, and when the processor executes the program, the steps of the above target detection method are implemented.
  • An exemplary embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned target detection method are realized.
  • An exemplary embodiment of the present application provides a target detection method, apparatus, device, and storage medium.
  • In the target detection method, multiple candidate planes are generated by using the point cloud data of the object to be detected, and the target plane is then selected from the multiple candidate planes. In this way, the object to be detected is detected based on the point cloud data covered by the target plane, which greatly reduces invalid point cloud data and thereby improves the accuracy of detecting the object to be detected.
  • FIG. 1 is a schematic diagram of the implementation process of a target detection method according to an exemplary embodiment of this application;
  • FIG. 2 is a schematic diagram of another implementation process of the target detection method according to an exemplary embodiment of this application.
  • FIG. 3 is a schematic diagram of another implementation process of the target detection method according to an exemplary embodiment of this application.
  • FIG. 4 is an application scenario diagram of a target detection method according to an exemplary embodiment of this application.
  • FIG. 5 is a schematic diagram of another implementation process of the target detection method according to an exemplary embodiment of this application.
  • FIG. 6 is a diagram of another application scenario of the target detection method according to an exemplary embodiment of this application.
  • FIG. 7 is a schematic diagram of the composition structure of a target detection apparatus according to an exemplary embodiment of this application;
  • FIG. 8 is a schematic diagram of a device hardware entity according to an exemplary embodiment of the application.
  • An exemplary embodiment of the present application proposes a target detection method, which is applied to a mobile device with a front camera or a rear camera function, and the mobile device can be implemented in various forms.
  • the mobile device described in an exemplary embodiment of the present application may include a mobile phone, a tablet computer, a palmtop computer, a personal digital assistant (Personal Digital Assistant, PDA), and so on.
  • the functions implemented by the method can be implemented by the processor in the mobile device calling program code.
  • the program code can be stored in a computer storage medium. It can be seen that the mobile device at least includes a processor and a storage medium.
  • FIG. 1 is a schematic diagram of the implementation process of a target detection method according to an exemplary embodiment of this application. The steps shown in FIG. 1 are described below:
  • Step S101 Obtain the point cloud data of the object to be detected.
  • the object to be detected may be any three-dimensional (3D) object, such as a table, a house, or an animal.
  • the point cloud data can be understood as a large number of points collected from the object to be detected.
  • Step S102 Generate multiple candidate planes based on the coordinate values of the point cloud data.
  • the coordinate value of the point cloud data in the coordinate system corresponding to the collecting device that collects the point cloud data is determined, and then, based on the coordinate value, a plurality of candidate planes are generated.
  • For example, if the acquisition device is a camera, the coordinate value of the point cloud data in the camera coordinate system is determined, and a candidate plane is generated based on the coordinate values of at least three points each time, thereby obtaining multiple candidate planes.
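As a minimal sketch of this step (the function name is illustrative, not from the application), a plane through three sampled points can be obtained from the cross product of two edge vectors:

```python
def plane_from_points(p1, p2, p3):
    """Fit a plane a*x + b*y + c*z + d = 0 through three 3D points.

    Returns (a, b, c, d), or None if the points are collinear.
    """
    # Edge vectors spanning the plane.
    u = tuple(p2[i] - p1[i] for i in range(3))
    v = tuple(p3[i] - p1[i] for i in range(3))
    # Normal vector = u x v.
    a = u[1] * v[2] - u[2] * v[1]
    b = u[2] * v[0] - u[0] * v[2]
    c = u[0] * v[1] - u[1] * v[0]
    if a == b == c == 0:
        return None  # degenerate sample: the three points are collinear
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return (a, b, c, d)

# Three points lying in the plane z = 2 (camera coordinates).
print(plane_from_points((0, 0, 2), (1, 0, 2), (0, 1, 2)))  # (0, 0, 1, -2)
```

Repeating this fit over different point triples yields the multiple candidate planes.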
  • Step S103 Determine a target plane that meets a preset condition from the multiple candidate planes.
  • For each candidate plane, the number of points included within a preset range of the candidate plane is determined, to obtain multiple point counts; for example, the number of points less than 2 centimeters (cm) away from the candidate plane is determined to obtain the point count. Then, the candidate plane corresponding to the largest point count is determined as the target plane. That is, from the multiple candidate planes, the plane that covers the most point cloud data is selected as the target plane; for example, a plane that covers more than a certain number of point cloud data points is selected as the target plane, indicating that the target plane is the best plane among the multiple candidate planes.
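A hedged sketch of this selection rule, assuming the 2 cm inlier threshold from the example (helper names are illustrative):

```python
import math

def count_inliers(plane, points, threshold=0.02):
    """Count points within `threshold` (e.g. 2 cm) of plane a*x+b*y+c*z+d=0."""
    a, b, c, d = plane
    norm = math.sqrt(a * a + b * b + c * c)
    return sum(
        1 for (x, y, z) in points
        if abs(a * x + b * y + c * z + d) / norm <= threshold
    )

def select_target_plane(candidates, points, threshold=0.02):
    """Return the candidate plane covering the most points."""
    return max(candidates, key=lambda p: count_inliers(p, points, threshold))

points = [(0.0, 0.0, 2.0), (0.1, 0.2, 2.01), (0.3, 0.1, 1.99), (0.0, 0.0, 5.0)]
candidates = [(0, 0, 1, -2), (0, 0, 1, -5)]  # planes z = 2 and z = 5
print(select_target_plane(candidates, points))  # (0, 0, 1, -2)
```

The plane z = 2 covers three of the four points within 2 cm, so it wins the selection.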
  • Step S104 Detect the object to be detected according to the point cloud data covered by the target plane.
  • the point cloud data covered by the target plane is determined, and then the plane of the object to be detected is detected based on these data to determine the attributes of the object to be detected. For example, 3D imaging of the object to be detected is generated based on the target point cloud data.
  • In this way, multiple candidate planes are generated by using the point cloud data of the object to be detected, and the target plane is then selected from the multiple candidate planes; generating the 3D image of the object to be detected based on the point cloud data covered by the target plane greatly reduces invalid point cloud data, thereby reducing misjudgment of the object to be detected.
  • the step S102 can be implemented by the following steps, as shown in FIG. 2, which is a schematic diagram of another implementation process of the target detection method according to an exemplary embodiment of this application; based on FIG. 1, the following description is made:
  • Step S201 Perform noise reduction processing on the point cloud data to obtain noise reduction data.
  • The step S201 may reduce noise based on the characteristics of the noise data: for example, first set a position range and remove the points that fall outside the position range; or, for each point, calculate the average distance between the point and its nearby points (for example, its 30 nearest neighbors), and if the average distance exceeds a certain distance threshold (for example, greater than 5 standard deviations from the mean), the point can be judged as noise and removed.
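The two denoising strategies above can be sketched as follows (brute-force neighbor search; the 30-neighbor and 5-standard-deviation values follow the text, while the position range and helper names are assumptions):

```python
import math
import statistics

def remove_out_of_range(points, z_min=0.1, z_max=15.0):
    """Drop points outside a set position range, e.g. closer than 10 cm
    or farther than 15 m along the optical axis (z)."""
    return [p for p in points if z_min <= p[2] <= z_max]

def remove_statistical_outliers(points, k=30, std_ratio=5.0):
    """Drop points whose mean distance to their k nearest neighbors is
    more than `std_ratio` standard deviations above the overall mean."""
    mean_dists = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        mean_dists.append(sum(dists[:k]) / len(dists[:k]))
    mu = statistics.mean(mean_dists)
    sigma = statistics.pstdev(mean_dists)
    limit = mu + std_ratio * sigma
    return [p for p, m in zip(points, mean_dists) if m <= limit]

cloud = [(0.0, 0.0, 0.05), (0.1, 0.0, 1.0), (0.1, 0.1, 1.0), (0.0, 0.0, 20.0)]
print(remove_out_of_range(cloud))  # [(0.1, 0.0, 1.0), (0.1, 0.1, 1.0)]
```

A real implementation would use a spatial index (e.g. a k-d tree) for the neighbor search; the brute-force version here only illustrates the rule.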
  • Step S202 sampling the noise reduction data according to preset sampling conditions to obtain sample data.
  • the noise reduction data is divided into multiple sets of sample data; wherein, the number of data in each set of sample data is greater than or equal to a preset number.
  • the preset number is set to 3, and every three points in the noise reduction data are used as a set of sample data.
  • Step S203 Generate the multiple candidate planes according to the coordinate values of the sample data.
  • The step S203 may be that corresponding candidate planes are generated according to the coordinate values of each set of sample data in the multiple sets of sample data, to obtain the multiple candidate planes. For example, if there are a total of 10 sets of sample data, corresponding candidate planes are generated based on the coordinate values of each set, and 10 candidate planes are obtained. It may also be that, according to the coordinate values of each set of sample data in the multiple sets of sample data, candidate planes are generated until a number threshold is met; that is, when the number of candidate planes equals the number threshold, generation of candidate planes stops.
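The stop-at-threshold generation loop can be sketched as follows (a simplified RANSAC-style sampler built on the three-point fit described in the text; names and the grid test cloud are illustrative):

```python
import random

def fit_plane(p1, p2, p3):
    """Plane a*x + b*y + c*z + d = 0 through three points, or None if collinear."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    if n == (0, 0, 0):
        return None
    return (*n, -(n[0] * p1[0] + n[1] * p1[1] + n[2] * p1[2]))

def generate_candidate_planes(points, number_threshold=10, max_iterations=1000, seed=0):
    """Randomly sample groups of 3 points; stop once `number_threshold`
    candidate planes have been generated (or the iterations run out)."""
    rng = random.Random(seed)
    candidates = []
    for _ in range(max_iterations):
        if len(candidates) >= number_threshold:
            break  # generation stops when the count equals the threshold
        plane = fit_plane(*rng.sample(points, 3))
        if plane is not None:  # skip degenerate (collinear) samples
            candidates.append(plane)
    return candidates

# A synthetic, slightly tilted planar cloud for illustration.
cloud = [(x * 0.1, y * 0.1, 2.0 + 0.001 * x) for x in range(10) for y in range(10)]
planes = generate_candidate_planes(cloud, number_threshold=5)
print(len(planes))  # 5
```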
  • Method 1: the first step is to generate the corresponding sample plane based on the coordinate values of the i-th group of sample data.
  • i is an integer greater than or equal to 1.
  • the second step is to determine the first coordinate value of the intersection between the optical axis of the collection device and the sample plane.
  • For example, if the acquisition device is a camera, the intersection point between the optical axis of the camera and the sample plane is determined, and the coordinate value of the intersection point in the camera coordinate system is the first coordinate value.
  • the third step is to determine that the sample plane is a candidate plane if the first coordinate value meets a preset feasible region condition.
  • the preset feasible region condition may be set, in the coordinate system of the collection device, according to the calibration parameters (for example, the field angle) of the collection device. For example, the thresholds of the calibration parameters in the x, y, and z directions of the camera coordinate system are set as C, D, and 0 respectively; when each component of the first coordinate value in the camera coordinate system is greater than the corresponding calibration parameter threshold, the first coordinate value is considered to meet the preset feasible region condition, indicating that the sample plane obtained based on the first coordinate value is feasible, that is, effective, and can be used to detect the plane of the object to be detected.
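A minimal sketch of this check, modeling the optical axis as the camera-frame z-axis (note that the intersection of the optical axis with any plane then has x = y = 0, so only the z threshold of 0 is non-trivial; the z_max bound and the function names are assumptions):

```python
def optical_axis_intersection(plane):
    """Intersect the camera optical axis (the ray x=0, y=0, z=t, t>0 in
    camera coordinates) with the plane a*x + b*y + c*z + d = 0.
    Returns the intersection point, or None if the plane is parallel to
    the axis or lies behind the camera."""
    a, b, c, d = plane
    if c == 0:
        return None  # plane parallel to the optical axis
    t = -d / c
    if t <= 0:
        return None  # intersection behind the camera: z > 0 fails
    return (0.0, 0.0, t)

def in_feasible_region(point, z_max=10.0):
    """Illustrative feasible-region test: the intersection must lie in
    front of the camera and within a maximum optical-axis distance."""
    return point is not None and 0.0 < point[2] <= z_max

p = optical_axis_intersection((0, 0, 1, -2))   # plane z = 2
print(p, in_feasible_region(p))                # (0.0, 0.0, 2.0) True
print(in_feasible_region(optical_axis_intersection((0, 0, 1, 5))))  # False
```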
  • Method 2: the first step is to enclose the point cloud data covered by multiple sample planes to obtain multiple bounding boxes that meet a specific shape.
  • bounding box detection is performed on the point cloud data covered by multiple sample planes.
  • the specific shape may be a rectangular parallelepiped shape or a cube shape.
  • the second step is to determine the second coordinate value of the intersection of the optical axis of the collection device and the central axis of each bounding box to obtain a second coordinate value set.
  • For example, if the acquisition device is a camera and there are 10 bounding boxes, the intersection of the optical axis of the camera with the central axis of each bounding box is determined, and the 10 coordinate values in the camera coordinate system form the second coordinate value set.
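Since two lines in 3D space rarely intersect exactly, one way to realize this step is to take the point on the optical axis closest to the bounding box's central axis (a sketch under that assumption; names are illustrative):

```python
def optical_axis_box_axis_point(p, d2):
    """Point on the camera optical axis (the z-axis through the origin)
    closest to the bounding box's central axis, given as the line through
    point p with direction d2."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    d1 = (0.0, 0.0, 1.0)  # optical axis direction in camera coordinates
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    denom = a * c - b * b
    if denom == 0:
        return None  # central axis parallel to the optical axis
    # Closest-approach parameter along the optical axis (standard
    # two-line closest-point formula, with the axis through the origin).
    s = (c * dot(d1, p) - b * dot(d2, p)) / denom
    return (0.0, 0.0, s)

# A box whose vertical central axis passes through (0.5, 0, 2):
print(optical_axis_box_axis_point((0.5, 0.0, 2.0), (0.0, 1.0, 0.0)))  # (0.0, 0.0, 2.0)
```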
  • the third step is to select, from the second coordinate value set, a candidate bounding box corresponding to the second coordinate value that satisfies the preset feasible region condition.
  • the second coordinate value that satisfies the preset feasible region condition can be understood as follows: the components of the second coordinate value in the camera coordinate system are respectively greater than the thresholds of the corresponding calibration parameters, indicating that the plane covered by the candidate bounding box corresponding to the second coordinate value is feasible.
  • In the fourth step, if the attribute information of the candidate bounding box satisfies the corresponding preset condition, it is determined that the sample plane corresponding to the candidate bounding box is the candidate plane. For example, if the size of the candidate bounding box satisfies the size threshold (for example, the front area of the candidate bounding box is greater than 0.1 square meters and the aspect ratio is less than 10), and/or the center point coordinates of the candidate bounding box are within a preset measurement range (for example, within the set angle-of-view threshold and not beyond the effective measurement range of the time-of-flight (TOF) sensor), and/or the second coordinate value is within the preset measurement range (for example, within the set angle-of-view threshold and not beyond the effective measurement range of the TOF sensor), it is determined that the sample plane corresponding to the candidate bounding box is a candidate plane.
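A hedged sketch of these attribute checks (the 0.1 m² area and aspect-ratio-of-10 limits follow the text; the xy/z range bounds, and the use of a plain |x|, |y| bound in place of a true angle-of-view test, are simplifying assumptions):

```python
def bounding_box_is_candidate(width, height, center, second_coord,
                              min_front_area=0.1, max_aspect=10.0,
                              xy_max=0.5, z_max=5.0):
    """Check the bounding box attributes: front area above a size
    threshold, aspect ratio below a limit, and both the center point and
    the second coordinate value inside the measurement range."""
    if width * height <= min_front_area:
        return False  # front area too small (e.g. must exceed 0.1 m^2)
    if max(width, height) / min(width, height) >= max_aspect:
        return False  # aspect ratio must stay below 10
    for x, y, z in (center, second_coord):
        if abs(x) > xy_max or abs(y) > xy_max or not 0.0 < z <= z_max:
            return False  # outside the effective measurement range
    return True

print(bounding_box_is_candidate(0.8, 0.5, (0.1, 0.0, 2.0), (0.0, 0.0, 2.0)))   # True
print(bounding_box_is_candidate(0.8, 0.05, (0.1, 0.0, 2.0), (0.0, 0.0, 2.0)))  # False
```

The second call fails because its front area (0.8 × 0.05 = 0.04 m²) is below the size threshold.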
  • In this way, the noise data in the point cloud data is reduced, thereby improving the accuracy of detecting the object to be detected.
  • the TOF sensor has the characteristics of not being affected by changes in illumination and the texture of the object, and can also reduce costs on the premise of meeting the accuracy requirements.
  • The Random Sample Consensus (RANSAC) algorithm is often used for 3D simple object detection.
  • an exemplary embodiment of the present application provides a data processing method.
  • In the method, the prior knowledge that the detected target is a plane and the noise characteristics of the TOF sensor are used on the basis of the RANSAC algorithm.
  • FIG. 3 is a schematic diagram of another implementation process of the target detection method of an exemplary embodiment of the application. The steps shown in FIG. 3 are described below:
  • Step S301 Obtain the point cloud data output by the TOF sensor.
  • The raw TOF sensor data is initially filtered and transformed into three-dimensional coordinates in the camera coordinate system to generate three-dimensional point cloud data.
  • Step S302 Denoising the point cloud data according to the noise characteristics of the sensor.
  • the noise characteristics include: a large number of noise clusters near the camera (within 10 cm), and low reliability of long-distance point clouds (for example, 15 meters or more).
  • the denoising methods include: first setting a position range and removing the points that fall outside the position range; or, for each point, calculating the average distance between the point and its nearby points (for example, its 30 nearest neighbors), and if the average distance exceeds a certain distance threshold (for example, greater than 5 standard deviations from the mean), judging the point as noise and removing it.
  • Step S303 randomly sample 3 points from the point cloud data as a set of sample data.
  • Step S304 Determine the sample plane according to the coordinate values of the three points obtained by sampling.
  • Assuming the spatial coordinates of the three points are x1, x2, and x3, the candidate plane can be expressed as π(a, b, c, d), where the normal vector (a, b, c) can be computed as the cross product (x2 − x1) × (x3 − x1) and d = −(a, b, c) · x1, so that every point (x, y, z) on the plane satisfies ax + by + cz + d = 0.
  • Step S305 Perform feasible region detection on the sample plane, and if the detection result does not meet the preset condition, return to step S303.
  • the feasible region detection on the sample plane includes the following two methods:
  • Method 1: the intersection point x_p(x, y, z) of the optical axis of the camera and the sample plane is judged according to the set feasible region conditions, for example: |x| ≤ A, |y| ≤ B, and Z > 0, where A and B are the angle-of-view thresholds in the x and y directions respectively, obtained from the calibration parameters (field angle) of the camera. If the intersection point x_p(x, y, z) satisfies all of the above conditions, the sample plane is determined to be feasible, that is, the judgment succeeds, and step S306 is entered; otherwise, the judgment fails and random sampling is performed again, that is, the process returns to step S303.
  • the sample plane corresponding to the dashed line 401 is a plane that does not meet the feasible region conditions
  • the sample plane corresponding to the dashed line 402 is a plane that meets the feasible region conditions.
  • the points under the sample plane can be used to generate the 3D image of the object to be detected.
  • the points covered under the sample plane corresponding to the dashed line 401 are ignored.
  • the feasible region detection of the sample plane includes two stages. In stage 411, the intersection of the sample plane and the optical axis of the camera is determined; then, in stage 412, it is determined whether the intersection meets the feasible region condition, and the final detection result is obtained, that is, judgment success 414 or judgment failure 413, where judgment success 414 indicates that the sample plane can be used as a candidate plane, and judgment failure 413 indicates that the sample plane cannot be used as a candidate plane;
  • the feasible region of this plane is detected, and a candidate plane that meets the conditions of the feasible region is obtained.
  • Method 2: during feasible region detection, a judgment based on the size and position of the axis-aligned bounding box of the target surface is added, thereby further reducing misjudgment of the object to be detected. As shown in FIG. 5, the process is as follows:
  • Step S501 Screen out the point cloud data covered by the sample plane, for subsequent bounding box detection.
  • Step S502 Denoise the point cloud data, so that the object to be detected is limited to the center of the camera acquisition range.
  • Step S503 Determine the 3D size of the bounding box and the axis direction of the central axis of the bounding box.
  • Step S504 If the second coordinate value of the intersection of the optical axis of the camera and the central axis of the bounding box satisfies the preset feasible region condition, it is determined that the bounding box is a candidate bounding box.
  • If the second coordinate value does not pass the detection, return to step S501 and perform steps S501 to S504 again.
  • Step S505 Determine whether the size of the candidate bounding box meets the size threshold.
  • If the size of the candidate bounding box meets the size threshold, go to step S506; otherwise, go back to step S501.
  • the front area of the candidate bounding box is greater than 0.1 square meters, and the aspect ratio is less than 10.
  • Step S506 Determine whether the center point coordinates of the candidate bounding box are within a preset measurement range.
  • If the center point coordinates of the candidate bounding box are within the preset measurement range, go to step S507; otherwise, go back to step S501. For example, it is judged whether the coordinates of the center point of the candidate bounding box are within the set picture angle threshold and do not exceed the effective measurement range of the TOF sensor.
  • That is, the feasible region is limited to the vicinity of the center of the picture, and a threshold judgment is performed.
  • Here, A_x-max and A_y-max are the maximum feasible picture angle thresholds in the x and y directions (for example, two-thirds of the camera's horizontal and vertical picture angles), and Z_max is the maximum optical axis distance threshold (for example, three-quarters of the effective measurement range of the TOF sensor).
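These thresholds can be sketched as follows (the two-thirds-of-picture-angle and three-quarters-of-range fractions follow the text; the FOV and TOF range values, and the function name, are assumptions):

```python
import math

def center_in_feasible_region(center, hfov_deg=60.0, vfov_deg=45.0, tof_range=4.0):
    """Check a bounding box center point against A_x-max / A_y-max (2/3 of
    the camera's horizontal and vertical picture angles) and Z_max (3/4 of
    the TOF effective measurement range)."""
    x, y, z = center
    if z <= 0:
        return False
    ax_max = math.radians(hfov_deg) * 2 / 3 / 2  # half-angle threshold in x
    ay_max = math.radians(vfov_deg) * 2 / 3 / 2  # half-angle threshold in y
    z_max = tof_range * 3 / 4
    # Angle of the center point off the optical axis, per direction.
    return (abs(math.atan2(x, z)) <= ax_max
            and abs(math.atan2(y, z)) <= ay_max
            and z <= z_max)

print(center_in_feasible_region((0.1, 0.1, 2.0)))  # True
print(center_in_feasible_region((0.0, 0.0, 3.5)))  # False: beyond 3/4 of the range
```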
  • If the effective measurement range of the TOF sensor 603 is set between the dashed lines 61 and 62, a bounding box whose center point coordinate falls within this range is an effective bounding box, that is, a candidate bounding box; otherwise, it is an invalid bounding box.
  • the coordinate of the center point of the bounding box 601 falls within the set angle-of-view threshold, indicating that the plane corresponding to the bounding box 601 is valid and can be used in the detection process of the object to be detected; in contrast, the coordinate of the center point of the bounding box 602 does not fall within the set picture angle threshold, indicating that the plane corresponding to the bounding box 602 is invalid (for example, it may contain more noise) and cannot be used in the detection process of the object to be detected.
  • Step S507 Determine whether the second coordinate value is within the preset measurement range.
  • Otherwise, the process returns to step S501.
  • the second coordinate value falls within the set picture angle threshold and does not exceed the effective measurement range of the TOF sensor.
  • Step S306 Count the number of points contained in the point cloud of the range covered by the candidate plane.
  • For example, count the points that are less than 2 cm away from the candidate plane.
  • Step S307 When the number of iterations reaches the threshold of the number of iterations, the iteration is terminated, and multiple candidate planes are obtained.
  • Step S308 Determine the candidate plane that covers the most point cloud as the target plane, and output it as the optimal parameter.
  • Step S309 Detect the object to be detected according to the point cloud data covered by the target plane.
  • In this way, the prior knowledge that the object to be detected contains a plane and the noise characteristics of the TOF sensor are used to add feasible region detection based on the intersection of the candidate plane and the optical axis, thereby greatly reducing misjudgment of the object to be detected.
  • An exemplary embodiment of the present application provides a target detection device.
  • the device includes each module and each unit included in each module, which can be implemented by a processor in a computer device; of course, they can also be implemented by specific logic circuits. In the implementation process, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or the like.
  • FIG. 7 is a schematic diagram of the composition structure of a target detection apparatus according to an exemplary embodiment of the application.
  • As shown in FIG. 7, the apparatus 70 includes: a first acquiring module 71, a first generating module 72, a first determining module 73, and a first detection module 74, wherein:
  • the first obtaining module 71 is configured to obtain point cloud data of the object to be detected
  • the first generating module 72 is configured to generate multiple candidate planes based on the coordinate values of the point cloud data
  • the first determining module 73 is configured to determine a target plane that meets a preset condition from the multiple candidate planes;
  • the first detection module 74 is configured to detect the object to be detected according to the point cloud data covered by the target plane.
  • the first generating module 72 includes:
  • the first noise reduction sub-module is configured to perform noise reduction processing on the point cloud data to obtain noise reduction data
  • the first sampling sub-module is used to sample the noise reduction data according to preset sampling conditions to obtain sample data;
  • the first generation sub-module is configured to generate the multiple candidate planes according to the coordinate values of the sample data.
  • the first sampling sub-module includes:
  • the first dividing unit is configured to divide the noise reduction data into multiple groups of sample data; wherein the number of data in each group of sample data is greater than or equal to a preset number.
  • the first generating module 72 includes:
  • the first generating sub-module is configured to generate corresponding candidate planes according to the coordinate values of each set of sample data in the multiple sets of sample data, to obtain the multiple candidate planes.
  • the first generating module 72 includes:
  • the second generation sub-module is configured to generate candidate planes that meet the number threshold according to the coordinate values of each set of sample data in the multiple sets of sample data.
  • the first generating submodule includes:
  • the first generating unit is configured to generate corresponding sample planes according to the coordinate values of the i-th group of sample data; where i is an integer greater than or equal to 1;
  • a first determining unit configured to determine the first coordinate value of the intersection between the optical axis of the collection device and the sample plane
  • the second determining unit is configured to determine that the sample plane is a candidate plane if the first coordinate value meets a preset feasible region condition.
  • the first generating submodule includes:
  • the first enclosing unit is used to enclose the point cloud data covered by multiple sample planes to obtain multiple bounding boxes of a specific shape;
  • the third determining unit is configured to determine the second coordinate value of the intersection of the optical axis of the collection device and the central axis of each bounding box to obtain a second coordinate value set;
  • a first selection unit configured to select, from the second coordinate value set, a candidate bounding box corresponding to a second coordinate value that satisfies the preset feasible region condition
  • the fourth determining unit is configured to determine that the sample plane corresponding to the candidate bounding box is a candidate plane if the attribute information of the candidate bounding box meets a corresponding preset condition.
  • the fourth determining unit includes:
  • the first determining subunit is configured to determine that the sample plane corresponding to the candidate bounding box is a candidate plane if the size of the candidate bounding box meets the size threshold, and/or the center point coordinates of the candidate bounding box are within a preset measurement range, and/or the second coordinate value is within the preset measurement range.
  • the first determining module 73 includes:
  • the first sub-determination module is used to determine the number of points included in the preset range of each candidate plane, to obtain multiple point counts;
  • the second sub-determination module is used to determine the candidate plane corresponding to the largest point count as the target plane.
  • if the above-mentioned target detection method is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • the technical solution of an exemplary embodiment of the present application, in essence or the part contributing to the related art, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc.
  • an exemplary embodiment of the present application is not limited to any specific combination of hardware and software.
  • FIG. 8 is a schematic diagram of a device hardware entity according to an exemplary embodiment of the present application. As shown in FIG. 8, an exemplary embodiment of the present application provides a device 800, including:
  • the storage medium 82 performs operations via the processor 81 through the communication bus 83.
  • when the instructions are executed by the processor 81, the notification method described in the first embodiment above is performed.
  • the various components in the device are coupled together through the communication bus 83.
  • the communication bus 83 is used to implement connection and communication between these components.
  • in addition to a data bus, the communication bus 83 also includes a power bus, a control bus, and a status signal bus.
  • various buses are marked as the communication bus 83 in FIG. 8.
  • the device is usually a mobile device with a front dual camera or a rear dual camera function, and the mobile device may be implemented in various forms.
  • the mobile device described in an exemplary embodiment of the present application may include a mobile phone, a tablet computer, a palmtop computer, a personal digital assistant (Personal Digital Assistant, PDA), and so on.
  • an exemplary embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the target detection method provided in the foregoing embodiments are implemented.
  • "one embodiment" or "an embodiment" mentioned throughout the specification means that a specific feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present application. Therefore, the appearances of "in one embodiment" or "in an embodiment" in various places throughout the specification do not necessarily refer to the same embodiment. In addition, these specific features, structures, or characteristics can be combined in one or more embodiments in any suitable manner. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of an exemplary embodiment of the present application. The serial numbers of the exemplary embodiments of the present application described above are for description only and do not represent the superiority or inferiority of the embodiments.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined, or can be integrated into another system, or some features can be ignored or not implemented.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of an exemplary embodiment of the present application.
  • the functional units in the embodiments of the present application can all be integrated into one processing unit, or each unit can individually serve as a unit, or two or more units can be integrated into one unit;
  • the integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the foregoing program can be stored in a computer readable storage medium.
  • when executed, the program performs the steps of the foregoing method embodiment; and the foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disc.
  • the above-mentioned integrated unit of the present application is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer readable storage medium.
  • an exemplary embodiment of the present application can be embodied in the form of a software product in essence or a part that contributes to related technologies.
  • the computer software product is stored in a storage medium and includes several instructions for enabling the device to execute all or part of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: removable storage devices, ROMs, magnetic disks or optical discs and other media that can store program codes.
  • the target detection method in the embodiments of the present application is applied to a device with a shooting function, and includes: acquiring point cloud data of an object to be detected; generating multiple candidate planes based on the coordinate values of the point cloud data; determining, from the multiple candidate planes, a target plane that meets a preset condition; and detecting the object to be detected according to the point cloud data covered by the target plane.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

An exemplary embodiment of the present application discloses a target detection method applied to a device, including: acquiring point cloud data of an object to be detected; generating multiple candidate planes based on the coordinate values of the point cloud data; determining, from the multiple candidate planes, a target plane that meets a preset condition; and detecting the object to be detected according to the point cloud data covered by the target plane. An exemplary embodiment of the present application also provides a device and a computer storage medium.

Description

Target detection method and apparatus, device, and storage medium — Technical Field
This application relates to the technical field of plane detection, and relates to, but is not limited to, a target detection method and apparatus, a device, and a storage medium.
Background
In the related art, plane detection with a three-dimensional imaging sensor can, to a certain extent, screen out a large number of out-of-plane points. However, when the input point cloud data contains substantial noise, a large number of invalid detections tend to occur, leading to misjudgment of the three-dimensional object that is the detection target.
Summary
In view of this, an exemplary embodiment of the present application provides a target detection method and apparatus, a device, and a storage medium, in order to solve at least one problem existing in the related art.
The technical solution of an exemplary embodiment of the present application is implemented as follows:
An exemplary embodiment of the present application provides a target detection method, including:
acquiring point cloud data of an object to be detected;
generating multiple candidate planes based on coordinate values of the point cloud data;
determining, from the multiple candidate planes, a target plane that meets a preset condition; and
detecting the object to be detected according to the point cloud data covered by the target plane.
In the above method, the generating multiple candidate planes based on the coordinate values of the point cloud data includes:
performing noise reduction processing on the point cloud data to obtain noise-reduced data;
sampling the noise-reduced data according to a preset sampling condition to obtain sample data; and
generating the multiple candidate planes according to coordinate values of the sample data.
In the above method, the sampling the noise-reduced data according to a preset sampling condition to obtain sample data includes:
dividing the noise-reduced data into multiple groups of sample data, where the number of data points in each group of sample data is greater than or equal to a preset number.
In the above method, the generating the multiple candidate planes according to the coordinate values of the sample data includes:
generating a corresponding candidate plane according to the coordinate values of each group of sample data among the multiple groups of sample data, to obtain the multiple candidate planes.
In the above method, the generating the multiple candidate planes according to the coordinate values of the sample data includes:
generating candidate planes whose number meets a quantity threshold according to the coordinate values of each group of sample data among the multiple groups of sample data.
In the above method, the generating a corresponding candidate plane according to the coordinate values of each group of sample data among the multiple groups of sample data, to obtain the multiple candidate planes, includes:
generating a corresponding sample plane according to the coordinate values of the i-th group of sample data, where i is an integer greater than or equal to 1;
determining a first coordinate value of the intersection between the optical axis of the collection device and the sample plane; and
if the first coordinate value meets a preset feasible-region condition, determining that the sample plane is a candidate plane.
In the above method, the generating a corresponding candidate plane according to the coordinate values of each group of sample data among the multiple groups of sample data, to obtain the multiple candidate planes, includes:
enclosing the point cloud data covered by multiple sample planes, to obtain multiple bounding boxes of a specific shape;
determining a second coordinate value of the intersection between the optical axis of the collection device and the central axis of each bounding box, to obtain a set of second coordinate values;
selecting, from the set of second coordinate values, a candidate bounding box corresponding to a second coordinate value that meets the preset feasible-region condition; and
if attribute information of the candidate bounding box meets a corresponding preset condition, determining that the sample plane corresponding to the candidate bounding box is a candidate plane.
In the above method, the determining that the sample plane corresponding to the candidate bounding box is a candidate plane if the attribute information of the candidate bounding box meets the corresponding preset condition includes:
if the size of the candidate bounding box meets a size threshold, and/or the coordinates of the center point of the candidate bounding box are within a preset measurement range, and/or the second coordinate value is within the preset measurement range, determining that the sample plane corresponding to the candidate bounding box is a candidate plane.
In the above method, the determining, from the multiple candidate planes, a target plane that meets a preset condition includes:
determining the number of points contained within a preset range of each candidate plane, to obtain multiple point counts; and
determining the candidate plane corresponding to the largest point count as the target plane.
An exemplary embodiment of the present application provides a target detection apparatus, including a first acquisition module, a first generation module, a first determination module, and a first detection module, where:
the first acquisition module is configured to acquire point cloud data of an object to be detected;
the first generation module is configured to generate multiple candidate planes based on coordinate values of the point cloud data;
the first determination module is configured to determine, from the multiple candidate planes, a target plane that meets a preset condition; and
the first detection module is configured to detect the object to be detected according to the point cloud data covered by the target plane.
In the above apparatus, the first generation module includes:
a first noise reduction sub-module, configured to perform noise reduction processing on the point cloud data to obtain noise-reduced data;
a first sampling sub-module, configured to sample the noise-reduced data according to a preset sampling condition to obtain sample data; and
a first generation sub-module, configured to generate the multiple candidate planes according to coordinate values of the sample data.
In the above apparatus, the first sampling sub-module includes:
a first dividing unit, configured to divide the noise-reduced data into multiple groups of sample data, where the number of data points in each group of sample data is greater than or equal to a preset number.
In the above apparatus, the first generation module includes:
a first generation sub-module, configured to generate a corresponding candidate plane according to the coordinate values of each group of sample data among the multiple groups of sample data, to obtain the multiple candidate planes.
In the above apparatus, the first generation module includes:
a second generation sub-module, configured to generate candidate planes whose number meets a quantity threshold according to the coordinate values of each group of sample data among the multiple groups of sample data.
In the above apparatus, the first generation sub-module includes:
a first generation unit, configured to generate a corresponding sample plane according to the coordinate values of the i-th group of sample data, where i is an integer greater than or equal to 1;
a first determination unit, configured to determine a first coordinate value of the intersection between the optical axis of the collection device and the sample plane; and
a second determination unit, configured to determine that the sample plane is a candidate plane if the first coordinate value meets a preset feasible-region condition.
In the above apparatus, the first generation sub-module includes:
a first enclosing unit, configured to enclose the point cloud data covered by multiple sample planes to obtain multiple bounding boxes of a specific shape;
a third determination unit, configured to determine a second coordinate value of the intersection between the optical axis of the collection device and the central axis of each bounding box, to obtain a set of second coordinate values;
a first selection unit, configured to select, from the set of second coordinate values, a candidate bounding box corresponding to a second coordinate value that meets the preset feasible-region condition; and
a fourth determination unit, configured to determine that the sample plane corresponding to the candidate bounding box is a candidate plane if attribute information of the candidate bounding box meets a corresponding preset condition.
In the above apparatus, the fourth determination unit includes:
a first determination sub-unit, configured to determine that the sample plane corresponding to the candidate bounding box is a candidate plane if the size of the candidate bounding box meets a size threshold, and/or the coordinates of the center point of the candidate bounding box are within a preset measurement range, and/or the second coordinate value is within the preset measurement range.
In the above apparatus, the first determination module includes:
a first sub-determination module, configured to determine the number of points contained within a preset range of each candidate plane, to obtain multiple point counts; and
a second sub-determination module, configured to determine the candidate plane corresponding to the largest point count as the target plane.
An exemplary embodiment of the present application provides a target detection device, including a memory and a processor, where the memory stores a computer program runnable on the processor, and the processor, when executing the program, implements the steps in the above target detection method.
An exemplary embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the above target detection method.
An exemplary embodiment of the present application provides a target detection method and apparatus, a device, and a storage medium, in which multiple candidate planes are generated using the point cloud data of an object to be detected, and a target plane is then selected from the multiple candidate planes; in this way, the object to be detected is detected based on the point cloud data covered by the target plane, which greatly reduces invalid point cloud data and thereby improves the accuracy of detecting the object to be detected.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an implementation of a target detection method according to an exemplary embodiment of the present application;
FIG. 2 is a schematic flowchart of a further implementation of a target detection method according to an exemplary embodiment of the present application;
FIG. 3 is a schematic flowchart of another implementation of a target detection method according to an exemplary embodiment of the present application;
FIG. 4 is a diagram of an application scenario of a target detection method according to an exemplary embodiment of the present application;
FIG. 5 is a schematic flowchart of another implementation of a target detection method according to an exemplary embodiment of the present application;
FIG. 6 is a diagram of another application scenario of a target detection method according to an exemplary embodiment of the present application;
FIG. 7 is a schematic diagram of the composition and structure of a target detection apparatus according to an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of a device hardware entity according to an exemplary embodiment of the present application.
Detailed Description
The technical solutions in the exemplary embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings of the exemplary embodiments.
An exemplary embodiment of the present application provides a target detection method applied to a mobile device with a front-facing or rear-facing camera function, and the mobile device may be implemented in various forms. For example, the mobile device described in an exemplary embodiment of the present application may include a mobile phone, a tablet computer, a palmtop computer, a personal digital assistant (PDA), and so on. In addition, the functions implemented by the method may be implemented by a processor in the mobile device calling program code, and the program code may of course be stored in a computer storage medium; it can be seen that the mobile device includes at least a processor and a storage medium.
FIG. 1 is a schematic flowchart of an implementation of a target detection method according to an exemplary embodiment of the present application. As shown in FIG. 1, the method is described below with reference to FIG. 1:
Step S101: acquire point cloud data of an object to be detected.
Here, the object to be detected may be any three-dimensional (3D) object, such as a table, a house, or an animal. The point cloud data can be understood as a large number of points collected from the object to be detected.
Step S102: generate multiple candidate planes based on coordinate values of the point cloud data.
Here, first, the coordinate values of the point cloud data in the coordinate system of the collection device that collects the point cloud data are determined; then, multiple candidate planes are generated based on these coordinate values. For example, if the collection device is a camera, the coordinate values of the point cloud data in the camera coordinate system are determined, and a candidate plane is generated from the coordinate values of at least three points each time, thereby obtaining multiple candidate planes.
Step S103: determine, from the multiple candidate planes, a target plane that meets a preset condition.
Here, first, the number of points contained within a preset range of each candidate plane is determined, giving multiple point counts; for example, the number of points whose distance to a candidate plane is less than 2 centimeters (cm) is counted. Then, the candidate plane corresponding to the largest point count is determined as the target plane. From the multiple candidate planes, a plane covering more point cloud data is selected as the target plane; for example, a plane covering more than a certain number of points is selected as the target plane, indicating that the target plane is a better plane among the multiple candidate planes.
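Selecting the target plane in step S103 amounts to an argmax over the candidate planes by their point counts; a minimal sketch (the function name is illustrative, not from the patent):

```python
def pick_target_plane(candidates, counts):
    """Return the candidate plane whose point count is largest, i.e. the
    plane covering the most point cloud data (the preset condition of
    step S103)."""
    if not candidates or len(candidates) != len(counts):
        raise ValueError("need one count per candidate plane")
    best = max(range(len(candidates)), key=counts.__getitem__)
    return candidates[best]
```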
Step S104: detect the object to be detected according to the point cloud data covered by the target plane.
Here, the point cloud data covered by the target plane is determined, and then the plane of the object to be detected is detected based on these data to determine the attributes of the object to be detected, for example, generating 3D imaging of the object to be detected based on the target point cloud data.
In an exemplary embodiment of the present application, multiple candidate planes are generated using the point cloud data of the object to be detected, and a target plane is then selected from these candidate planes; in this way, a 3D image of the object to be detected is generated based on the point cloud data covered by the target plane, which greatly reduces invalid point cloud data and thereby reduces misjudgment of the object to be detected.
In an exemplary embodiment, in order to reduce the noise data in the point cloud data and improve the accuracy of detecting the object to be detected, step S102 may be implemented through the following steps, as shown in FIG. 2, which is a schematic flowchart of a further implementation of a target detection method according to an exemplary embodiment of the present application; based on FIG. 1, the description is as follows:
Step S201: perform noise reduction processing on the point cloud data to obtain noise-reduced data.
Here, step S201 may perform noise reduction based on the characteristics of the noise data. For example, a position range is first set, and points falling outside this position range are removed; alternatively, for each point, the average distance between the point and nearby points (for example, 30 points) is computed, and if the average distance exceeds a certain distance threshold (for example, more than 5 standard deviations), the point can be judged as noise and removed.
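The two denoising heuristics just described (a trusted distance band, then a nearest-neighbour statistical outlier test) can be sketched in Python as follows. The function name, the `zone` band, and the default parameters are illustrative assumptions; the text only gives 30 neighbours and 5 standard deviations as examples:

```python
import math

def denoise(points, zone=(0.1, 15.0), k=30, std_factor=5.0):
    """Remove noisy points from a 3-D point cloud (a sketch of the two
    heuristics described above; names and defaults are illustrative).

    1. Drop points whose distance from the camera origin falls outside
       `zone` (very near points cluster as noise, far ones are unreliable).
    2. For each remaining point, compute the mean distance to its k
       nearest neighbours; a point whose mean exceeds the global mean by
       more than `std_factor` standard deviations is treated as noise.
    """
    # Range filter: keep points within the trusted distance band.
    kept = [p for p in points
            if zone[0] <= math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) <= zone[1]]
    if len(kept) <= k:
        return kept  # too few points for a meaningful neighbour statistic
    # Mean distance to the k nearest neighbours of every point.
    means = []
    for p in kept:
        ds = sorted(math.dist(p, q) for q in kept if q is not p)[:k]
        means.append(sum(ds) / k)
    mu = sum(means) / len(means)
    sigma = math.sqrt(sum((m - mu) ** 2 for m in means) / len(means))
    return [p for p, m in zip(kept, means) if m <= mu + std_factor * sigma]
```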
Step S202: sample the noise-reduced data according to a preset sampling condition to obtain sample data.
Here, the noise-reduced data is divided into multiple groups of sample data, where the number of data points in each group of sample data is greater than or equal to a preset number. For example, if the preset number is set to 3, every three points of the noise-reduced data form one group of sample data.
Step S203: generate the multiple candidate planes according to the coordinate values of the sample data.
Here, step S203 may generate a corresponding candidate plane from the coordinate values of each group of sample data among the multiple groups, to obtain the multiple candidate planes. For example, if there are 10 groups of sample data in total, a corresponding candidate plane is generated from the coordinate values of each group, giving 10 candidate planes. Alternatively, candidate planes whose number meets a quantity threshold may be generated from the coordinate values of each group of sample data; that is, when the number of candidate planes equals the quantity threshold, generation of candidate planes stops.
The above process of obtaining candidate planes can be implemented in the following two ways:
Way one. First, generate a corresponding sample plane according to the coordinate values of the i-th group of sample data.
Here, i is an integer greater than or equal to 1. For example, the i-th group of sample data contains three points, and the parameters that define the plane, namely a, b, c, and d in the plane equation ax + by + cz + d = 0, are determined from the coordinate values of these three points. A corresponding sample plane is generated based on the obtained parameters.
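Fitting the plane parameters from three sampled points is a cross-product computation; a minimal sketch (the function name is illustrative):

```python
def plane_from_points(p1, p2, p3):
    """Fit the plane a*x + b*y + c*z + d = 0 through three 3-D points.

    The normal (a, b, c) is the cross product of the two edge vectors
    (p2 - p1) and (p3 - p1); d then follows from requiring p1 to lie on
    the plane.  Returns None for (near-)collinear points, which span no
    unique plane.
    """
    u = tuple(p2[i] - p1[i] for i in range(3))
    v = tuple(p3[i] - p1[i] for i in range(3))
    a = u[1] * v[2] - u[2] * v[1]
    b = u[2] * v[0] - u[0] * v[2]
    c = u[0] * v[1] - u[1] * v[0]
    if abs(a) + abs(b) + abs(c) < 1e-12:
        return None  # degenerate sample: the three points are collinear
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return (a, b, c, d)
```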
Second, determine the first coordinate value of the intersection between the optical axis of the collection device and the sample plane.
Here, for example, if the collection device is a camera, the intersection between the camera's optical axis and the sample plane is determined, and the coordinate value of this intersection in the camera coordinate system, i.e., the first coordinate value, is obtained.
Third, if the first coordinate value meets a preset feasible-region condition, determine that the sample plane is a candidate plane.
Here, the preset feasible-region condition may be set according to the calibration parameters of the collection device in its own coordinate system (for example, the field angle). For example, if the collection device is a camera, thresholds C, D, and 0 are set for the calibration parameters in the x, y, and z directions in the camera coordinate system; when the coordinates of the first coordinate value in the camera coordinate system each exceed the corresponding calibration-parameter threshold, the first coordinate value is considered to meet the preset feasible-region condition, indicating that the sample plane obtained from the first coordinate value is feasible, i.e., valid, and can be used to detect the plane of the object to be detected.
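A Python sketch of this check: intersect the optical axis with the sample plane, then test the hit point against per-axis bounds. The function names and the default thresholds in `in_feasible_region` are illustrative assumptions; the patent only says the thresholds come from the camera's calibration parameters (field angle):

```python
def optical_axis_hit(plane, l0=(0.0, 0.0, 0.0), l=(0.0, 0.0, 1.0)):
    """Intersect the camera's optical axis (ray p = l0 + t*l) with the
    plane a*x + b*y + c*z + d = 0.  Returns the hit point, or None when
    the axis is parallel to the plane."""
    a, b, c, d = plane
    denom = a * l[0] + b * l[1] + c * l[2]
    if abs(denom) < 1e-12:
        return None  # optical axis never crosses this plane
    t = -(a * l0[0] + b * l0[1] + c * l0[2] + d) / denom
    return tuple(l0[i] + t * l[i] for i in range(3))

def in_feasible_region(pt, ax=0.6, ay=0.45, z_max=6.0):
    """Hedged sketch of the feasible-region test: the intersection must
    lie in front of the camera, within per-axis view-angle bounds
    (|x/z| bounded by the half-view-angle slope) and within a trusted
    depth range.  ax, ay, z_max are illustrative, not patent values."""
    x, y, z = pt
    return z > 0 and z <= z_max and abs(x) <= ax * z and abs(y) <= ay * z
```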
Way two. First, enclose the point cloud data covered by multiple sample planes, to obtain multiple bounding boxes of a specific shape.
Here, bounding-box detection is performed on the point cloud data covered by the multiple sample planes. The specific shape may be a cuboid, a cube, or a similar shape.
Second, determine the second coordinate value of the intersection between the optical axis of the collection device and the central axis of each bounding box, to obtain a set of second coordinate values.
Here, if the collection device is a camera and there are 10 bounding boxes, the coordinate value in the camera coordinate system of the intersection between the camera's optical axis and the central axis of each bounding box is determined, giving the set of second coordinate values.
Third, select, from the set of second coordinate values, a candidate bounding box corresponding to a second coordinate value that meets the preset feasible-region condition.
Here, a second coordinate value that meets the preset feasible-region condition can be understood as one whose coordinates in the camera coordinate system each exceed the corresponding calibration-parameter threshold, indicating that the plane covered by the candidate bounding box corresponding to this second coordinate value is feasible.
Fourth, if the attribute information of the candidate bounding box meets a corresponding preset condition, determine that the sample plane corresponding to the candidate bounding box is a candidate plane.
Here, if the size of the candidate bounding box meets a size threshold (for example, the frontal area of the candidate bounding box is greater than 0.1 square meters and the aspect ratio is less than 10), and/or the coordinates of the center point of the candidate bounding box are within a preset measurement range (for example, the center-point coordinates of the candidate bounding box are within a set view-angle threshold and do not exceed the effective measurement range of the time-of-flight (TOF) sensor), and/or the second coordinate value is within the preset measurement range (for example, the second coordinate value is within the set view-angle threshold and does not exceed the effective measurement range of the TOF sensor), the sample plane corresponding to the candidate bounding box is determined to be a candidate plane.
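The bounding-box attribute checks can be sketched as one predicate. The 0.1 m² area and 10:1 aspect-ratio figures come from the example above, while the remaining thresholds and all names are illustrative assumptions:

```python
def bbox_is_candidate(size, center,
                      min_area=0.1, max_aspect=10.0,
                      x_max=1.5, y_max=1.5, z_max=6.0):
    """Sketch of the bounding-box filters described above.

    size   -- (width, height, depth) of the axis-aligned box, in metres
    center -- (x, y, z) of the box centre in the camera frame
    """
    w, h, _ = size
    # 1. Size check: frontal area large enough, aspect ratio not extreme.
    if w * h <= min_area or max(w, h) / max(min(w, h), 1e-9) >= max_aspect:
        return False
    # 2. Centre must sit inside the trusted measurement volume
    #    (view-angle bounds in x/y, sensor range in z).
    x, y, z = center
    if not (abs(x) <= x_max and abs(y) <= y_max and 0 < z <= z_max):
        return False
    return True
```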
In an exemplary embodiment of the present application, by performing feasible-region detection on the planes generated from the point cloud data during plane detection of the object to be detected, the noise data in the point cloud data is reduced, and the accuracy of detecting the object to be detected is greatly improved.
In the related art, among 3D imaging sensors, the TOF sensor is not affected by illumination changes or object texture and can reduce cost while meeting accuracy requirements. Many applications can be realized by using TOF data to detect objects such as 3D planes. The Random Sample Consensus (RANSAC) algorithm is often used for detecting simple 3D objects. However, the raw data of a TOF sensor still contains a great deal of noise, and a large number of noisy points appear when computing the 3D point cloud. Thus, during 3D imaging of an object, a large number of out-of-plane points can be screened out to a certain extent; however, when the input point cloud data contains substantial noise, a large number of invalid detections tend to occur.
To solve the above problem, an exemplary embodiment of the present application provides a data processing method that, for the specific case of plane detection, uses the prior knowledge that the detection target is a plane as well as the noise characteristics of TOF, and adds, on top of the RANSAC algorithm, feasible-region detection based on candidate plane-optical axis intersection detection, thereby greatly reducing misjudgments. The process is shown in FIG. 3, a schematic flowchart of another implementation of a target detection method according to an exemplary embodiment of the present application; the description follows the steps shown in FIG. 3:
Step S301: acquire the point cloud data output by the TOF sensor.
For example, preliminary filtering is performed on the TOF sensor data, which is then transformed into three-dimensional coordinates in the camera frame to generate 3D point cloud data.
Step S302: denoise the point cloud data according to the noise characteristics of the sensor.
Here, the noise characteristics include: a large number of points clustering near the camera (within a 10 cm range), and low reliability of distant points (for example, beyond 15 meters). Denoising methods include: first setting a position range and removing points falling outside that range; or, for each point, computing the average distance between the point and nearby points (for example, 30 points), and if the average distance exceeds a certain distance threshold (for example, more than 5 standard deviations), judging the point as noise and removing it.
Step S303: randomly sample 3 points from the point cloud data as a group of sample data.
For example, 3 points are randomly sampled from the point cloud data that has not yet been sampled, and the sampled points are marked as sampled.
Step S304: determine a sample plane from the coordinate values of the three sampled points.
Here, for example, if the spatial coordinates of the three points are x1, x2, and x3, the candidate plane can be expressed as π(a, b, c, d). (The defining equation appears as an image in the original; consistently with a plane through three points, the normal (a, b, c) can be taken as the cross product (x2 − x1) × (x3 − x1), with d = −(a, b, c) · x1.)
Step S305: perform feasible-region detection on the sample plane; if the detection result does not meet the preset condition, return to step S303.
Here, feasible-region detection on the sample plane includes the following two ways:
Way one: perform intersection detection between the sample plane (P − P0) · n = 0 and the camera optical axis p = dl + l0, where the intersection x_p(x, y, z) is given by an equation shown as an image in the original; for the ray and plane above, it is consistent with d = ((P0 − l0) · n) / (l · n) and x_p = l0 + dl.
The process is as follows:
The intersection x_p(x, y, z) is judged against the set feasible-region conditions, for example the per-axis conditions shown as an image in the original, together with Z > 0, where A and B are the view-angle thresholds in the x and y directions, i.e., the field angles obtained from the camera's calibration parameters.
If the intersection x_p(x, y, z) meets all the above conditions, the sample plane is determined to be feasible, i.e., the judgment succeeds, and the process proceeds to step S306. Otherwise, the judgment fails and random sampling is performed again, i.e., the process returns to step S303.
As shown in FIG. 4, in FIG. 4(a), the sample plane corresponding to dashed line 401 does not meet the feasible-region condition, and the sample plane corresponding to dashed line 402 does; the points covered by the sample plane corresponding to dashed line 402 can then be used to generate 3D imaging of the object to be detected, while the points covered by the sample plane corresponding to dashed line 401 are ignored. In FIG. 4(b), feasible-region detection of a sample plane includes two stages: in stage 411, the intersection between the sample plane and the camera optical axis is determined; the process then enters stage 412, in which it is judged whether the intersection meets the feasible-region condition, finally giving the detection result, i.e., judgment success 414 or judgment failure 413, where judgment success 414 indicates that the sample plane can serve as a candidate plane and judgment failure 413 indicates that it cannot. In this way, feasible-region detection is performed on each sample plane, giving the candidate planes that meet the feasible-region condition.
Way two: during feasible-region detection, a judgment based on information such as the size and position of the axis-aligned bounding box of the target surface is added, thereby further reducing misjudgment of the object to be detected. As shown in FIG. 5, the process is as follows:
Step S501: filter out the point cloud data that the sample plane can cover, to provide it for subsequent bounding-box detection.
Step S502: denoise the point cloud data, thereby restricting the object to be detected to the center of the camera's collection range.
Step S503: determine the 3D size of the bounding box and the axis direction of the bounding box's central axis.
Step S504: if the second coordinate value of the intersection between the camera's optical axis and the central axis of the bounding box meets the preset feasible-region condition, determine that the bounding box is a candidate bounding box.
Here, if the second coordinate value does not pass the detection, the process returns to step S501, and steps S501 through S504 are performed again.
Step S505: judge whether the size of the candidate bounding box meets the size threshold.
Here, if the size of the candidate bounding box meets the size threshold, the process proceeds to step S506; otherwise, it returns to step S501. For example, it is judged that the frontal area of the candidate bounding box is greater than 0.1 square meters and the aspect ratio is less than 10.
Step S506: judge whether the coordinates of the center point of the candidate bounding box are within the preset measurement range.
Here, if the center-point coordinates of the candidate bounding box are within the preset measurement range, the process proceeds to step S507; otherwise, it returns to step S501. For example, it is judged that the center-point coordinates of the candidate bounding box are within the set view-angle threshold and do not exceed the effective measurement range of the TOF sensor. As shown in FIG. 6, by thresholding the coordinates P(Px, Py, Pz), in the camera coordinate system, of the center points of bounding boxes 601 and 602, the feasible region is restricted to the vicinity of the frame center; the thresholding condition is shown as an equation image in the original, where A_x-max and A_y-max are the maximum feasible view-angle thresholds in the x and y directions respectively (for example, two thirds of the camera's horizontal and vertical view angles), and Z_max is the maximum optical-axis distance threshold (for example, three quarters of the TOF sensor's effective measurement range). As shown in FIG. 6, the effective measurement range of TOF sensor 603 is set between dashed lines 61 and 62; a bounding box whose center-point coordinates fall within this range is a valid bounding box, i.e., a candidate bounding box, and otherwise it is an invalid bounding box. For example, the center-point coordinates of bounding box 601 fall within the set view-angle threshold, indicating that the plane corresponding to bounding box 601 is valid and can be used in the detection process of the image to be detected; likewise, the center-point coordinates of bounding box 602 do not fall within the set view-angle threshold, indicating that the plane corresponding to bounding box 602 is invalid (for example, it may contain many noisy points) and cannot be used in the detection process of the image to be detected.
Step S507: judge whether the second coordinate value is within the preset measurement range.
Here, if the second coordinate value is within the preset measurement range, the judgment process ends and the sample plane corresponding to the bounding box is determined to be a candidate plane; otherwise, the process returns to step S501. For example, the second coordinate value falls within the set view-angle threshold and does not exceed the effective measurement range of the TOF sensor.
Step S306: count the number of points contained in the point cloud within the range that the candidate plane can cover.
For example, points whose distance to the candidate plane is less than 2 cm are counted.
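Counting the points within 2 cm of a candidate plane is a point-to-plane distance test; a minimal sketch:

```python
import math

def count_inliers(plane, points, tol=0.02):
    """Count the points whose perpendicular distance to the plane
    a*x + b*y + c*z + d = 0 is below `tol` (2 cm in the example above)."""
    a, b, c, d = plane
    norm = math.sqrt(a * a + b * b + c * c)
    return sum(1 for (x, y, z) in points
               if abs(a * x + b * y + c * z + d) / norm < tol)
```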
Step S307: when the number of iterations reaches the iteration threshold, terminate the iteration, giving multiple candidate planes.
Step S308: determine the candidate plane covering the most points in the point cloud as the target plane, and output it as the optimal parameters.
Step S309: detect the object to be detected according to the point cloud data covered by the target plane.
In an exemplary embodiment of the present application, during plane detection of the object to be detected, the prior knowledge that the detection target is a plane and the noise characteristics of TOF are used, and feasible-region detection based on candidate plane-optical axis intersection detection is added, thereby greatly reducing misjudgment of the object to be detected.
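Putting steps S303 through S308 together, the core loop is plain RANSAC: sample three points, fit a plane, count the points it covers, and keep the best plane. The sketch below omits the feasible-region and bounding-box filters for brevity; all names and the iteration count are illustrative assumptions:

```python
import math
import random

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Minimal RANSAC loop for plane detection: repeatedly fit a plane to
    3 random points, count inliers within `tol`, and keep the plane that
    covers the most points (steps S303-S308, without the feasibility
    filters)."""
    rng = random.Random(seed)
    best_plane, best_count = None, -1
    for _ in range(iters):
        p1, p2, p3 = rng.sample(points, 3)
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        # Plane normal from the cross product of two edge vectors.
        n = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
        norm = math.sqrt(sum(c * c for c in n))
        if norm < 1e-12:
            continue  # collinear sample, draw again
        d = -sum(n[i] * p1[i] for i in range(3))
        count = sum(1 for p in points
                    if abs(sum(n[i] * p[i] for i in range(3)) + d) / norm < tol)
        if count > best_count:
            best_plane, best_count = (*n, d), count
    return best_plane, best_count
```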
An exemplary embodiment of the present application provides a target detection apparatus. The modules included in the apparatus, and the units included in each module, can be implemented by a processor in a computer device, or of course by specific logic circuits; in implementation, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or the like.
FIG. 7 is a schematic diagram of the composition and structure of a target detection apparatus according to an exemplary embodiment of the present application. As shown in FIG. 7, the apparatus 70 includes a first acquisition module 71, a first generation module 72, a first determination module 73, and a first detection module 74, where:
the first acquisition module 71 is configured to acquire point cloud data of an object to be detected;
the first generation module 72 is configured to generate multiple candidate planes based on coordinate values of the point cloud data;
the first determination module 73 is configured to determine, from the multiple candidate planes, a target plane that meets a preset condition; and
the first detection module 74 is configured to detect the object to be detected according to the point cloud data covered by the target plane.
In the above apparatus, the first generation module 72 includes:
a first noise reduction sub-module, configured to perform noise reduction processing on the point cloud data to obtain noise-reduced data;
a first sampling sub-module, configured to sample the noise-reduced data according to a preset sampling condition to obtain sample data; and
a first generation sub-module, configured to generate the multiple candidate planes according to coordinate values of the sample data.
In the above apparatus, the first sampling sub-module includes:
a first dividing unit, configured to divide the noise-reduced data into multiple groups of sample data, where the number of data points in each group of sample data is greater than or equal to a preset number.
In the above apparatus, the first generation module 72 includes:
a first generation sub-module, configured to generate a corresponding candidate plane according to the coordinate values of each group of sample data among the multiple groups of sample data, to obtain the multiple candidate planes.
In the above apparatus, the first generation module 72 includes:
a second generation sub-module, configured to generate candidate planes whose number meets a quantity threshold according to the coordinate values of each group of sample data among the multiple groups of sample data.
In the above apparatus, the first generation sub-module includes:
a first generation unit, configured to generate a corresponding sample plane according to the coordinate values of the i-th group of sample data, where i is an integer greater than or equal to 1;
a first determination unit, configured to determine a first coordinate value of the intersection between the optical axis of the collection device and the sample plane; and
a second determination unit, configured to determine that the sample plane is a candidate plane if the first coordinate value meets a preset feasible-region condition.
In the above apparatus, the first generation sub-module includes:
a first enclosing unit, configured to enclose the point cloud data covered by multiple sample planes to obtain multiple bounding boxes of a specific shape;
a third determination unit, configured to determine a second coordinate value of the intersection between the optical axis of the collection device and the central axis of each bounding box, to obtain a set of second coordinate values;
a first selection unit, configured to select, from the set of second coordinate values, a candidate bounding box corresponding to a second coordinate value that meets the preset feasible-region condition; and
a fourth determination unit, configured to determine that the sample plane corresponding to the candidate bounding box is a candidate plane if attribute information of the candidate bounding box meets a corresponding preset condition.
In the above apparatus, the fourth determination unit includes:
a first determination sub-unit, configured to determine that the sample plane corresponding to the candidate bounding box is a candidate plane if the size of the candidate bounding box meets a size threshold, and/or the coordinates of the center point of the candidate bounding box are within a preset measurement range, and/or the second coordinate value is within the preset measurement range.
In the above apparatus, the first determination module 73 includes:
a first sub-determination module, configured to determine the number of points contained within a preset range of each candidate plane, to obtain multiple point counts; and
a second sub-determination module, configured to determine the candidate plane corresponding to the largest point count as the target plane.
The description of the above apparatus embodiment is similar to that of the above method embodiment and has beneficial effects similar to those of the method embodiment. For technical details not disclosed in the apparatus embodiment of the present application, please refer to the description of the method embodiment of the present application.
It should be noted that, in an exemplary embodiment of the present application, if the above target detection method is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of an exemplary embodiment of the present application, in essence or the part contributing to the related art, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the various embodiments of the present application. The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. Thus, an exemplary embodiment of the present application is not limited to any specific combination of hardware and software.
FIG. 8 is a schematic diagram of a device hardware entity according to an exemplary embodiment of the present application. As shown in FIG. 8, an exemplary embodiment of the present application provides a device 800, including:
a processor 81 and a storage medium 82 storing instructions executable by the processor 81, where the storage medium 82 performs operations via the processor 81 through a communication bus 83; when the instructions are executed by the processor 81, the notification method described in the first embodiment above is performed.
It should be noted that, in practical application, the various components in the device are coupled together through the communication bus 83. It can be understood that the communication bus 83 is used to implement connection and communication between these components. In addition to a data bus, the communication bus 83 also includes a power bus, a control bus, and a status signal bus. However, for the sake of clarity, the various buses are all labeled as the communication bus 83 in FIG. 8.
Here, it should be noted that the device is usually a mobile device with a front dual-camera or rear dual-camera function, and the mobile device may be implemented in various forms. For example, the mobile device described in an exemplary embodiment of the present application may include a mobile phone, a tablet computer, a palmtop computer, a personal digital assistant (PDA), and so on.
Correspondingly, an exemplary embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps in the target detection method provided in the above embodiments are implemented.
It should be pointed out here that the descriptions of the above storage medium and device embodiments are similar to the description of the above method embodiment and have beneficial effects similar to those of the method embodiment. For technical details not disclosed in the storage medium and device embodiments of the present application, please refer to the description of the method embodiment of the present application.
It should be understood that reference throughout the specification to "one embodiment" or "an embodiment" means that a specific feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present application. Therefore, the appearances of "in one embodiment" or "in an embodiment" in various places throughout the specification do not necessarily refer to the same embodiment. In addition, these specific features, structures, or characteristics can be combined in one or more embodiments in any suitable manner. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of an exemplary embodiment of the present application. The above sequence numbers of the exemplary embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
It should be noted that, herein, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes the element.
In the several embodiments provided in the present application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be ignored or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of an exemplary embodiment of the present application.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve individually as a unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art can understand that all or part of the steps of the above method embodiment can be completed by hardware related to program instructions; the aforementioned program can be stored in a computer-readable storage medium, and when executed, the program performs the steps including the above method embodiment; the aforementioned storage media include media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disc. Alternatively, if the above integrated unit of the present application is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of an exemplary embodiment of the present application, in essence or the part contributing to the related art, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a device to execute all or part of the methods described in the various embodiments of the present application. The aforementioned storage media include media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disc. The above is only an implementation of the present application, but the protection scope of the present application is not limited thereto; any changes or substitutions that a person skilled in the art can easily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Industrial Applicability
The target detection method in the embodiments of the present application is applied to a device with a shooting function, and includes: acquiring point cloud data of an object to be detected; generating multiple candidate planes based on the coordinate values of the point cloud data; determining, from the multiple candidate planes, a target plane that meets a preset condition; and detecting the object to be detected according to the point cloud data covered by the target plane.

Claims (12)

  1. A target detection method, applied to a device with a shooting function, the method comprising:
    acquiring point cloud data of an object to be detected;
    generating multiple candidate planes based on coordinate values of the point cloud data;
    determining, from the multiple candidate planes, a target plane that meets a preset condition; and
    detecting the object to be detected according to the point cloud data covered by the target plane.
  2. The method according to claim 1, wherein the generating multiple candidate planes based on the coordinate values of the point cloud data comprises:
    performing noise reduction processing on the point cloud data to obtain noise-reduced data;
    sampling the noise-reduced data according to a preset sampling condition to obtain sample data; and
    generating the multiple candidate planes according to coordinate values of the sample data.
  3. The method according to claim 2, wherein the sampling the noise-reduced data according to a preset sampling condition to obtain sample data comprises:
    dividing the noise-reduced data into multiple groups of sample data, wherein the number of data points in each group of sample data is greater than or equal to a preset number.
  4. The method according to claim 2 or 3, wherein the generating the multiple candidate planes according to the coordinate values of the sample data comprises:
    generating a corresponding candidate plane according to the coordinate values of each group of sample data among the multiple groups of sample data, to obtain the multiple candidate planes.
  5. The method according to claim 2 or 3, wherein the generating the multiple candidate planes according to the coordinate values of the sample data comprises:
    generating candidate planes whose number meets a quantity threshold according to the coordinate values of each group of sample data among the multiple groups of sample data.
  6. The method according to claim 4, wherein the generating a corresponding candidate plane according to the coordinate values of each group of sample data among the multiple groups of sample data, to obtain the multiple candidate planes, comprises:
    generating a corresponding sample plane according to the coordinate values of the i-th group of sample data, wherein i is an integer greater than or equal to 1;
    determining a first coordinate value of the intersection between the optical axis of the collection device and the sample plane; and
    if the first coordinate value meets a preset feasible-region condition, determining that the sample plane is a candidate plane.
  7. The method according to claim 4, wherein the generating a corresponding candidate plane according to the coordinate values of each group of sample data among the multiple groups of sample data, to obtain the multiple candidate planes, comprises:
    enclosing the point cloud data covered by multiple sample planes, to obtain multiple bounding boxes of a specific shape;
    determining a second coordinate value of the intersection between the optical axis of the collection device and the central axis of each bounding box, to obtain a set of second coordinate values;
    selecting, from the set of second coordinate values, a candidate bounding box corresponding to a second coordinate value that meets the preset feasible-region condition; and
    if attribute information of the candidate bounding box meets a corresponding preset condition, determining that the sample plane corresponding to the candidate bounding box is a candidate plane.
  8. The method according to claim 7, wherein the determining that the sample plane corresponding to the candidate bounding box is a candidate plane if the attribute information of the candidate bounding box meets the corresponding preset condition comprises:
    if the size of the candidate bounding box meets a size threshold, and/or the coordinates of the center point of the candidate bounding box are within a preset measurement range, and/or the second coordinate value is within the preset measurement range, determining that the sample plane corresponding to the candidate bounding box is a candidate plane.
  9. The method according to claim 1, wherein the determining, from the multiple candidate planes, a target plane that meets a preset condition comprises:
    determining the number of points contained within a preset range of each candidate plane, to obtain multiple point counts; and
    determining the candidate plane corresponding to the largest point count as the target plane.
  10. A target detection apparatus, comprising a first acquisition module, a first generation module, a first determination module, and a first detection module, wherein:
    the first acquisition module is configured to acquire point cloud data of an object to be detected;
    the first generation module is configured to generate multiple candidate planes based on coordinate values of the point cloud data;
    the first determination module is configured to determine, from the multiple candidate planes, a target plane that meets a preset condition; and
    the first detection module is configured to detect the object to be detected according to the point cloud data covered by the target plane.
  11. A target detection device, comprising a memory and a processor, wherein the memory stores a computer program runnable on the processor, and the processor, when executing the program, implements the steps in the target detection method according to any one of claims 1 to 9.
  12. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps in the method according to any one of claims 1 to 9.
PCT/CN2019/117639 2019-11-12 2019-11-12 Target detection method and apparatus, device, and storage medium WO2021092771A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/117639 WO2021092771A1 (zh) 2019-11-12 2019-11-12 Target detection method and apparatus, device, and storage medium
CN201980100517.9A CN114424240A (zh) 2019-11-12 2019-11-12 Target detection method and apparatus, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/117639 WO2021092771A1 (zh) 2019-11-12 2019-11-12 Target detection method and apparatus, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021092771A1 true WO2021092771A1 (zh) 2021-05-20

Family

ID=75911315

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117639 WO2021092771A1 (zh) 2019-11-12 2019-11-12 Target detection method and apparatus, device, and storage medium

Country Status (2)

Country Link
CN (1) CN114424240A (zh)
WO (1) WO2021092771A1 (zh)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103247041A (zh) * 2013-05-16 2013-08-14 北京建筑工程学院 Segmentation method for multi-geometric-feature point cloud data based on local sampling
KR101547940B1 (ko) * 2014-12-17 2015-08-28 가톨릭관동대학교산학협력단 System and method for error adjustment of coplanar terrestrial LiDAR data
CN105976375A (zh) * 2016-05-06 2016-09-28 苏州中德睿博智能科技有限公司 Pallet recognition and positioning method based on RGB-D type sensors
CN107292921A (zh) * 2017-06-19 2017-10-24 电子科技大学 Fast three-dimensional reconstruction method based on a Kinect camera
US9868212B1 (en) * 2016-02-18 2018-01-16 X Development Llc Methods and apparatus for determining the pose of an object based on point cloud data
CN108257213A (zh) * 2018-01-17 2018-07-06 视缘(上海)智能科技有限公司 Lightweight polygonal surface reconstruction method for point clouds
CN108288277A (zh) * 2018-01-17 2018-07-17 视缘(上海)智能科技有限公司 Three-dimensional scene reconstruction method based on RAP
CN109087345A (zh) * 2018-09-06 2018-12-25 上海仙知机器人科技有限公司 Pallet recognition method based on a ToF imaging system, and automated guided vehicle
CN109693387A (zh) * 2017-10-24 2019-04-30 三纬国际立体列印科技股份有限公司 3D modeling method based on point cloud data
CN110285754A (zh) * 2019-07-02 2019-09-27 深圳市镭神智能***有限公司 Laser-scanning-based workpiece positioning method, apparatus, system, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201205563D0 (en) * 2012-03-29 2012-05-09 Sec Dep For Business Innovation & Skills The Coordinate measurement system and method
JP2017220051A (ja) * 2016-06-08 2017-12-14 ソニー株式会社 Image processing apparatus, image processing method, and vehicle
US10055882B2 (en) * 2016-08-15 2018-08-21 Aquifi, Inc. System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function
CN109814564A (zh) * 2019-01-29 2019-05-28 炬星科技(深圳)有限公司 Target object detection and obstacle avoidance method, electronic device, and storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114310875A (zh) * 2021-12-20 2022-04-12 珠海格力智能装备有限公司 Crankshaft positioning and recognition method, apparatus, storage medium, and device
CN114310875B (zh) * 2021-12-20 2023-12-05 珠海格力智能装备有限公司 Crankshaft positioning and recognition method, apparatus, storage medium, and device
CN115937069A (zh) * 2022-03-24 2023-04-07 北京小米移动软件有限公司 Part detection method and apparatus, electronic device, and storage medium
CN115937069B (zh) * 2022-03-24 2023-09-19 北京小米移动软件有限公司 Part detection method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN114424240A (zh) 2022-04-29

Similar Documents

Publication Publication Date Title
US11842438B2 (en) Method and terminal device for determining occluded area of virtual object
US11301954B2 (en) Method for detecting collision between cylindrical collider and convex body in real-time virtual scenario, terminal, and storage medium
WO2020119684A1 (zh) 一种3d导航语义地图更新方法、装置及设备
CN111381224B (zh) 激光数据校准方法、装置及移动终端
CN111415420B (zh) 空间信息确定方法、装置及电子设备
WO2021092771A1 (zh) Target detection method and apparatus, device, and storage medium
CN111950543A (zh) 一种目标检测方法和装置
CN108628442B (zh) 一种信息提示方法、装置以及电子设备
CN106131408A (zh) 一种图像处理方法及终端
JP5592039B2 (ja) 信頼度スコアに基づいた3次元モデルの併合
CN113989376B (zh) 室内深度信息的获取方法、装置和可读存储介质
CN110276794B (zh) 信息处理方法、信息处理装置、终端设备及服务器
JP7484492B2 (ja) レーダーに基づく姿勢認識装置、方法及び電子機器
CN113298122A (zh) 目标检测方法、装置和电子设备
US11480661B2 (en) Determining one or more scanner positions in a point cloud
JP2013206034A (ja) 情報処理装置、画像処理方法およびプログラム
WO2023165175A1 (zh) 渲染处理方法、装置、设备以及存储介质
CN113379826A (zh) 物流件的体积测量方法以及装置
CN115511944A (zh) 基于单相机的尺寸估计方法、装置、设备及存储介质
CN115861403A (zh) 一种非接触式物体体积测量方法、装置、电子设备及介质
CN110019596B (zh) 待显示瓦片的确定方法、装置及终端设备
CN116386016B (zh) 一种异物处理方法、装置、电子设备及存储介质
CN111383262A (zh) 遮挡检测方法、***、电子终端以及存储介质
CN117635875B (zh) 一种三维重建方法、装置及终端
CN114972769B (zh) 图像处理方法、三维地图生成方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19952216

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19952216

Country of ref document: EP

Kind code of ref document: A1