WO2023015938A1 - Method and apparatus for three-dimensional point detection, electronic device, and storage medium - Google Patents

Method and apparatus for three-dimensional point detection, electronic device, and storage medium

Info

Publication number
WO2023015938A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
target
point
points
image
Prior art date
Application number
PCT/CN2022/088149
Other languages
English (en)
Chinese (zh)
Inventor
吴思泽
金晟
刘文韬
钱晨
Original Assignee
上海商汤智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023015938A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30244 - Camera pose

Definitions

  • The present disclosure relates to the technical field of artificial intelligence, and in particular to a three-dimensional point detection method and apparatus, an electronic device, and a storage medium.
  • Three-dimensional (3D) human body pose estimation refers to estimating the pose of a human target from an image, video, or point cloud, and is often used in industrial fields such as human body reconstruction, human-computer interaction, behavior recognition, and game modeling. In practical application scenarios, there is often a need for multi-person pose estimation, and human body center point detection can serve as a precursor task for multi-person pose estimation.
  • A human body center point detection scheme provided in the related art performs multi-view feature extraction based on 3D space voxelization and detects the body center point through a convolutional neural network (CNN).
  • Spatial voxelization divides the 3D space equidistantly into grids of equal size, and the voxelized multi-view image features can be used as the input of a 3D convolution.
  • Embodiments of the present disclosure provide at least a three-dimensional point detection method and apparatus, an electronic device, and a storage medium, which improve detection efficiency while improving detection accuracy.
  • An embodiment of the present disclosure provides a method for three-dimensional point detection, the method being executed by an electronic device and including:
  • acquiring target images obtained by shooting multiple target objects from multiple viewing angles, and three-dimensional coordinate information of candidate three-dimensional points of each of the multiple target objects determined based on the acquired target images;
  • for each target object, determining a candidate three-dimensional space corresponding to the target object based on the three-dimensional coordinate information of the candidate three-dimensional points of the target object; and determining three-dimensional coordinate information of a target three-dimensional point of the target object based on the candidate three-dimensional space corresponding to the target object and the target images.
  • In the case where the three-dimensional coordinate information of the candidate three-dimensional points of each target object is determined based on target images obtained by shooting multiple target objects from multiple viewing angles, the three-dimensional coordinate information of the target three-dimensional point of each target object can be determined based on the three-dimensional coordinate information of the candidate three-dimensional points and the target images.
  • The embodiments of the present disclosure can accurately detect the 3D points of each target object by utilizing the projection relationship between the candidate 3D space where the candidate 3D points of the target object are located and the target images under multiple viewing angles. At the same time, the projection operation for the candidate 3D points within the candidate 3D space avoids voxelizing the entire space, which significantly improves detection efficiency.
  • the embodiment of the present disclosure also provides a three-dimensional point detection device, the device includes:
  • the acquiring part is configured to acquire target images obtained by shooting multiple target objects under multiple viewing angles, and three-dimensional coordinate information of candidate three-dimensional points of each of the multiple target objects determined based on the acquired target images;
  • the detection part is configured to, for each target object, determine a candidate three-dimensional space corresponding to the target object based on the three-dimensional coordinate information of the candidate three-dimensional points of the target object, and determine the three-dimensional coordinate information of the target three-dimensional point of the target object based on the candidate three-dimensional space corresponding to the target object and the target images.
  • An embodiment of the present disclosure further provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the electronic device is running, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the three-dimensional point detection method described in any one of the first aspect and its various implementation modes are executed.
  • The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the three-dimensional point detection method described in any one of the first aspect and its various implementation modes are executed.
  • The embodiment of the present disclosure also provides a computer program product; the computer program product includes a computer program or instructions, and when the computer program or instructions are run on a computer, the computer executes the steps of the three-dimensional point detection method described in the first aspect.
  • FIG. 1 shows a flow chart of a method for three-dimensional point detection provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of the application of a method for three-dimensional point detection provided by an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of a three-dimensional point detection device provided by an embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
  • A human body point detection scheme provided in related technologies extracts multi-view features based on 3D space voxelization and detects human body points through a CNN.
  • Spatial voxelization divides the 3D space equidistantly into grids of equal size, and the voxelized multi-view image features can be used as the input of a 3D convolution.
  • the present disclosure provides a method, device, electronic device and storage medium for three-dimensional point detection, which improves detection efficiency while improving the accuracy of point detection.
  • The execution subject of the 3D point detection method provided in the embodiments of the present disclosure is generally an electronic device with certain computing capability.
  • The electronic device includes, for example, a terminal device, a server, or other processing device; the terminal device may be user equipment (UE), a mobile device, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, etc.
  • the method for detecting a three-dimensional point may be implemented in a manner in which a processor invokes computer-readable instructions stored in a memory.
  • FIG. 1 is a flowchart of a method for three-dimensional point detection provided by an embodiment of the present disclosure
  • the method includes steps S101 and S102, wherein:
  • S101 Acquire target images obtained by shooting multiple target objects from multiple viewing angles, and 3D coordinate information of candidate 3D points of each of the multiple target objects determined based on the acquired target images;
  • S102 For each target object, based on the three-dimensional coordinate information of the candidate three-dimensional point of the target object, determine the candidate three-dimensional space corresponding to the target object; based on the candidate three-dimensional space corresponding to the target object and the target image, determine the target three-dimensional point of the target object 3D coordinate information.
  • the application scenario of the method may be briefly described next.
  • the method of 3D point detection in the embodiments of the present disclosure can be applied to the relevant application fields of multi-person 3D pose estimation.
  • For example, in the field of autonomous driving, 3D pose estimation is performed on multiple pedestrians in front of a self-driving vehicle.
  • Another example is the field of intelligent security, where the three-dimensional poses of multiple road vehicles are estimated, etc.; this is not limited in the embodiments of the present disclosure.
  • The embodiments of the present disclosure provide a detection scheme combining two-dimensional point matching under multiple perspectives with candidate three-dimensional point reconstruction, which not only improves detection accuracy but also improves detection efficiency.
  • the target image acquired in the embodiment of the present disclosure may be obtained by shooting multiple target objects under multiple viewing angles, and one viewing angle may correspond to one target image.
  • the above-mentioned target images can be obtained by synchronously photographing multiple target objects with multiple cameras installed on the vehicle.
  • the multiple cameras here can be selected in combination with different user needs.
  • For example, three cameras installed at the two sides and the center position of the vehicle can capture three target images of the pedestrians in front.
  • Each target image may correspond to multiple target objects, for example, it may be a captured target image including two pedestrians.
  • the three-dimensional coordinate information of the candidate three-dimensional point of each target object can be determined based on multiple target images captured under multiple viewing angles.
  • the three-dimensional coordinate information of the target three-dimensional point of each target object can be determined based on the candidate three-dimensional space corresponding to each target object and multiple target images under multiple viewing angles.
  • The candidate 3D point of the target object can be the candidate 3D center point located at the center of the target object, or other specific points that can characterize the target object, for example, specific points on a pedestrian's head, upper body, and lower body. The number of specific points can be set based on different application scenarios and is not limited here.
  • In the following, the candidate 3D center point is used as an example of the candidate 3D point.
  • The 3D coordinate information of the candidate 3D points can be obtained by first pairing 2D points based on the target images and then reconstructing the 3D points in 3D space from the paired 2D points. It may also be determined by other methods, which are not limited here.
  • one or more candidate 3D points can be constructed for each target object.
  • For each candidate 3D point among the multiple candidate 3D points of the target object, a spherical range can be determined with the 3D coordinate information of the candidate 3D point as the center of the sphere; the candidate three-dimensional space corresponding to the target object can then be determined by taking the union of the spherical ranges determined by the multiple candidate three-dimensional points of the target object.
  • The three-dimensional coordinate information of the target three-dimensional point of each target object can be determined based on the projection relationship between the spatial samples of the corresponding candidate three-dimensional space and the target images under multiple viewing angles. Compared with voxelization, it is only necessary to perform three-dimensional projection on the candidate three-dimensional space where the specified target object is located to determine more accurate three-dimensional coordinate information of the target three-dimensional point, and the amount of computation is significantly reduced; a sketch of the candidate-space membership test follows.
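  • The following is a minimal sketch of this union-of-spheres construction, assuming NumPy; the sphere radius, function name, and parameters are illustrative assumptions rather than values fixed by the disclosure.

```python
# A minimal sketch: the candidate 3D space of one target object modeled as
# the union of spheres centered on its candidate 3D points. The radius
# value is an assumption for illustration only.
import numpy as np

def in_candidate_space(p, candidate_points, radius=0.3):
    """Return True if point p lies inside at least one sphere centered on
    a candidate 3D point of the target object (radius in scene units)."""
    p = np.asarray(p, dtype=float)
    return any(np.linalg.norm(p - np.asarray(c, dtype=float)) <= radius
               for c in candidate_points)
```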
  • the three-dimensional coordinate information of the candidate three-dimensional point of the target object can be determined according to the following steps:
  • Step 1 Extract image feature information of a plurality of two-dimensional points from a plurality of target images, wherein each two-dimensional point in the plurality of two-dimensional points is a pixel point located on a corresponding target object;
  • Step 2 Determine paired two-dimensional points belonging to the same target object based on image feature information respectively extracted from a plurality of target images, wherein the paired two-dimensional points come from different target images;
  • Step 3 Determine the 3D coordinate information of the candidate 3D points of the same target object according to the determined 2D coordinate information of the paired 2D points in the respective target images.
  • The image feature information of multiple two-dimensional points can be extracted from each target image based on an image feature extraction method, or the target image can be directly recognized by a two-dimensional point recognition network to determine the image feature information of each two-dimensional point. The image feature information of a two-dimensional point here may represent relevant features of the corresponding target object; for example, it may be the position feature of the center point of a person.
  • The determination of paired 2D points in the embodiments of the present disclosure can effectively correlate the corresponding relationships of target objects in 2D space, so that the constructed candidate 3D points point to the same target object to a certain extent, thus providing good data support for the accurate detection of multiple target objects.
  • the paired 2D points belonging to the same target object can be determined based on the image feature information respectively extracted from multiple target images, where the paired 2D points come from different target images.
  • On the one hand, the embodiment of the present disclosure may perform image pairing first and then determine the paired 2D points based on feature updates of the 2D points corresponding to the paired images; on the other hand, the features of all 2D points may be updated first and the paired two-dimensional points determined afterwards. This is not limited in the embodiments of the present disclosure.
  • The 3D coordinate information of the candidate 3D points corresponding to the target object can then be reconstructed.
  • A candidate 3D point can be reconstructed through triangulation; that is, under a multi-camera system, the 2D coordinates of the paired 2D points and the camera parameters are used to reconstruct the 3D coordinates corresponding to the two-dimensional points, as sketched below.
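  • The following is a minimal sketch of triangulation via the direct linear transform (DLT), a standard reconstruction technique named here for illustration; NumPy and known 3x4 projection matrices per camera are assumed, and the function and parameter names are hypothetical.

```python
# A minimal DLT triangulation sketch: reconstruct one 3D point from its
# paired 2D observations in several calibrated views.
import numpy as np

def triangulate(points_2d, proj_mats):
    """points_2d: list of (x, y) pixel coordinates, one per view.
    proj_mats: list of 3x4 camera projection matrices, one per view."""
    rows = []
    for (x, y), P in zip(points_2d, proj_mats):
        # Each view adds two linear constraints on the homogeneous 3D
        # point X: x*(P[2] @ X) = P[0] @ X and y*(P[2] @ X) = P[1] @ X.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)   # least-squares solution of A @ X = 0
    X = vt[-1]
    return X[:3] / X[3]           # de-homogenize to (x, y, z)
```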
  • the method for detecting three-dimensional points can perform image matching first, and then determine paired two-dimensional points.
  • the paired two-dimensional points can be determined through the following steps:
  • Step 1 Combine target images in pairs to obtain at least one group of target images;
  • Step 2 Based on the image feature information of the two-dimensional points in the plurality of target images, determine whether there are two two-dimensional points with matching image features in each group of target images in the at least one group of target images, the two two-dimensional points respectively belonging to different target images in the same group of target images;
  • Step 3 When it is determined that there are two 2D points with matching image features in each group of target images, determine the two 2D points with matching image features as a pair of 2D points belonging to the same target object.
  • multiple target images can be combined in pairs to obtain one or more sets of target images, and then it can be determined whether there are two two-dimensional points that match image features in each set of target images.
  • Matching here may mean that the matching degree of the image feature information of the two two-dimensional points is greater than a preset threshold, so that two two-dimensional points whose image features match can be determined as a pair of two-dimensional points belonging to the same target object.
  • the 2D points in the two target images of the group of target images can be combined in pairs to obtain multiple groups of 2D points.
  • The image feature information of the two two-dimensional points included in each group of two-dimensional points among the multiple groups is compared; that is, the following steps can be used to determine whether there are two two-dimensional points with matching image features in each group of target images.
  • Step 1 for each group of two-dimensional points, input the image feature information of two two-dimensional points of the group of two-dimensional points into the feature matching network, and determine whether the image feature information of the two two-dimensional points matches;
  • Step 2 When it is determined that the image feature information of the two 2D points matches, determine the two 2D points as the two 2D points with matching image features in the group of target images.
  • a feature matching network can be used to determine whether the image feature information of two two-dimensional points matches.
  • The input of the feature matching network is a group of two-dimensional points corresponding to a group of target images.
  • The matching operation on the image feature information of the two two-dimensional points in each group of two-dimensional points is realized by the feature matching network, and the operation is simple; a sketch of such a network follows.
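  • The disclosure does not fix a topology for the feature matching network; the following is a minimal sketch, assuming PyTorch, in which a small MLP scores a pair of 2D point features (the class name, layer sizes, and feature dimension are illustrative assumptions).

```python
# A minimal feature matching network sketch: score whether two 2D point
# feature vectors belong to the same target object.
import torch
import torch.nn as nn

class FeatureMatchNet(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),            # raw matching score
        )

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: image feature vectors of the two 2D points.
        score = self.mlp(torch.cat([feat_a, feat_b], dim=-1))
        # Probability that the two points belong to the same target object.
        return torch.sigmoid(score)
```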
  • In the process of training the feature matching network, it can be trained based on image samples from multiple perspectives and labeling information of the same target object. That is, the image feature information corresponding to the two-dimensional points is extracted from the image samples from multiple perspectives, the extracted image feature information is input into the feature matching network to be trained, and the network parameter values of the feature matching network are adjusted until the network output is consistent with the labeling information, so as to complete the training of the feature matching network.
  • the trained feature matching network can be used to determine whether the image feature information of two two-dimensional points matches.
  • the image feature matching of two two-dimensional points indicates that the two two-dimensional points correspond to the same target object.
  • Embodiments of the present disclosure may update the image feature information of the two-dimensional points by combining the above-mentioned image feature information of other 2D points, and then input the updated image feature information into the feature matching network to determine whether the image feature information of the two two-dimensional points matches.
  • the image feature information of the two-dimensional point can be updated based on the image feature information of other two-dimensional points in other target images, so that the accuracy of the determined updated image feature information is higher, and the matching accuracy is further improved.
  • Step 1 Based on the two-dimensional coordinate information of the two-dimensional point in the corresponding target image and the two-dimensional coordinate information of other two-dimensional points in other target images different from the target image where the two-dimensional point is located, determine the epipolar distance between the two-dimensional point and the other two-dimensional points;
  • Step 2 Based on the image feature information of the two-dimensional point, the image feature information of the other two-dimensional points in the other target images, and the epipolar distance, update the image feature information of the two-dimensional point to obtain updated image feature information.
  • The image feature information of the two-dimensional point can be updated based on its own image feature information, the image feature information of other two-dimensional points in other target images different from the target image where the two-dimensional point is located, and the epipolar distance; this efficiently integrates multi-view features, and the matching accuracy can be significantly improved.
  • The epipolar distance between two 2D points can be determined based on the respective 2D coordinate information of the 2D point and the other 2D points, and the updating of the image feature information of the two-dimensional points is then realized based on the epipolar distance and the respective image feature information of the two 2D points.
  • The epipolar distances corresponding to cameras under different viewing angles can reflect the relationship between different target points. Taking two cameras (camera 1 and camera 2) and two target points (point A and point B) as an example, point A in the view of camera 1 corresponds to a line (the epipolar line) in the view of camera 2, and the distance between this epipolar line and point B in the view of camera 2 reflects the degree of proximity between the two points. Using the epipolar distance to update the features of the two-dimensional points makes the content of the updated image feature information richer, which is more conducive to the subsequent determination of three-dimensional points; a sketch of the computation follows.
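  • The following is a minimal sketch of the point-to-epipolar-line distance described above, assuming NumPy and a known fundamental matrix F relating the two views (F can be derived from the camera parameters; the function and parameter names are illustrative assumptions).

```python
# A minimal epipolar distance sketch: distance from a 2D point in view 2
# to the epipolar line induced by a 2D point in view 1.
import numpy as np

def epipolar_distance(pt1, pt2, F):
    """pt1, pt2: (x, y) pixel coordinates in view 1 and view 2;
    F: 3x3 fundamental matrix with b^T @ F @ a = 0 for matched a, b."""
    a = np.array([pt1[0], pt1[1], 1.0])
    b = np.array([pt2[0], pt2[1], 1.0])
    line = F @ a                    # epipolar line (l0, l1, l2) in view 2
    return abs(line @ b) / np.hypot(line[0], line[1])
```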
  • the updated image feature information corresponding to the two-dimensional point can be directly selected for matching without updating all the two-dimensional points, which will improve the overall detection efficiency.
  • the method of 3D point detection provided by the embodiment of the present disclosure can perform feature update first, and then determine the paired 2D points, and the paired 2D points can be determined through the following steps:
  • Step 1 For the first target image among the multiple target images, based on the image feature information extracted from the first target image and the image feature information extracted from other target images except the first target image among the multiple target images , updating image feature information of multiple two-dimensional points in the first target image to obtain updated image feature information respectively corresponding to multiple two-dimensional points in the first target image;
  • Step 2 Determine pairs of two-dimensional points belonging to the same target object based on the updated image feature information respectively corresponding to the multiple target images.
  • The image feature information of multiple two-dimensional points in the first target image can be updated based on the image feature information extracted from the other target images in the multiple target images except the first target image. Then, two target images are arbitrarily selected from the multiple target images, two corresponding two-dimensional points are selected from the two selected target images, and the updated image feature information corresponding to the two selected two-dimensional points is input into the pre-trained feature matching network to determine whether the selected two 2D points are paired 2D points belonging to the same target object.
  • Two target images may be arbitrarily selected from the plurality of target images, and two corresponding two-dimensional points may be respectively selected from the two selected target images; for the process of using the pre-trained feature matching network to verify the feature matching, please refer to the description of the first aspect above.
  • the matching operation corresponding to two 2D points in the two target images can be realized based on the selection operation, and once it is determined that the two 2D points in the two target images are successfully matched, it can be locked to a target object.
  • the amount of computation is significantly reduced.
  • Two target images can be selected arbitrarily, and then two corresponding two-dimensional points can be selected. Once the image features of these two two-dimensional points are successfully matched, the candidate 3D points of the corresponding target object can be determined based on the paired 2D points without verifying all pairing situations, which improves the overall detection efficiency.
  • In the process of determining the target 3D point of each target object based on the constructed candidate 3D points, the corresponding candidate 3D space can be determined first, and the three-dimensional coordinate information of the target three-dimensional point of each target object can then be determined based on a projection operation from the 3D space to the 2D space.
  • the three-dimensional coordinate information of the target three-dimensional point can be determined through the following steps:
  • Step 1 Carry out spatial sampling of the candidate three-dimensional space of the target object, and determine a plurality of sampling points;
  • Step 2 for each sampling point in the plurality of sampling points, based on the three-dimensional coordinate information of the sampling point in the candidate three-dimensional space and the target image, determine the three-dimensional point detection result corresponding to the sampling point;
  • Step 3 Determine the three-dimensional coordinate information of the target three-dimensional point of each target object based on the obtained three-dimensional point detection result.
  • adaptive sampling can be carried out for the candidate 3D space corresponding to each target object.
  • For example, equidistant sampling is performed in the search space, and the 3D point detection result of each sampling point is then determined based on the 3D coordinate information of the sampling point in the candidate 3D space and the multiple target images. In this way, further fine sampling around the reconstructed candidate 3D points can be realized, so that a more accurate target 3D point position can be obtained.
  • the corresponding candidate three-dimensional space can be determined, and the detection of relevant three-dimensional points can be realized based on the sampling of the candidate three-dimensional space.
  • The sampling operation on the candidate three-dimensional space significantly improves the detection efficiency; a sketch of such equidistant sampling follows.
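  • The following is a minimal sketch of equidistant sampling restricted to the union-of-spheres candidate space, assuming NumPy; the radius and grid step are illustrative assumptions, not values fixed by the disclosure.

```python
# A minimal sketch: equidistant grid sampling inside the candidate 3D
# space (the union of spheres around the candidate 3D points).
import numpy as np

def sample_candidate_space(candidate_points, radius=0.3, step=0.05):
    """candidate_points: (K, 3) candidate 3D points of one target object.
    Returns the grid points that fall inside the candidate space."""
    pts = np.asarray(candidate_points, dtype=float)
    lo = pts.min(axis=0) - radius        # axis-aligned bounding box
    hi = pts.max(axis=0) + radius
    axes = [np.arange(l, h + step, step) for l, h in zip(lo, hi)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    # Keep only the grid points inside at least one sphere.
    dists = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=-1)
    return grid[(dists <= radius).any(axis=1)]
```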
  • the three-dimensional point detection results corresponding to the sampling points can be determined through the following steps:
  • Step 1 For each sampling point among the multiple sampling points, based on the correspondence between the three-dimensional coordinate system of the candidate three-dimensional space and the two-dimensional coordinate system of each viewing angle, project the three-dimensional coordinate information of the sampling point to the different viewing angles to determine the two-dimensional projection point information of the sampling point in the multiple target images;
  • Step 2 based on the two-dimensional projection point information of the sampling points in multiple target images, determine the feature information of the sampling points under different viewing angles;
  • Step 3 Determine the 3D point detection result corresponding to the sampling point based on the feature information of the sampling point under different viewing angles.
  • The 3D point detection method provided by the embodiments of the present disclosure can first determine the two-dimensional projection point information of the sampling point in the multiple target images, and then determine the sampling point feature information of the sampling point under different viewing angles based on the two-dimensional projection point information.
  • The connection relationships between sampling points under different viewing angles can be determined by using the sampling point feature information of the sampling points; such connection relationships help to determine more accurate sampling point feature information, which in turn improves the accuracy of the determined 3D point detection results.
  • The two-dimensional projection point information can be determined based on the conversion relationship between the three-dimensional coordinate system where the sampling point is located and the two-dimensional coordinate system where the target image is located; that is, the sampling point can be projected onto the target image by using the conversion relationship, thereby determining the image position and other information of the two-dimensional projection point of the sampling point on the target image, as sketched below.
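  • The following is a minimal sketch of this projection step, assuming NumPy and known per-camera intrinsics K and extrinsics (R, t); all names are illustrative assumptions.

```python
# A minimal sketch: project one 3D sampling point into one view to obtain
# the image position of its 2D projection point.
import numpy as np

def project(point_3d, K, R, t):
    """K: 3x3 intrinsics; R: 3x3 rotation; t: (3,) translation."""
    cam = R @ np.asarray(point_3d, dtype=float) + t  # world -> camera
    uv = K @ cam                                     # camera -> image plane
    return uv[:2] / uv[2]                            # perspective division
```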
  • the feature information of the sampling points under different viewing angles can be determined.
  • The sampling point feature information determined here can be the feature information under different viewing angles. This is because, for the same target object, there is a certain connection relationship between the corresponding sampling points under different perspectives, and this connection relationship can be used to update the features of the sampling points. In addition, under the same perspective, there is also a certain connection relationship between the corresponding sampling points, which can likewise be used to update the features of the sampling points, so that the determined sampling point feature information is more in line with the actual 3D information of the target object.
  • Step 1 extracting image features respectively corresponding to a plurality of target images
  • Step 2 For each of the multiple target images, based on the image position information of the two-dimensional projection points of the sampling points in the multiple target images, extract the image feature corresponding to the image position information from the image features corresponding to the target image;
  • Step 3 The extracted image features corresponding to the image position information are used to determine the feature information of the sampling points under different viewing angles.
  • the characteristic information of the sampling point matching the sampling point can be determined based on the correspondence between the two-dimensional projection point information of the sampling point in multiple target images and the image features, and the operation is simple.
  • In order to extract the sampling point feature information matching the sampling point, the 3D point detection method provided by the embodiment of the present disclosure can, based on the image position information of the 2D projection points of the sampling point in the multiple target images, extract the image feature corresponding to the image position information from the image features of the corresponding target image, and use the extracted image feature as the feature information of the sampling point matched with the sampling point; a sketch follows.
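  • The following is a minimal sketch of reading an image feature at a projected location, assuming NumPy and a (C, H, W) feature map from any backbone; nearest-neighbor lookup is used for simplicity (bilinear sampling would be a common alternative), and the stride value is an illustrative assumption.

```python
# A minimal sketch: pick the feature vector of a feature map at the 2D
# projection point of a sampling point.
import numpy as np

def sample_feature(feat_map, uv, stride=4):
    """feat_map: (C, H, W) feature map; uv: (x, y) pixel coordinates;
    stride: downsampling factor between the image and the feature map."""
    _, h, w = feat_map.shape
    x = min(max(int(round(uv[0] / stride)), 0), w - 1)  # clamp to bounds
    y = min(max(int(round(uv[1] / stride)), 0), h - 1)
    return feat_map[:, y, x]
```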
  • The image features corresponding to the target image can be obtained based on image processing, extracted by a trained feature extraction network, or determined by other methods that can extract information representing the target object, the scene where the target object is located, etc.; this is not limited in this embodiment of the present disclosure.
  • The sampling point feature information of the sampling point can be updated first, and the 3D point detection result corresponding to the sampling point can then be determined based on the updated sampling point feature information through the following steps:
  • Step 1 Based on the sampling point feature information of the sampling point under different viewing angles and the sampling point feature information of other sampling points associated with the sampling point, determine the updated sampling point feature information of the sampling point under different viewing angles;
  • Step 2 Based on the updated sampling point feature information corresponding to the sampling point, determine the three-dimensional point detection result corresponding to the sampling point.
  • The sampling point feature information of the sampling point under different viewing angles and the sampling point feature information of other sampling points associated with the sampling point can be used to update the sampling point feature information of the sampling point. To a certain extent, the updated information includes the features of other sampling points within one view as well as the features of sampling points across different views, making the sampling point features more accurate and thus making the determined 3D pose information more accurate.
  • sampling points associated with the sampling point may be sampling points that have a connection relationship with the sampling point.
  • The connection relationship here includes the connection relationship between the sampling points in the same view; for sampling points under different viewing angles, what can be determined is the connection relationship between the two-dimensional projection points determined for the same sampling point under the different views.
  • Step 1 Based on the sampling point feature information of the sampling point under different viewing angles and the first connection relationship between the two-dimensional projection points of the sampling point under different viewing angles, perform a first update on the sampling point feature information of the sampling point under different viewing angles to obtain first updated sampling point feature information; and, based on the sampling point feature information of the sampling point under the target perspective and the sampling point feature information of other sampling points that belong to the target perspective and have a second connection relationship with the sampling point, perform a second update on the sampling point feature information of the sampling point under the target perspective to obtain second updated sampling point feature information;
  • Step 2 Based on the first updated sampling point feature information and the second updated sampling point feature information, determine the updated sampling point feature information of the sampling point under the target perspective.
  • Since the first connection relationship between the two-dimensional projection points of the sampling point under different viewing angles is predetermined, the feature information of the sampling point under one viewing angle can be updated based on the first connection relationship; that is, the first updated sampling point feature information fuses the sampling point features of the same sampling point in other views. The sampling point feature information can also be updated based on the sampling point feature information of other sampling points that belong to the target perspective and have a second connection relationship with the sampling point, where the second connection relationship can likewise be predetermined, so that the determined second updated sampling point feature information incorporates the sampling point features of other sampling points in the same view.
  • Combining the first updated sampling point feature information and the second updated sampling point feature information can make the updated sampling point feature information of the determined sampling point under the target view angle more accurate. For updates of sampling points in other perspectives, refer to the above description.
  • A graph neural network (GNN) can be used to update the feature information of the above sampling points.
  • A graph model can be constructed based on the first connection relationship, the second connection relationship, and the feature information of the sampling points, and the sampling point feature information can be continuously updated by performing convolution operations on the graph model; a sketch of one such update follows.
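  • The following is a minimal sketch of one graph-convolution update over sampling point features, assuming PyTorch; the adjacency matrix mixes cross-view edges (first connection relationship) and same-view edges (second connection relationship), and the layer form (mean aggregation plus linear maps) is an illustrative assumption rather than the topology fixed by the disclosure.

```python
# A minimal sketch: one graph-convolution update of sampling point
# features over a graph built from both connection relationships.
import torch
import torch.nn as nn

class GraphUpdate(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.lin_self = nn.Linear(feat_dim, feat_dim)
        self.lin_neib = nn.Linear(feat_dim, feat_dim)

    def forward(self, feats, adj):
        # feats: (N, feat_dim) node features (sampling points per view);
        # adj:   (N, N) 0/1 float adjacency from both connection relations.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neib = adj @ feats / deg     # mean feature over connected nodes
        return torch.relu(self.lin_self(feats) + self.lin_neib(neib))
```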
  • The updated sampling point feature information corresponding to all the sampling points of the target object can be input into the three-dimensional point detection network to obtain the 3D point detection results corresponding to the target object.
  • The three-dimensional coordinate information of the sampling point with the highest prediction probability may be determined as the three-dimensional coordinate information of the target three-dimensional point corresponding to the target object, as sketched below.
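  • A minimal sketch of this selection, assuming NumPy; the function and argument names are illustrative.

```python
# A minimal sketch: the sampling point with the highest predicted
# probability is taken as the target 3D point of the target object.
import numpy as np

def pick_target_point(sample_coords, probs):
    """sample_coords: (N, 3) sampling points; probs: (N,) predictions."""
    return sample_coords[int(np.argmax(probs))]
```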
  • the node V corresponds to the image feature information of the 2D center points at each viewing angle
  • the edge E corresponds to the relationship between the nodes, which can be the epipolar distance between the 2D center points.
  • the image feature information of the 2D center point under different viewing angles can be updated.
  • the graph neural network 201 can be used to update the feature.
  • The feature matching network 202 can be used to determine whether each pair of 2D center points (that is, an edge) belongs to the same target object; after feature updating and feature matching, the pairing relationship shown by the solid lines in the lower part of Figure 2 can be obtained.
  • A candidate three-dimensional space can be determined for each target object, such as the spherical three-dimensional space pointed to by the dotted line in the lower part of FIG. 2.
  • The detection results of the three-dimensional center points corresponding to the relevant sampling points can be determined, and the three-dimensional coordinate information of the target three-dimensional center point of each target object can then be obtained.
  • Since the 3D point detection method provided by the embodiment of the present disclosure can further search for the target 3D point of each target object in the candidate 3D space, it can reduce, to a certain extent, the reconstruction error caused by inaccurately reconstructed candidate 3D points.
  • The target 3D point of a target object can be determined through the search operation on the candidate 3D space where each target object is located. For example, when there is a pairing error between target object A and target object B while the pairing of target object B and target object C is correct, the search result of the incorrect pairing can be verified based on the search result of the correct pairing, further improving the detection accuracy for multiple target objects.
  • The approximate position of each target object can be determined, and subsequent multi-person pose recognition and other related applications can be realized.
  • The embodiment of the present disclosure also provides a three-dimensional point detection device corresponding to the three-dimensional point detection method. Since the problem-solving principle of the device in the embodiment of the present disclosure is similar to the above-mentioned three-dimensional point detection method of the embodiment of the present disclosure, the implementation of the device can refer to the implementation of the method.
  • FIG. 3 is a schematic diagram of a three-dimensional point detection device provided by an embodiment of the present disclosure.
  • the device includes: an acquisition part 301 and a detection part 302; wherein,
  • the acquiring part 301 is configured to acquire target images obtained by shooting multiple target objects under multiple viewing angles, and three-dimensional coordinate information of candidate three-dimensional points of each of the multiple target objects determined based on the acquired target images;
  • the detection part 302 is configured to, for each target object, determine the candidate three-dimensional space corresponding to the target object based on the three-dimensional coordinate information of the candidate three-dimensional point of the target object; determine the target object based on the candidate three-dimensional space corresponding to the target object and the target image The three-dimensional coordinate information of the target three-dimensional point.
  • The embodiments of the present disclosure can accurately detect the 3D points of each target object by utilizing the projection relationship between the candidate 3D space where the candidate 3D points of the target object are located and the target images under multiple viewing angles. At the same time, the projection operation for the candidate 3D points within the candidate 3D space avoids voxelizing the entire space, which significantly improves detection efficiency.
  • the 3D point includes a 3D center point; the candidate 3D point includes a candidate 3D center point, and the candidate 3D center point of the target object is located at the center of the target object; the target 3D point includes the target 3D center point.
  • the detection part 302 is configured to determine the three-dimensional coordinate information of the target three-dimensional point of the target object based on the candidate three-dimensional space corresponding to the target object and the target image according to the following steps:
  • the detection part 302 is configured to determine the 3D point detection result corresponding to the sampling point based on the 3D coordinate information of the sampling point in the candidate 3D space and the target image according to the following steps:
  • the three-dimensional coordinate information is projected to different viewing angles, and the two-dimensional projection point information of the sampling point in the multiple target images is determined;
  • the feature information of the sampling points under different viewing angles is determined
  • the 3D point detection results corresponding to the sampling points are determined.
  • the two-dimensional projection point information includes image position information of the two-dimensional projection point;
  • the detection part 302 is configured to determine the feature information of the sampling point under different viewing angles based on the two-dimensional projection point information of the sampling point in the multiple target images according to the following steps:
  • the extracted image features corresponding to the image position information are used to determine the feature information of the sampling points under different viewing angles.
  • the detection part 302 is configured to determine the three-dimensional point detection result corresponding to the sampling point based on the sampling point feature information of the sampling point under different viewing angles according to the following steps:
  • Based on the sampling point feature information of the sampling point under different viewing angles and the sampling point feature information of other sampling points associated with the sampling point, determine the updated sampling point feature information of the sampling point under different viewing angles;
  • a three-dimensional point detection result corresponding to the sampling point is determined.
  • the acquisition part 301 is configured to determine the three-dimensional coordinate information of the candidate three-dimensional points of each target object according to the following steps:
  • each two-dimensional point is a pixel point located on a corresponding target object
  • the three-dimensional coordinate information of the candidate three-dimensional points of the same target object is determined according to the determined two-dimensional coordinate information of the paired two-dimensional points in the respective target images.
  • the acquiring part 301 is configured to determine pairs of two-dimensional points belonging to the same target object based on image feature information respectively extracted from multiple target images according to the following steps:
  • Based on the image feature information of the two-dimensional points in the plurality of target images, determine whether there are two two-dimensional points with matching image features in each group of target images; the two two-dimensional points respectively belong to different target images in the same group of target images;
  • the two two-dimensional points with matching image features are determined as a pair of two-dimensional points belonging to the same target object.
  • the acquiring part 301 is configured to determine, based on the image feature information of the multiple target images, whether there are two 2D points with matching image features in each group of target images according to the following steps:
  • For each group of target images, combine the two-dimensional points in the two target images of the group of target images in pairs to obtain multiple groups of two-dimensional points; based on the image feature information of the two two-dimensional points included in each group of two-dimensional points, determine whether there are two 2D points with matching image features in the group of target images.
  • the acquisition part 301 is configured to determine, based on the image feature information of the two two-dimensional points included in each group of two-dimensional points, whether there are two two-dimensional points with matching image features in the group of target images according to the following steps:
  • For each group of two-dimensional points, input the image feature information of the two two-dimensional points of the group of two-dimensional points into the feature matching network, and determine whether the image feature information of the two two-dimensional points matches;
  • any group of two two-dimensional points whose image features match is determined as the two two-dimensional points with matching image features in the group of target images.
  • the acquisition part 301 is configured to input the image feature information of the two two-dimensional points of the group of two-dimensional points into the feature matching network according to the following steps, and determine whether the image feature information of the two two-dimensional points matches:
  • the image feature information of the two-dimensional point is updated to obtain updated image feature information;
  • the acquiring part 301 is configured to determine pairs of two-dimensional points belonging to the same target object based on image feature information respectively extracted from multiple target images according to the following steps:
  • the image feature information of multiple two-dimensional points in the first target image is updated to obtain updated image feature information respectively corresponding to the multiple two-dimensional points in the first target image;
  • pairs of two-dimensional points belonging to the same target object are determined.
  • the acquisition part 301 is configured to determine the paired two-dimensional points belonging to the same target object based on the image feature information respectively updated in multiple target images according to the following steps:
  • the acquisition part 301 is configured to update the image feature information of the two-dimensional point according to the following steps:
  • Based on the image feature information of the two-dimensional point, the image feature information of other two-dimensional points in other target images different from the target image where the two-dimensional point is located, and the epipolar distance, the image feature information of the two-dimensional point is updated to obtain updated image feature information.
  • FIG. 4 is a schematic structural diagram of the electronic device provided by the embodiment of the present disclosure, including: a processor 401 , a memory 402 , and a bus 403 .
  • The memory 402 stores machine-readable instructions executable by the processor 401 (for example, execution instructions corresponding to the acquisition part 301 and the detection part 302 in the device in FIG. 3). When the electronic device is running, the processor 401 and the memory 402 communicate through the bus 403, and when the machine-readable instructions are executed by the processor 401, the steps of the three-dimensional point detection method described in the foregoing method embodiments are performed.
  • An embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the method for three-dimensional point detection described in the above-mentioned method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • An embodiment of the present disclosure also provides a computer program product; the computer program product includes a computer program or instructions, and when the computer program or instructions are run on a computer, the computer executes the method described in the above method embodiments.
  • For the steps of the method for three-dimensional point detection, refer to the foregoing method embodiments.
  • the above-mentioned computer program product may be realized by hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium, and in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • If the functions are realized in the form of software functional units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor.
  • The computer software product is stored in a storage medium and includes several instructions used to cause an electronic device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in various embodiments of the present disclosure.
  • the aforementioned computer-readable storage medium may be a tangible device capable of retaining and storing instructions used by an instruction execution device, and may be a volatile storage medium or a nonvolatile storage medium.
  • a computer readable storage medium may be, for example but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the above.
  • A non-exhaustive list of computer-readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random-access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, and mechanically encoded devices such as punched cards or raised structures in grooves with instructions stored thereon, as well as any suitable combination of the foregoing.
  • computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., pulses of light through fiber optic cables), or transmitted electrical signals.
  • The embodiment of the present disclosure acquires target images obtained by shooting multiple target objects under multiple viewing angles, and 3D coordinate information of candidate 3D points of each of the multiple target objects determined based on the acquired target images; for each target object, the following steps are performed: based on the three-dimensional coordinate information of the candidate three-dimensional points of the target object, determine the candidate three-dimensional space corresponding to the target object; based on the candidate three-dimensional space corresponding to the target object and the target images, determine the three-dimensional coordinate information of the target three-dimensional point of the target object. In this way, using the projection relationship between the candidate 3D space where the candidate 3D points of the target object are located and the target images under multiple viewing angles, the 3D point of each target object can be accurately detected. At the same time, the projection operation for the candidate 3D points within the candidate 3D space avoids voxelizing the entire space, which significantly improves detection efficiency.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a three-dimensional point detection method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring target images obtained by photographing a plurality of target objects from a plurality of viewing angles, and three-dimensional coordinate information of a candidate three-dimensional point of each target object among the plurality of target objects, determined on the basis of the acquired target images; and, for each target object, performing the following steps: determining a candidate three-dimensional space corresponding to the target object on the basis of the three-dimensional coordinate information of the candidate three-dimensional point of the target object; and determining three-dimensional coordinate information of a target three-dimensional point of the target object on the basis of the candidate three-dimensional space corresponding to the target object and the target images.
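For reference, the relationship between a candidate three-dimensional space and the target images used above is the standard pinhole projection from multi-view geometry. The snippet below is generic computer-vision math given for illustration, not code from the application; K, R and t denote a view's assumed intrinsic matrix, rotation and translation.

    import numpy as np

    def project_point(X_world, K, R, t):
        # Pinhole model: x ~ K (R X + t), followed by perspective division.
        x_cam = R @ X_world + t           # world -> camera coordinates
        u, v, w = K @ x_cam               # homogeneous pixel coordinates
        return np.array([u / w, v / w])

    # Toy example: camera at the origin looking down +Z.
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)
    print(project_point(np.array([0.1, -0.2, 2.0]), K, R, t))  # [690. 260.]

A candidate three-dimensional point is consistent with the multi-view observations when its projection under each view's (K, R, t) falls on that object's two-dimensional detections in the corresponding target image.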
PCT/CN2022/088149 2021-08-13 2022-04-21 Three-dimensional point detection method and apparatus, electronic device and storage medium WO2023015938A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110929512.6A CN113610967B (zh) 2021-08-13 2021-08-13 Three-dimensional point detection method, apparatus, electronic device and storage medium
CN202110929512.6 2021-08-13

Publications (1)

Publication Number Publication Date
WO2023015938A1 true WO2023015938A1 (fr) 2023-02-16

Family

ID=78340615

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/088149 WO2023015938A1 (fr) 2021-08-13 2022-04-21 Three-dimensional point detection method and apparatus, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN113610967B (fr)
WO (1) WO2023015938A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610967B (zh) * 2021-08-13 2024-03-26 北京市商汤科技开发有限公司 Three-dimensional point detection method, apparatus, electronic device and storage medium
CN114821497A (zh) * 2022-02-24 2022-07-29 广州文远知行科技有限公司 Method, apparatus, device and storage medium for determining the position of a target object

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815754B (zh) * 2019-04-12 2023-05-30 Oppo广东移动通信有限公司 Three-dimensional information determination method, three-dimensional information determination apparatus and terminal device
CN111951326B (zh) * 2019-05-15 2024-07-05 北京地平线机器人技术研发有限公司 Method and apparatus for locating skeletal key points of a target object based on multiple cameras
CN112154454A (zh) * 2019-09-10 2020-12-29 深圳市大疆创新科技有限公司 Target object detection method, system, device and storage medium
CN112991440B (zh) * 2019-12-12 2024-04-12 纳恩博(北京)科技有限公司 Vehicle positioning method and apparatus, storage medium and electronic apparatus
CN113168716A (zh) * 2020-03-19 2021-07-23 深圳市大疆创新科技有限公司 Object calculation and point-orbiting flight method and device
CN111582207B (zh) * 2020-05-13 2023-08-15 北京市商汤科技开发有限公司 Image processing method, apparatus, electronic device and storage medium
CN112528831B (zh) * 2020-12-07 2023-11-24 深圳市优必选科技股份有限公司 Multi-target pose estimation method, multi-target pose estimation apparatus and terminal device
CN112926395A (zh) * 2021-01-27 2021-06-08 上海商汤临港智能科技有限公司 Target detection method, apparatus, computer device and storage medium
CN112926461B (zh) * 2021-02-26 2024-04-19 商汤集团有限公司 Neural network training and driving control method and apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180218513A1 (en) * 2017-02-02 2018-08-02 Intel Corporation Method and system of automatic object dimension measurement by using image processing
CN109766882A (zh) * 2018-12-18 2019-05-17 北京诺亦腾科技有限公司 Label identification method and apparatus for human body light spots
CN112200851A (zh) * 2020-12-09 2021-01-08 北京云测信息技术有限公司 Point cloud-based target detection method and apparatus, and electronic device therefor
CN112950668A (zh) * 2021-02-26 2021-06-11 北斗景踪技术(山东)有限公司 Intelligent monitoring method and system based on mold position measurement
CN113610967A (zh) * 2021-08-13 2021-11-05 北京市商汤科技开发有限公司 Three-dimensional point detection method, apparatus, electronic device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TU HANYUE; WANG CHUNYU; ZENG WENJUN: "VoxelPose: Towards Multi-camera 3D Human Pose Estimation in Wild Environment", in: 16th European Conference on Computer Vision (ECCV 2020), pages 197-212, XP047593627, DOI: 10.1007/978-3-030-58452-8_12 *
WU SIZE; JIN SHENG; LIU WENTAO; BAI LEI; QIAN CHEN; LIU DONG; OUYANG WANLI: "Graph-Based 3D Multi-Person Pose Estimation Using Multi-View Images", 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), IEEE, 10 October 2021 (2021-10-10), pages 11128 - 11137, XP034092993, DOI: 10.1109/ICCV48922.2021.01096 *

Also Published As

Publication number Publication date
CN113610967B (zh) 2024-03-26
CN113610967A (zh) 2021-11-05

Similar Documents

Publication Publication Date Title
WO2020259248A1 (fr) Pose determination method and device based on depth information, medium and electronic apparatus
Chen et al. Crowd map: Accurate reconstruction of indoor floor plans from crowdsourced sensor-rich videos
KR101532864B1 (ko) Plane mapping and tracking for mobile devices
WO2023015938A1 (fr) Three-dimensional point detection method and apparatus, electronic device and storage medium
CN109934065B (zh) Method and apparatus for gesture recognition
CN113196296A (zh) Detecting objects in crowds using geometric context
CN109920055A (zh) Construction method and apparatus for a three-dimensional visual map, and electronic device
CN111444744A (zh) Liveness detection method, apparatus and storage medium
WO2023015903A1 (fr) Three-dimensional pose adjustment method and apparatus, electronic device and storage medium
WO2023016271A1 (fr) Posture determination method, electronic device, and readable storage medium
US11922658B2 (en) Pose tracking method, pose tracking device and electronic device
CN111623765B (zh) Indoor positioning method and system based on multimodal data
EP3836085B1 (fr) Multi-view three-dimensional positioning
CN111094895A (zh) System and method for robust self-relocalization in a pre-built visual map
Akkaladevi et al. Tracking multiple rigid symmetric and non-symmetric objects in real-time using depth data
WO2016133697A1 (fr) Projection transformations for depth estimation
WO2023168957A1 (fr) Pose determination method and apparatus, electronic device, storage medium, and program
WO2023016182A1 (fr) Pose determination method and apparatus, electronic device, and readable storage medium
CN116051736A (zh) Three-dimensional reconstruction method and apparatus, edge device, and storage medium
JP2016500890A (ja) Method for initializing and solving the local geometry or surface normals of surfels using images in a parallelizable architecture
US10242453B2 (en) Simultaneous localization and mapping initialization
CN112258647B (zh) Map reconstruction method and apparatus, computer-readable medium, and electronic device
CN112270748A (zh) Image-based three-dimensional reconstruction method and apparatus
Wang et al. Handling occlusion and large displacement through improved RGB-D scene flow estimation
KR20050027796A (ko) Object recognition and tracking method

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22854942

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE