WO2023284479A1 - Plane estimation method and device, electronic device, and storage medium - Google Patents

Plane estimation method and device, electronic device, and storage medium

Info

Publication number
WO2023284479A1
Authority
WO
WIPO (PCT)
Prior art keywords
video frames
plane
feature points
homography matrix
adjacent
Prior art date
Application number
PCT/CN2022/099337
Other languages
English (en)
French (fr)
Inventor
郭亨凯
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2023284479A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Definitions

  • Embodiments of the present disclosure relate to the field of computer technology, for example, to a plane estimation method and device, an electronic device, and a storage medium.
  • Plane estimation in multiple video frames can be applied in many scenarios, for example, 3D reconstruction.
  • When performing plane estimation on multiple frames of images, the traditional approach is usually: first obtain 3D point cloud data based on multi-frame Structure from Motion (SfM) technology; then perform plane estimation based on the 3D point cloud data.
  • The shortcomings of the traditional approach include at least the following: the quality of the 3D point cloud data depends on the SfM accuracy, and when the point cloud quality is poor, the plane estimation result is also poor.
  • Embodiments of the present disclosure provide a plane estimation method and device, an electronic device, and a storage medium, capable of performing plane estimation on multiple video frames with a good plane estimation result.
  • An embodiment of the present disclosure provides a plane estimation method, including: acquiring multiple video frames of a target video, extracting feature points in each video frame, and determining position information, in the multiple video frames, of multiple identical feature points among the feature points in the multiple video frames; determining, based on a random sampling consensus algorithm, the homography matrix of a plane between two adjacent video frames according to the position information of the multiple identical feature points in the multiple video frames; and determining parameter information of the plane according to the homography matrix of the plane between two adjacent video frames.
  • An embodiment of the present disclosure also provides a plane estimation device, including:
  • a position information determination module configured to acquire multiple video frames of the target video, extract feature points in each video frame, and determine position information, in the multiple video frames, of multiple identical feature points among the feature points in the multiple video frames;
  • a homography matrix determination module configured to determine, based on a random sampling consensus algorithm, the homography matrix of a plane between two adjacent video frames according to the position information of the multiple identical feature points in the multiple video frames;
  • a plane parameter determination module configured to determine the parameter information of the plane according to the homography matrix of the plane between two adjacent video frames.
  • An embodiment of the present disclosure also provides an electronic device, including: at least one processor; and a storage device configured to store at least one program, where the at least one program, when executed by the at least one processor, causes the at least one processor to implement the plane estimation method according to any embodiment of the present disclosure.
  • An embodiment of the present disclosure further provides a storage medium containing computer-executable instructions, where the computer-executable instructions, when executed by a computer processor, are used to perform the plane estimation method according to any embodiment of the present disclosure.
  • FIG. 1 is a schematic flowchart of a plane estimation method provided by Embodiment 1 of the present disclosure
  • FIG. 2 is a schematic flowchart of a plane estimation method provided in Embodiment 2 of the present disclosure
  • FIG. 3 is a schematic flowchart of a plane estimation method provided by Embodiment 3 of the present disclosure
  • FIG. 4 is a schematic structural diagram of a plane estimation device provided by Embodiment 4 of the present disclosure.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by Embodiment 5 of the present disclosure.
  • The term “comprise” and its variants are open-ended, i.e., “including but not limited to”.
  • The term “based on” means “based at least in part on”.
  • The term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • FIG. 1 is a schematic flowchart of a plane estimation method provided by Embodiment 1 of the present disclosure.
  • The embodiments of the present disclosure are applicable to performing plane estimation on multiple frames of images, for example, estimating multiple planes across the video frames of a video.
  • the method can be executed by a plane estimation device, which can be implemented in the form of software and/or hardware, and the device can be configured in an electronic device, such as a computer.
  • the plane estimation method provided by this embodiment includes:
  • The target video can be regarded as a video whose contained plane information needs to be determined.
  • The target video can be obtained from a preset storage location, or a video captured in real time can be used as the target video.
  • Ways of obtaining the multiple video frames of the target video may include, but are not limited to: parsing each frame of the target video with an open-source program such as ffmpeg to obtain the video frames; or extracting video frames from the target video at predetermined intervals with a pre-written java program; and so on.
  • Feature points may be extracted directly from the acquired video frames, or the video frames may be filtered first and feature points then extracted from the filtered frames.
  • Video frame filtering can be based on the similarity of the image content of the frames (for example, for adjacent frames whose content similarity is greater than a preset value, only one of them is kept); or on the interval length between frames (for example, only one video frame is retained within a certain time interval); or on other approaches, which can be chosen according to the application scenario and are not exhaustively listed here. By filtering the video frames, plane estimation efficiency can be guaranteed while preserving the quality of the plane estimation.
  • The feature points in a video frame may include, but are not limited to, corner points and/or local pixel feature points (such as pixel maximum points and pixel minimum points), and the type of feature point to be extracted can be set according to the application scenario. Different types of feature points can be extracted with different algorithms. For example, corner points can be extracted with the Harris corner detection algorithm, and local pixel feature points with the Difference of Gaussians (DoG) operator; this is not an exhaustive list.
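  • As an illustrative sketch only (the disclosure names no implementation language; Python with OpenCV and all parameter values below are our assumptions), Harris-based corner extraction for a single frame might look like:

```python
# Illustrative only: Harris-based corner extraction with OpenCV.
# Parameter values (maxCorners, qualityLevel, ...) are assumptions.
import cv2
import numpy as np

def extract_feature_points(frame: np.ndarray, max_corners: int = 500) -> np.ndarray:
    """Return an (N, 2) array of corner pixel coordinates for one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # useHarrisDetector=True applies the Harris corner response mentioned above.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=8,
                                      useHarrisDetector=True, k=0.04)
    return corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))
```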
  • The position information of a feature point in a video frame can be regarded as the feature point's pixel coordinates in that frame. After the feature points of each video frame are determined, the position of each identical feature point can be tracked across the frames, and thereby its position information in each video frame can be determined.
  • The position of the same feature point in each video frame can be tracked based on an optical flow algorithm, such as the KLT tracking algorithm (Kanade-Lucas-Tomasi Tracking Method).
  • Alternatively, feature point matching can be performed based on the Oriented FAST and Rotated BRIEF (ORB) feature principle to determine the position information of the same feature point in each video frame; this is not an exhaustive list.
  • A track may be generated for each identical feature point to represent the feature point's position information in each video frame.
  • For example, with n video frames, the track corresponding to feature point 1 may be [x_11, y_11, x_21, y_21, x_31, y_31, ..., x_n1, y_n1], where x denotes the pixel abscissa and y the pixel ordinate; in the subscripts, the first number denotes the frame index of the video frame and the second denotes the feature point label. For example, x_21 denotes the pixel abscissa of feature point 1 in the second video frame.
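  • A hedged sketch of building such tracks with the KLT tracker follows; cv2.calcOpticalFlowPyrLK is OpenCV's pyramidal Lucas-Kanade implementation, and the array layout mirrors the [x_11, y_11, ..., x_n1, y_n1] example above. Parameter values are assumptions.

```python
# Sketch: track the first frame's corners through all frames with KLT and
# store them in a (num_points, num_frames, 2) array; a lost point's later
# entries stay NaN and its track would be discarded in practice.
import cv2
import numpy as np

def build_tracks(frames: list) -> np.ndarray:
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    pts = cv2.goodFeaturesToTrack(grays[0], 500, 0.01, 8)  # (N, 1, 2) float32
    tracks = np.full((len(pts), len(grays), 2), np.nan, dtype=np.float32)
    tracks[:, 0] = pts.reshape(-1, 2)
    alive = np.ones(len(pts), dtype=bool)
    for i in range(1, len(grays)):
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(grays[i - 1], grays[i], pts, None)
        alive &= status.ravel().astype(bool)        # drop points KLT lost
        tracks[alive, i] = nxt.reshape(-1, 2)[alive]
        pts = nxt
    return tracks
```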
  • The random sampling consensus algorithm can be regarded as an algorithm that, given a sample data set containing both normal data and abnormal data, computes the parameters of a mathematical model fitting the normal data.
  • When determining the homography matrix of one plane between two adjacent video frames, the tracks of the feature points can serve as the sample data set; the tracks of feature points belonging to the plane serve as the normal data, and the tracks of feature points not belonging to the plane serve as the abnormal data; the homography matrices of the plane between adjacent video frames serve as the computed mathematical model parameters fitting the normal data.
  • For example, with n video frames, the mathematical model parameters corresponding to plane 1 may include the homography matrix of the plane between the first and second video frames, the homography matrix of the plane between the second and third video frames, ..., and the homography matrix of the plane between the (n-1)-th and n-th video frames.
  • These n-1 homography matrices can serve as plane 1's mathematical model parameters fitting the normal data.
  • A plane's homography matrix, which can be thought of as the plane's perspective transformation matrix, can be used to represent the perspective transformation of the plane from one view to another. It encodes the intrinsic parameters (e.g., camera focal length and lens distortion) and extrinsic parameters (e.g., rotation matrix and translation matrix) of the camera used to capture the video.
  • The parameters of the homography matrix of the plane between two adjacent video frames can be computed from the position information of the same feature points in the two adjacent frames. It can be considered that, based on the position information of the feature points contained in the tracks in each video frame, the homography matrix of the plane between every two adjacent video frames can be computed.
  • The position information of each identical feature point in each video frame may be used as the input of the random sampling consensus algorithm. The feature points are randomly sampled based on the algorithm to determine the feature points belonging to one plane, and the homography matrix of that plane between every two adjacent video frames is output based on the tracks of the determined feature points. By directly performing random sampling consensus over the entire video, the plane's homography matrices are obtained, laying the foundation for determining the plane's parameter information.
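  • For orientation, the per-adjacent-pair estimation could be sketched as below; note this leans on the RANSAC variant built into OpenCV's findHomography rather than the multi-frame sampling loop of the disclosure (a sketch of that loop appears in Embodiment 2 below), and it assumes complete tracks:

```python
# Sketch: one homography per adjacent frame pair from complete tracks,
# using the RANSAC built into cv2.findHomography (3.0 px threshold assumed).
import cv2

def pairwise_homographies(tracks):
    """tracks: (num_points, num_frames, 2) with no NaNs. Returns n-1 matrices."""
    Hs = []
    for i in range(tracks.shape[1] - 1):
        src = tracks[:, i].reshape(-1, 1, 2)
        dst = tracks[:, i + 1].reshape(-1, 1, 2)
        H, _mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        Hs.append(H)
    return Hs
```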
  • The parameter information of a plane can be regarded as the parameters in the plane's expression in the spatial coordinate system. For example, if the plane's expression is Ax+By+Cz+D=0, then A, B, C, and D are the plane's parameter information.
  • The homography matrix of the plane between two adjacent video frames may be decomposed to obtain the camera intrinsic and extrinsic parameters encoded in it. The parameters of the plane's expression in the spatial coordinate system can then be obtained from the position information of the feature points belonging to the plane in the video frames, together with the camera intrinsic and extrinsic parameters, yielding the plane's parameter information.
  • The homography matrix of the plane between two adjacent video frames can be decomposed in several ways, for example: with the Faugeras (1988) method used in ORB-SLAM2 (Oriented FAST and Rotated BRIEF - Simultaneous Localization and Mapping 2); or with the decomposeHomographyMat function in OpenCV, which can be implemented with the INRIA (2007) method; this is not an exhaustive list.
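  • A minimal sketch of the decomposition step via OpenCV's decomposeHomographyMat, which implements the INRIA (2007) method cited above; the intrinsic matrix K below is an assumed example (e.g., obtained from calibration):

```python
# Sketch: decompose a plane homography into candidate (R, t, n) solutions.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])  # assumed camera intrinsics for illustration

def decompose(H: np.ndarray):
    # Up to four mathematically valid solutions come back; the physically
    # correct one is typically chosen by requiring tracked points to lie
    # in front of the camera.
    _num, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return list(zip(rotations, translations, normals))
```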
  • In this way, the parameter information of the planes in multiple video frames can be obtained while avoiding the error introduced by the SfM algorithm.
  • Moreover, since multiple video frames contain richer plane information, plane estimation based on video frames not only determines the parameter information of multiple planes quickly and conveniently, but also yields better plane estimates.
  • In some optional implementations, after determining the parameter information of the plane, the method further includes: determining position and pose information of a virtual object according to the plane's parameter information; and, according to the position and pose information, displaying the virtual object in association with the plane in a preset video frame.
  • The virtual object may be, for example, the visual appearance of a virtual artificial intelligence (AI) agent, or an interactive virtual control.
  • The position and pose information of the virtual object may include, but is not limited to, the virtual object's position and rotation angle in the spatial coordinate system.
  • The preset video frame may be, for example, a video frame containing a plane to be displayed in association, or a video frame within a preset time period.
  • The associated display modes may include, but are not limited to, displaying the virtual object beneath the plane, hovering it above the plane, and other associated display modes.
  • After the plane's parameter information is determined, the virtual object's position and rotation angle in the spatial coordinate system can be determined according to the plane's expression in the spatial coordinate system and the associated display mode of the virtual object and the plane. For example, if the associated display mode is that the virtual object is displayed on a plane, then in the spatial coordinate system the virtual object can simply be displaced vertically upward by a certain amount from the position and rotation angle corresponding to the plane.
  • After the virtual object's position and pose information is determined, its pixel position, rotation angle, and other information in the preset video frame can further be determined from its position and rotation angle in the spatial coordinate system and from the transformation between the spatial coordinate system and the pixel coordinate system of the preset video frame. The virtual object can then be rendered according to that pixel position, rotation angle, and other information. Since the virtual object's position and pose information in the spatial coordinate system is determined according to its associated display mode with the plane, rendering the virtual object realizes the associated display of the virtual object and the plane in the preset video frame.
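  • As a toy sketch of this placement step (purely illustrative; the offset value and the anchor choice are assumptions, not the disclosure's method), displaying a virtual object "on" a plane Ax+By+Cz+D=0 might reduce to offsetting an anchor point along the plane's unit normal:

```python
# Toy sketch: anchor a virtual object relative to a plane Ax + By + Cz + D = 0
# by offsetting along the unit normal; offset and anchor are illustrative.
import numpy as np

def object_pose_on_plane(A, B, C, D, anchor, offset: float = 0.1):
    normal = np.array([A, B, C], dtype=float)
    normal /= np.linalg.norm(normal)          # unit normal fixes the "up" axis
    position = np.asarray(anchor, float) + offset * normal  # hover above plane
    return position, normal
```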
  • In these optional implementations, the planes determined to be contained in the video can be applied to rendering virtual objects, so that augmented reality can be implemented on the video frames to improve the user experience.
  • In the technical solution of the embodiments of the present disclosure, multiple video frames of the target video are acquired, feature points are extracted in each video frame, and the position information of multiple identical feature points in the multiple video frames is determined; based on a random sampling consensus algorithm, the homography matrix of the plane between two adjacent video frames is determined from the position information of the multiple identical feature points in the two adjacent frames; and the plane's parameter information is determined from that homography matrix.
  • In this way, the parameter information of the planes in multiple video frames can be obtained while avoiding the error introduced by the SfM algorithm; plane estimation based on the video frames not only determines the parameter information of multiple planes quickly and conveniently, but also yields better plane estimates.
  • This embodiment of the present disclosure may be combined with the optional solutions in the plane estimation method provided in the foregoing embodiment.
  • The plane estimation method provided in this embodiment details the steps of determining the homography matrix of a plane between two adjacent video frames. By cyclically extracting a preset number of first feature points to estimate initial homography matrices, and counting the inliers among the second feature points according to those matrices, the cycle with the largest number of inliers can be found; the position information of the inliers of that cycle is then used to determine the homography matrix of the plane to which the inliers belong. Random sampling consensus can thus be performed directly on the video, so that the parameter information of multiple planes in multiple frames of images can be determined quickly and conveniently.
  • FIG. 2 is a schematic flowchart of a plane estimation method provided by Embodiment 2 of the present disclosure. As shown in Figure 2, the plane estimation method provided in this embodiment includes:
  • S210: Acquire multiple video frames of the target video, extract feature points in each video frame, and determine position information of multiple identical feature points in the multiple video frames.
  • Determining the homography matrix of the plane between two adjacent video frames may include steps S220-S250. S220: Cyclically extract a preset number of feature points from the multiple identical feature points as first feature points, and take the unextracted feature points as second feature points. Since a homography matrix has 8 degrees of freedom and each matched feature point contributes two constraints, the preset number must be at least 4 in order to determine the 8 unknown parameters of the homography matrix. It can be considered that, in each cycle, a preset number of first feature points are extracted at random.
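  • To make the "at least 4" concrete: each correspondence (x, y) -> (x', y') yields two linear equations in H's eight unknowns, so four correspondences pin H down. OpenCV's getPerspectiveTransform solves exactly this minimal four-point case (the coordinates below are made up for illustration):

```python
# Minimal four-point homography estimate; coordinates are invented examples.
import cv2
import numpy as np

src = np.float32([[10, 10], [200, 12], [205, 180], [8, 175]])   # frame k
dst = np.float32([[14, 9], [203, 15], [201, 185], [11, 180]])   # frame k+1
H0 = cv2.getPerspectiveTransform(src, dst)  # 3x3 initial homography estimate
```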
  • S230: Determine the initial homography matrix of the plane between every two adjacent video frames according to the position information of the preset number of first feature points in the two adjacent frames.
  • The position information of the preset number of first feature points in every two adjacent video frames can be determined from those feature points' tracks. The initial homography matrix of the plane between the two adjacent frames can then be determined from that position information.
  • Each second feature point can be used to verify whether the set of initial homography matrices is optimal. For example, if, in the track of a second feature point, the point's position information in every two adjacent video frames conforms to the mapping relationship represented by the set of initial homography matrices, the second feature point can be considered to match each initial homography matrix in the set. The more second feature points match each initial homography matrix, the better the set of initial homography matrices can be considered to be, and the more correctly the set can be considered to represent the projective relationship of one plane across the multiple video frames.
  • Judging, according to the position information of each second feature point in every two adjacent video frames and each initial homography matrix, whether each second feature point matches each initial homography matrix may include: determining the reprojected position information of each second feature point in the later of every two adjacent video frames, according to the point's position information in the earlier of the two frames and the initial homography matrix of the plane between the two frames; determining the reprojection error of each second feature point between every two adjacent video frames, according to the point's position information and reprojected position information in the later frame; and judging, according to the reprojection errors of each second feature point between every two adjacent video frames, whether each second feature point matches each initial homography matrix.
  • That is, the position information of a second feature point in the earlier of two adjacent video frames can be substituted into the initial homography matrix of the plane between the two frames to obtain the point's reprojected position information in the later frame.
  • The reprojection error can then be taken as the pixel distance (for example, the Euclidean distance between pixel coordinates) between the observed position and the reprojected position in the later frame.
  • Based on the multiple reprojection errors, the final error value may be determined by computing the average reprojection error, taking the median of the reprojection errors, or selecting the largest/smallest of the multiple reprojection errors. Finally, whether the second feature point matches each initial homography matrix can be judged from the final error value.
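  • A hedged sketch of this inlier test is given below; the helper names and the 3-pixel threshold are our assumptions, and the track layout follows the earlier sketches:

```python
# Sketch: reproject a second feature point through each initial homography
# and average the per-pair pixel errors (the average-error criterion below).
import cv2
import numpy as np

def reprojection_error(H, p_earlier, p_later) -> float:
    src = np.float32([[p_earlier]])                  # shape (1, 1, 2)
    proj = cv2.perspectiveTransform(src, H)[0, 0]    # reprojected position
    return float(np.linalg.norm(proj - np.float32(p_later)))  # Euclidean px distance

def is_inlier(Hs, track, threshold: float = 3.0) -> bool:
    """track: (num_frames, 2); Hs: list of n-1 initial homographies."""
    errs = [reprojection_error(Hs[i], track[i], track[i + 1])
            for i in range(len(Hs))]
    return float(np.mean(errs)) < threshold
```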
  • For example, suppose the initial homography matrix of the plane between the first and second video frames is H_1, ..., and the initial homography matrix of the plane between the (n-1)-th and n-th video frames is H_{n-1}, so that a total of n-1 initial homography matrices are determined. Feature point 1 is a second feature point, and its track may be [x_11, y_11, x_21, y_21, x_31, y_31, ..., x_n1, y_n1].
  • Judging whether feature point 1 matches the initial homography matrices H_1 to H_{n-1} may include: for H_1, substituting feature point 1's position information x_11, y_11 in the first video frame into H_1 to obtain its reprojected position information x_21', y_21' in the second video frame, and computing the reprojection error w_1 between (x_21, y_21) and (x_21', y_21') (such as the Euclidean distance between the two points); computing the reprojection errors w_2 to w_{n-1} corresponding to H_2 to H_{n-1} in the same way; and judging from w_1 to w_{n-1} whether feature point 1 matches the initial homography matrices H_1 to H_{n-1}.
  • In some optional implementations, judging whether each second feature point matches each initial homography matrix includes: determining the average reprojection error according to the reprojection errors of the second feature point between every two adjacent video frames; and judging, according to the average reprojection error and a preset threshold, whether the second feature point matches each initial homography matrix. For example, the second feature point may be judged to match each initial homography matrix when the average reprojection error is smaller than the preset threshold.
  • The preset threshold can be set according to experimental or empirical values.
  • In response to a second feature point matching each initial homography matrix, the second feature point is taken as an inlier; the cyclic extraction stops when the number of cycles reaches the preset number, and the target cycle with the largest number of inliers among all cycles is determined.
  • The randomly extracted preset number of first feature points may or may not belong to the same plane.
  • When the randomly extracted first feature points belong to the same plane, every second feature point belonging to that plane can match each initial homography matrix; when they do not belong to the same plane, essentially only a small number of second feature points can match each initial homography matrix.
  • The number of cycles can be preset based on empirical or experimental values. By performing the preset number of cycles, extracting a preset number of first feature points in each cycle, determining the plane's initial homography matrices, and counting the second feature points that qualify as inliers, the position information, in every two adjacent video frames, of all inliers of the cycle with the largest number of inliers can be used to determine the homography matrix of the plane between the two adjacent frames, so that the optimal solution of the initial homography matrices is obtained.
  • S260: Determine the homography matrix of the plane between the two adjacent video frames according to the position information, in the two adjacent frames, of all inliers determined in the target cycle.
  • That is, the tracks of all inliers can be combined to optimize each initial homography matrix, yielding the final homography matrix of the plane between every two adjacent video frames.
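  • Pulling S220-S260 together, a condensed sketch of the whole loop might read as follows (the cycle count and threshold are assumed settings, and is_inlier is the helper sketched earlier; this is an illustration of the described procedure, not the disclosure's exact implementation):

```python
# Condensed sketch of S220-S260: sample 4 tracks per cycle, fit one initial
# homography per adjacent pair, count inliers, keep the best cycle, then
# re-fit the final homographies from all of its inliers.
import random
import cv2
import numpy as np

def ransac_plane(tracks, n_cycles: int = 200, threshold: float = 3.0):
    n_pts, n_frames = tracks.shape[0], tracks.shape[1]
    best_inliers = []
    for _ in range(n_cycles):
        sample = random.sample(range(n_pts), 4)           # first feature points
        Hs = [cv2.getPerspectiveTransform(np.float32(tracks[sample, i]),
                                          np.float32(tracks[sample, i + 1]))
              for i in range(n_frames - 1)]
        inliers = [j for j in range(n_pts)
                   if j not in sample and is_inlier(Hs, tracks[j], threshold)]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers + sample               # target cycle so far
    # Final homographies: re-fit each pair from all inliers of the target cycle.
    final_Hs = [cv2.findHomography(tracks[best_inliers, i],
                                   tracks[best_inliers, i + 1])[0]
                for i in range(n_frames - 1)]
    return final_Hs, best_inliers
```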
  • The technical solution of this embodiment details the steps of determining the homography matrices of a plane between adjacent video frames.
  • By cyclically extracting a preset number of first feature points to estimate initial homography matrices, and counting the inliers among the second feature points according to those matrices, the cycle with the largest number of inliers can be found; the position information of the inliers of that cycle is then used to determine the homography matrix of the plane to which the inliers belong. Random sampling consensus can thus be performed directly on the video, so that the plane parameter information in multiple frames of images can be determined quickly and conveniently.
  • The plane estimation method provided by this embodiment belongs to the same disclosed concept as the plane estimation method provided by the foregoing embodiment; technical details not described in this embodiment can be found in the foregoing embodiment, and the same technical features have the same effects here as in the foregoing embodiment.
  • This embodiment of the present disclosure may be combined with the optional solutions in the plane estimation method provided in the foregoing embodiments.
  • The plane estimation method provided in this embodiment details the steps of determining the homography matrices of multiple planes in a video. After the parameter information of any plane in the video is determined, the feature points used to determine that plane can be removed and random sampling consensus performed on the remaining feature points, so that the multiple planes appearing in the video can be estimated and their parameter information determined quickly and conveniently.
  • FIG. 3 is a schematic flowchart of a plane estimation method provided by Embodiment 3 of the present disclosure. As shown in Figure 3, the plane estimation method provided in this embodiment includes:
  • S310: Acquire multiple video frames of the target video, extract feature points in each video frame, and determine position information of multiple identical feature points in the multiple video frames.
  • The homography matrix of a currently determined plane between two adjacent video frames may be taken as the homography matrix of the current plane between the two adjacent video frames.
  • The method for determining the homography matrix of the current plane between two adjacent video frames may be the same as the method for determining the homography matrix of any plane between two adjacent video frames; for details, refer to the description above, which is not repeated here.
  • Since a video can contain plane information of multiple planes, after determining the homography matrix of the current plane, the feature points used to determine the current plane's homography matrix can be removed, and the random sampling consensus algorithm executed again to determine the parameter information of other planes.
  • The preset number can be set according to experimental or empirical values. When the number of remaining feature points is less than the preset number, the parameter information of all planes in the video that can be estimated can be considered to have been estimated. At this point, the removal of the feature points used to determine the current plane's homography matrix may stop, and execution of the random sampling consensus algorithm may stop.
  • S350: Determine the parameter information of each plane according to the homography matrix of each plane between two adjacent video frames.
  • For the manner in which a plane's parameter information is determined from its homography matrices, reference can be made to the description above; in this way, the parameter information of each plane can be determined.
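  • A hedged sketch of this multi-plane procedure follows, reusing the ransac_plane helper sketched in Embodiment 2; min_remaining stands in for the preset number and its value is an assumption:

```python
# Sketch of Embodiment 3: peel off one plane at a time, dropping each
# plane's inlier tracks before re-running RANSAC on what remains.
def estimate_all_planes(tracks, min_remaining: int = 20):
    planes, remaining = [], list(range(tracks.shape[0]))
    while len(remaining) >= min_remaining:
        Hs, inliers = ransac_plane(tracks[remaining])   # current plane's H's
        planes.append(Hs)
        used = {remaining[j] for j in inliers}          # map back to global ids
        remaining = [j for j in remaining if j not in used]
    return planes
```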
  • The technical solution of this embodiment details the steps of determining the homography matrices of multiple planes in a video. After the parameter information of any plane in the video is determined, the feature points used to determine that plane can be removed and random sampling consensus performed on the remaining feature points, so that the multiple planes appearing in the video can be estimated and their parameter information determined quickly and conveniently.
  • The plane estimation method provided by this embodiment belongs to the same disclosed concept as the plane estimation method provided by the foregoing embodiments; technical details not described in this embodiment can be found in the foregoing embodiments, and the same technical features have the same effects here as in the foregoing embodiments.
  • FIG. 4 is a schematic structural diagram of a plane estimation device provided by Embodiment 4 of the present disclosure.
  • The plane estimation device provided in this embodiment is applicable to performing plane estimation on multiple frames of images, for example, to performing multiple-plane estimation on a video.
  • The plane estimation device includes: a position information determination module 410 configured to acquire multiple video frames of the target video, extract feature points in each video frame, and determine the position information, in the multiple video frames, of multiple identical feature points among the feature points in the multiple video frames; a homography matrix determination module 420 configured to determine, based on a random sampling consensus algorithm, the homography matrix of the plane between two adjacent video frames according to the position information of the multiple identical feature points in the multiple video frames; and a plane parameter determination module 430 configured to determine the parameter information of the plane according to the homography matrix of the plane between two adjacent video frames.
  • In some optional implementations, the homography matrix determination module includes: an extraction unit configured to cyclically extract a preset number of feature points from the multiple identical feature points as first feature points, and take the unextracted feature points as second feature points; an initial matrix determination unit configured to determine the initial homography matrix of the plane between every two adjacent video frames according to the position information of the preset number of first feature points in the two adjacent frames; a judging unit configured to judge, according to the position information of each second feature point in every two adjacent video frames and each initial homography matrix, whether each second feature point matches each initial homography matrix; an inlier determination unit configured to, in response to a second feature point matching each initial homography matrix, take the second feature point as an inlier, stop the cyclic extraction when the number of cycles reaches the preset number, and determine the target cycle with the largest number of inliers among the preset number of cycles; and a final matrix determination unit configured to determine the homography matrix of the plane between two adjacent video frames according to the position information, in the two adjacent frames, of all inliers determined in the target cycle.
  • In some optional implementations, the judging unit includes: a reprojection subunit configured to determine the reprojected position information of each second feature point in the later of every two adjacent video frames, according to the point's position information in the earlier of the two frames and the initial homography matrix of the plane between the two frames; an error determination subunit configured to determine the reprojection error of each second feature point between every two adjacent video frames, according to the point's position information and reprojected position information in the later frame; and a judging subunit configured to judge, according to the reprojection errors of each second feature point between every two adjacent video frames, whether each second feature point matches each initial homography matrix.
  • In some optional implementations, the judging subunit is configured to: determine the average reprojection error according to the reprojection errors of each second feature point between every two adjacent video frames; and judge, according to the average reprojection error and a preset threshold, whether each second feature point matches each initial homography matrix.
  • In some optional implementations, the homography matrix determination module includes: a removal unit configured to, after the homography matrix of the current plane between two adjacent video frames is determined, cyclically remove, from the multiple identical feature points, the feature points used to determine the current plane's homography matrix to obtain the remaining feature points; and a matrix determination unit configured to determine, based on the random sampling consensus algorithm and according to the position information of the remaining feature points in the two adjacent video frames, the homography matrices of planes other than the current plane in the two adjacent frames, stopping the cyclic removal when the number of remaining feature points is less than the preset number.
  • In some optional implementations, the plane estimation device further includes: a virtual display module configured to determine the position and pose information of a virtual object according to the plane's parameter information, and to display, according to the position and pose information, the virtual object in association with the plane in a preset video frame.
  • the plane estimation device provided by the embodiments of the present disclosure can execute the plane estimation method provided by any embodiment of the present disclosure, and has corresponding functional modules for executing the method.
  • Referring now to FIG. 5, it shows a schematic structural diagram of an electronic device 500 (such as the terminal device or server in FIG. 5) suitable for implementing an embodiment of the present disclosure.
  • The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital televisions (TVs) and desktop computers.
  • The electronic device shown in FIG. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
  • As shown in FIG. 5, the electronic device 500 may include a processing device (such as a central processing unit or a graphics processing unit) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503.
  • In the RAM 503, various programs and data necessary for the operation of the electronic device 500 are also stored.
  • The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
  • Generally, the following devices may be connected to the I/O interface 505: an input device 506 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output device 507 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage device 508 including, for example, a magnetic tape and a hard disk; and a communication device 509.
  • The communication device 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 5 shows the electronic device 500 having various devices, it should be understood that it is not required to implement or have all of the devices shown; more or fewer devices may alternatively be implemented or provided.
  • Embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, where the computer program contains program code for performing the method shown in the flowchart.
  • The computer program may be downloaded and installed from a network via the communication device 509, or installed from the storage device 508, or installed from the ROM 502.
  • When the computer program is executed by the processing device 501, the above-mentioned functions defined in the plane estimation method of the embodiments of the present disclosure are performed.
  • The electronic device provided by the embodiments of the present disclosure belongs to the same disclosed concept as the plane estimation method provided by the foregoing embodiments; technical details not described in this embodiment can be found in the foregoing embodiments, and this embodiment has the same effects as the foregoing embodiments.
  • An embodiment of the present disclosure provides a computer storage medium, on which a computer program is stored, and when the program is executed by a processor, the plane estimation method provided in the foregoing embodiments is implemented.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof.
  • Computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM) or flash memory (FLASH), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transport a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • In some embodiments, the client and the server can communicate using any currently known or future-developed network protocol, such as the Hypertext Transfer Protocol (HTTP), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • The above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: acquire multiple video frames of the target video, extract feature points in each video frame, and determine the position information, in the multiple video frames, of multiple identical feature points among the feature points in the multiple video frames; determine, based on a random sampling consensus algorithm, the homography matrix of the plane between two adjacent video frames according to the position information of the multiple identical feature points in the multiple video frames; and determine the parameter information of the plane according to the homography matrix of the plane between two adjacent video frames.
  • Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a LAN or a WAN, or it can be connected to an external computer (e.g., via the Internet using an Internet service provider).
  • Each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The names of the units and modules do not, in some cases, constitute a limitation on the units and modules themselves.
  • Exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • A machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard drive, a RAM, a ROM, an EPROM or flash memory, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • Example 1 provides a plane estimation method, the method including: acquiring multiple video frames of the target video, extracting feature points in each video frame, and determining the position information, in the multiple video frames, of multiple identical feature points among the feature points in the multiple video frames; determining, based on a random sampling consensus algorithm, the homography matrix of the plane between two adjacent video frames according to the position information of the multiple identical feature points in the multiple video frames; and determining the parameter information of the plane according to the homography matrix of the plane between two adjacent video frames.
  • Example 2 provides a plane estimation method, further including: in some optional implementations, determining the homography matrix of the plane between two adjacent video frames, based on the random sampling consensus algorithm and according to the position information of the multiple identical feature points in the multiple video frames, includes: cyclically extracting a preset number of feature points from the multiple identical feature points as first feature points, and taking the unextracted feature points as second feature points; determining the initial homography matrix of the plane between every two adjacent video frames according to the position information of the preset number of first feature points in the two adjacent frames; judging, according to the position information of each second feature point in every two adjacent video frames and each initial homography matrix, whether each second feature point matches each initial homography matrix; in response to a second feature point matching each initial homography matrix, taking the second feature point as an inlier, stopping the cyclic extraction when the number of cycles reaches the preset number, and determining the target cycle with the largest number of inliers; and determining the homography matrix of the plane between the two adjacent video frames according to the position information, in the two adjacent frames, of all inliers determined in the target cycle.
  • Example 3 provides a plane estimation method, further including: in some optional implementations, judging, according to the position information of each second feature point in every two adjacent video frames and each initial homography matrix, whether each second feature point matches each initial homography matrix includes: determining the reprojected position information of each second feature point in the later of every two adjacent video frames, according to the point's position information in the earlier of the two frames and the initial homography matrix of the plane between the two frames; determining the reprojection error of each second feature point between every two adjacent video frames, according to the point's position information and reprojected position information in the later frame; and judging, according to the reprojection errors of each second feature point between every two adjacent video frames, whether each second feature point matches each initial homography matrix.
  • Example 4 provides a plane estimation method, further including: in some optional implementations, judging, according to the reprojection errors of each second feature point between every two adjacent video frames, whether each second feature point matches each initial homography matrix includes: determining the average reprojection error according to the reprojection errors of each second feature point between every two adjacent video frames; and judging, according to the average reprojection error and a preset threshold, whether each second feature point matches each initial homography matrix.
  • Example 5 provides a plane estimation method, further including: in some optional implementations, determining the homography matrix of the plane between two adjacent video frames includes: after the homography matrix of the current plane between two adjacent video frames is determined, cyclically removing, from the multiple identical feature points, the feature points used to determine the current plane's homography matrix to obtain the remaining feature points; and determining, based on the random sampling consensus algorithm and according to the position information of the remaining feature points in the two adjacent video frames, the homography matrices of planes other than the current plane in the two adjacent frames, stopping the cyclic removal when the number of remaining feature points is less than the preset number.
  • Example 6 provides a plane estimation method, further including: in some optional implementations, after the parameter information of the plane is determined: determining the position and pose information of a virtual object according to the plane's parameter information; and, according to the position and pose information, displaying the virtual object in association with the plane in a preset video frame.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure disclose a plane estimation method and device, an electronic device, and a storage medium. The method includes: acquiring multiple video frames of a target video, extracting feature points in each video frame, and determining position information, in the multiple video frames, of multiple identical feature points among the feature points in the multiple video frames; determining, based on a random sampling consensus algorithm, the homography matrix of a plane between two adjacent video frames according to the position information of the multiple identical feature points in the multiple video frames; and determining parameter information of the plane according to the homography matrix of the plane between two adjacent video frames.

Description

Plane estimation method and device, electronic device, and storage medium

This application claims priority to Chinese Patent Application No. 202110784263.6, filed with the Chinese Patent Office on July 12, 2021, the entire contents of which are incorporated herein by reference.
Technical Field

Embodiments of the present disclosure relate to the field of computer technology, for example, to a plane estimation method and device, an electronic device, and a storage medium.

Background

In the related art, plane estimation in multiple video frames can be applied in many scenarios, for example, 3D reconstruction. When performing plane estimation on multiple frames of images, the traditional approach is usually: first obtain 3D point cloud data based on multi-frame Structure from Motion (SfM) technology; then perform plane estimation based on the 3D point cloud data.

The shortcomings of the traditional approach include at least the following: the quality of the 3D point cloud data depends on the SfM accuracy, and when the point cloud quality is poor, the plane estimation result is also poor.
Summary

Embodiments of the present disclosure provide a plane estimation method and device, an electronic device, and a storage medium, capable of performing plane estimation on multiple video frames with a good plane estimation result.

An embodiment of the present disclosure provides a plane estimation method, including:

acquiring multiple video frames of a target video, extracting feature points in each video frame, and determining position information, in the multiple video frames, of multiple identical feature points among the feature points in the multiple video frames;

determining, based on a random sampling consensus algorithm, the homography matrix of a plane between two adjacent video frames according to the position information of the multiple identical feature points in the multiple video frames;

determining parameter information of the plane according to the homography matrix of the plane between two adjacent video frames.
An embodiment of the present disclosure further provides a plane estimation device, including:

a position information determination module configured to acquire multiple video frames of a target video, extract feature points in each video frame, and determine position information, in the multiple video frames, of multiple identical feature points among the feature points in the multiple video frames;

a homography matrix determination module configured to determine, based on a random sampling consensus algorithm, the homography matrix of a plane between two adjacent video frames according to the position information of the multiple identical feature points in the multiple video frames;

a plane parameter determination module configured to determine parameter information of the plane according to the homography matrix of the plane between two adjacent video frames.
An embodiment of the present disclosure further provides an electronic device, including:

at least one processor; and

a storage device configured to store at least one program;

where the at least one program, when executed by the at least one processor, causes the at least one processor to implement the plane estimation method according to any embodiment of the present disclosure.

An embodiment of the present disclosure further provides a storage medium containing computer-executable instructions, where the computer-executable instructions, when executed by a computer processor, are used to perform the plane estimation method according to any embodiment of the present disclosure.
Brief Description of the Drawings

The following detailed description should be read in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic, and that components and elements are not necessarily drawn to scale.
FIG. 1 is a schematic flowchart of a plane estimation method provided by Embodiment 1 of the present disclosure;

FIG. 2 is a schematic flowchart of a plane estimation method provided by Embodiment 2 of the present disclosure;

FIG. 3 is a schematic flowchart of a plane estimation method provided by Embodiment 3 of the present disclosure;

FIG. 4 is a schematic structural diagram of a plane estimation device provided by Embodiment 4 of the present disclosure;

FIG. 5 is a schematic structural diagram of an electronic device provided by Embodiment 5 of the present disclosure.
Detailed Description

Embodiments of the present disclosure will be described below with reference to the accompanying drawings. Although several embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and is not limited to the embodiments set forth herein. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the protection scope of the present disclosure.

It should be understood that the multiple steps described in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit performance of the steps shown. The scope of the present disclosure is not limited in this respect.

As used herein, the term "comprise" and its variants are open-ended, i.e., "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.

It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different devices, modules, or units, and are not intended to limit the order of, or the interdependence between, the functions performed by these devices, modules, or units.

It should be noted that the modifiers "a/an" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
Embodiment One
FIG. 1 is a schematic flowchart of a plane estimation method according to Embodiment One of the present disclosure. The embodiments of the present disclosure are applicable to the case of performing plane estimation on multiple frames of images, for example, to the case of estimating multiple planes across multiple video frames of a video. The method may be performed by a plane estimation device, which may be implemented in the form of software and/or hardware, and which may be configured in an electronic device, for example, in a computer.
As shown in FIG. 1, the plane estimation method provided by this embodiment comprises:
S110. Acquire multiple video frames of a target video, extract feature points in each video frame, and determine position information, in the multiple video frames, of multiple identical feature points among the feature points in the multiple video frames.
The target video may be regarded as a video for which the plane information contained therein needs to be determined. The target video may be obtained from a preset storage location, or a video captured in real time may be used as the target video.
Ways of acquiring the multiple video frames of the target video may include, but are not limited to: parsing every frame of the target video with an open-source program such as ffmpeg to obtain the video frames; or extracting a video frame of the target video at predetermined intervals through a pre-written Java program; and the like.
Feature points may be extracted directly from the acquired video frames, or the video frames may be filtered first and feature points then extracted from the filtered video frames. Video frame filtering may be based on the similarity of the image content of the video frames (for example, for adjacent video frames whose content similarity is greater than a preset value, only one of them is kept); or based on the time interval between video frames (for example, only one video frame is kept within a certain time interval); or based on other criteria, which may be chosen according to the application scenario and are not exhaustively listed here. By filtering the video frames, plane estimation efficiency can be ensured while maintaining the plane estimation effect.
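For illustration only, a minimal sketch of frame acquisition with interval-based filtering, assuming OpenCV's Python bindings (cv2), may read as follows; the function name sample_frames and the default stride are illustrative assumptions rather than part of the original application:

    import cv2

    def sample_frames(video_path, stride=5):
        """Keep one video frame out of every `stride` frames of the target video."""
        cap = cv2.VideoCapture(video_path)
        frames, index = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:                     # end of the target video
                break
            if index % stride == 0:        # interval-based frame filtering
                frames.append(frame)
            index += 1
        cap.release()
        return frames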
The feature points in a video frame may include, but are not limited to, corner points and/or local pixel feature points (for example, pixel maximum points, pixel minimum points, and the like), and the kinds of feature points to be extracted may be set according to the application scenario. Different kinds of feature points may be extracted with different extraction algorithms. For example, when the feature points are corner points, the Harris corner detection algorithm may be used for corner extraction; when the feature points are local pixel feature points, the Difference of Gaussian (DoG) operator may be used to extract local pixel features; no exhaustive list is given here.
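For illustration only, a minimal sketch of corner extraction, assuming OpenCV's goodFeaturesToTrack with its Harris detector enabled, may read as follows; the parameter values are illustrative assumptions:

    import cv2

    def extract_corner_points(frame, max_corners=500):
        """Detect corner feature points in one video frame using the Harris measure."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # goodFeaturesToTrack with useHarrisDetector=True scores candidates with
        # the Harris corner response and returns their pixel coordinates
        corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                          qualityLevel=0.01, minDistance=8,
                                          useHarrisDetector=True, k=0.04)
        return corners   # shape (N, 1, 2), one (x, y) per feature point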
The position information of a feature point in a video frame may be regarded as the pixel coordinates of the feature point in the video frame. After the feature points of each video frame are determined, the position of the same feature point in each video frame may be tracked. In this way, the position information of each identical feature point in each video frame can be determined.
The position of the same feature point in each video frame may be tracked based on an optical flow algorithm, for example the Kanade-Lucas-Tomasi (KLT) tracking method. Alternatively, feature point matching may be performed based on the Oriented FAST and Rotated BRIEF (ORB) feature principle, so as to determine the position information of the same feature point in each video frame; no exhaustive list is given here.
In some optional manners, a track may be generated for each identical feature point to characterize the position information of that feature point in each video frame. Exemplarily, when there are n video frames, the track corresponding to feature point 1 may be [x_{11}, y_{11}, x_{21}, y_{21}, x_{31}, y_{31}, ..., x_{n1}, y_{n1}], where x denotes the pixel abscissa and y denotes the pixel ordinate; in the subscripts of x and y, the first number denotes the frame index of the video frame and the second number denotes the feature point index; for example, x_{21} denotes the pixel abscissa of feature point 1 in the second video frame.
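For illustration only, a minimal sketch of building such tracks with the KLT optical-flow tracker (cv2.calcOpticalFlowPyrLK), assuming OpenCV and NumPy, may read as follows; feature points whose tracking is lost in any frame are dropped so that every returned row is a complete track:

    import cv2
    import numpy as np

    def build_tracks(frames, initial_points):
        """Follow the same feature points through every frame with the KLT tracker;
        row i of the result is the track of feature point i:
        [x_{1i}, y_{1i}, x_{2i}, y_{2i}, ..., x_{ni}, y_{ni}]."""
        gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
        pts = initial_points.reshape(-1, 1, 2).astype(np.float32)
        columns = [pts.reshape(-1, 2)]
        alive = np.ones(len(pts), dtype=bool)
        for prev, curr in zip(gray[:-1], gray[1:]):
            pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)
            alive &= status.ravel() == 1           # drop points that were lost
            columns.append(pts.reshape(-1, 2))
        tracks = np.concatenate(columns, axis=1)   # shape (N, 2 * n_frames)
        return tracks[alive]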
S120. Determine, based on a random sample consensus algorithm and according to the position information of the multiple identical feature points in the multiple video frames, a homography matrix of a plane between two adjacent video frames.
The random sample consensus algorithm may be regarded as an algorithm that computes, from a sample data set containing both normal data and abnormal data, the parameters of a mathematical model that fits the normal data. In the embodiments of the present disclosure, for the case of determining the homography matrix of one plane between two adjacent video frames, the tracks of the feature points may serve as the sample data set; the tracks of the feature points that belong to the plane may serve as the normal data, and the tracks of the feature points that do not belong to the plane may serve as the abnormal data; and the homography matrices of the plane between every two adjacent video frames may serve as the computed mathematical model parameters that fit the normal data.
Exemplarily, when there are n video frames, the mathematical model parameters corresponding to plane 1 may include the homography matrix of the plane between the first and second video frames, the homography matrix of the plane between the second and third video frames, ..., and the homography matrix of the plane between the (n-1)-th and n-th video frames. These n-1 homography matrices may serve as the mathematical model parameters of plane 1 that fit the normal data.
The homography matrix of a plane may be regarded as the perspective transformation matrix of the plane and may be used to express the perspective transformation of the plane from one view to another. The homography matrix may encode the intrinsic parameters (for example, camera focal length, lens distortion, and the like) and extrinsic parameters (for example, rotation matrix, translation matrix) of the camera used to capture the video. The parameters in the homography matrix of a plane between two adjacent video frames can be computed from the position information of the same feature points in the two adjacent video frames. It may be considered that, based on the position information, contained in the tracks, of the feature points in each video frame, the homography matrix of the plane between every two adjacent video frames can be computed.
In the embodiments of the present disclosure, the position information of each identical feature point in each video frame may be used as the input of the random sample consensus algorithm. The feature points are randomly sampled based on the random sample consensus algorithm so as to determine the feature points belonging to one plane, and, based on the tracks of the determined feature points belonging to the plane, the homography matrix of the plane between every two adjacent video frames is output. By performing random sample consensus directly on the whole video to obtain the homography matrices of the plane, a foundation is laid for determining the parameter information of the plane.
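For illustration only, a simplified sketch of estimating the per-pair homographies from the tracks, assuming OpenCV's findHomography with its built-in RANSAC scheme, may read as follows; note that this variant runs RANSAC independently for each pair of adjacent frames, whereas the joint multi-frame sampling scheme of this disclosure is sketched under Embodiment Two below:

    import cv2

    def pairwise_plane_homographies(tracks):
        """Estimate, for every pair of adjacent video frames, the homography of
        the dominant plane from the positions of the same feature points;
        `tracks` is the float32 array produced by build_tracks above."""
        n_frames = tracks.shape[1] // 2
        homographies = []
        for i in range(n_frames - 1):
            src = tracks[:, 2 * i:2 * i + 2]        # positions in frame i
            dst = tracks[:, 2 * i + 2:2 * i + 4]    # positions in frame i+1
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            homographies.append(H)
        return homographies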
S130. Determine parameter information of the plane according to the homography matrix of the plane between the two adjacent video frames.
The parameter information of the plane may be regarded as the parameters in the expression of the plane in a spatial coordinate system. Exemplarily, if the expression of plane 1 in the spatial coordinate system is Ax + By + Cz + D = 0, then A, B, C, and D may be regarded as the parameter information of the plane.
In the embodiments of the present disclosure, the homography matrix of the plane between two adjacent video frames may be decomposed to obtain the camera intrinsic and extrinsic parameters encoded therein. Then, from the position information, in the video frames, of the feature points belonging to the plane, together with the camera intrinsic and extrinsic parameters, the parameters of the expression of the plane in the spatial coordinate system can be obtained, thereby obtaining the parameter information of the plane.
The homography matrix of the plane between two adjacent video frames may be decomposed in a variety of ways, including, for example: the Faugeras (1988) method used in Oriented FAST and Rotated BRIEF-Simultaneous Localization And Mapping 2 (ORB-SLAM2); or the decomposeHomographyMat function in OpenCV, which may be implemented with the INRIA (2007) method; no exhaustive list is given here.
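For illustration only, a minimal sketch of the OpenCV-based decomposition, assuming a known camera intrinsic matrix K (the values below are illustrative, not calibrated), may read as follows:

    import cv2
    import numpy as np

    # Illustrative camera intrinsic matrix; real values come from calibration.
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    def decompose(H):
        """Decompose a homography between two adjacent video frames into
        candidate rotations, translations, and plane normals."""
        num, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
        # `num` candidate solutions are returned; the physically valid one is
        # selected, e.g., by checking that tracked points lie in front of the camera.
        return rotations, translations, normals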
By performing the random sample consensus algorithm directly in the multi-video-frame dimension to obtain the parameter information of the planes in the multiple video frames, the error introduced by using an SfM algorithm can be avoided. Moreover, since multiple video frames contain richer plane information, performing plane estimation based on the video frames not only makes it possible to determine the parameter information of multiple planes quickly and conveniently, but also yields a better plane estimation effect.
In some optional implementations, after determining the parameter information of the plane, the method further includes: determining position and posture information of a virtual object according to the parameter information of the plane; and displaying, according to the position and posture information, the virtual object in association with the plane in a preset video frame.
The virtual object may be, for example, the visual appearance of a virtual artificial intelligence (AI) agent, or an interactive virtual control, and the like. The position and posture information of the virtual object may include, but is not limited to, information such as the position and rotation angle of the virtual object in the spatial coordinate system. The preset video frame may be, for example, a video frame containing the plane to be displayed in association, or a video frame within a preset time period, and the like. The manner of associated display may include, but is not limited to, displaying the virtual object below the plane, hovering above the plane, and other associated display manners.
After the parameter information of the plane is determined, information such as the position and rotation angle of the virtual object in the spatial coordinate system may further be determined according to the expression of the plane in the spatial coordinate system and the manner in which the virtual object and the plane are displayed in association. For example, if the associated display manner is to display the virtual object on a plane, then in the spatial coordinate system the virtual object may simply be displaced vertically upward by a certain value on the basis of the position and rotation angle corresponding to that plane.
After the position and posture information of the virtual object is determined, the pixel position, rotation angle, and other information of the virtual object in the preset video frame may further be determined from the position and rotation angle of the virtual object in the spatial coordinate system, together with the transformation between the spatial coordinate system of the plane and the pixel coordinate system of the preset video frame. The virtual object may then be rendered according to its pixel position, rotation angle, and other information in the preset video frame. Since the position and posture information of the virtual object in the spatial coordinate system is determined according to the associated display manner of the virtual object and the plane, rendering the virtual object realizes the associated display of the virtual object and the plane in the preset video frame.
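For illustration only, a minimal sketch of placing a virtual object above an estimated plane and projecting it into a preset video frame, assuming OpenCV, a known camera pose (rvec, tvec), and the intrinsic matrix K, may read as follows; the anchor point and offset are illustrative assumptions:

    import cv2
    import numpy as np

    def place_object_above_plane(plane, anchor, offset, rvec, tvec, K):
        """Lift an anchor point lying on the plane Ax + By + Cz + D = 0 by
        `offset` along the plane normal, then project the resulting virtual
        object position into the preset video frame."""
        A, B, C, D = plane
        normal = np.array([A, B, C], dtype=np.float64)
        normal /= np.linalg.norm(normal)               # unit plane normal
        position = np.asarray(anchor, np.float64) + offset * normal
        pixel, _ = cv2.projectPoints(position.reshape(1, 1, 3),
                                     rvec, tvec, K, None)
        return position, pixel.reshape(2)              # 3D position and pixel position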
In these optional implementations, the planes determined to be contained in the video can be applied to rendering virtual objects, so that augmented reality can be realized on the video frames to improve user experience.
In the technical solution of the embodiments of the present disclosure, multiple video frames of a target video are acquired, feature points in each video frame are extracted, and the position information of multiple identical feature points in the multiple video frames is determined; based on a random sample consensus algorithm, the homography matrix of a plane between two adjacent video frames is determined according to the position information of the multiple identical feature points in the two adjacent video frames; and the parameter information of the plane is determined according to the homography matrix of the plane between the two adjacent video frames.
By performing the random sample consensus algorithm directly in the multi-video-frame dimension to obtain the parameter information of the planes in the multiple video frames, the error introduced by using an SfM algorithm can be avoided. Moreover, since multiple video frames contain richer plane information, performing plane estimation based on the video frames not only makes it possible to determine the parameter information of multiple planes quickly and conveniently, but also yields a better plane estimation effect.
Embodiment Two
The embodiments of the present disclosure may be combined with the various optional solutions of the plane estimation method provided in the above embodiments. The plane estimation method provided by this embodiment describes the steps of determining the homography matrix of a plane between two adjacent video frames. By cyclically sampling a preset number of first feature points to estimate initial homography matrices, and determining the number of inliers among the second feature points according to the initial homography matrices, the case with the largest number of inliers can be found iteratively; the homography matrix of the plane to which these inliers belong can then be determined from the position information of the multiple inliers in the case with the largest number of inliers. Random sample consensus can thus be performed directly on the video to determine the parameter information of multiple planes in multiple frames of images quickly and conveniently.
FIG. 2 is a schematic flowchart of a plane estimation method according to Embodiment Two of the present disclosure. As shown in FIG. 2, the plane estimation method provided by this embodiment comprises:
S210. Acquire multiple video frames of a target video, extract feature points in each video frame, and determine position information of multiple identical feature points in the multiple video frames.
S220. Cyclically sample a preset number of identical feature points from the multiple identical feature points as first feature points, and take the identical feature points that have not been sampled as second feature points.
Determining, based on the random sample consensus algorithm and according to the position information of the multiple identical feature points in the multiple video frames, the homography matrix of the plane between two adjacent video frames may include steps S220 to S250. Since a homography matrix has 8 degrees of freedom, the preset number needs to be at least 4 in order to determine the 8 unknown parameters of the homography matrix. It may be considered that, in each loop iteration, a preset number of first feature points can be randomly sampled.
S230. Determine, according to the position information of the preset number of first feature points in every two adjacent video frames, an initial homography matrix of the plane between the every two adjacent video frames.
The position information of the preset number of first feature points in every two adjacent video frames may be determined from the tracks of the preset number of first feature points. Then, the initial homography matrix of the plane between every two adjacent video frames may be determined according to the position information of the preset number of first feature points in those two adjacent video frames.
S240. Judge, according to the position information of each second feature point in every two adjacent video frames and each initial homography matrix, whether the each second feature point matches each initial homography matrix.
After a group of initial homography matrices is determined, each second feature point may be used to verify whether the group of initial homography matrices is optimal. For example, if, in the track of a second feature point, the position information of that second feature point in every two adjacent video frames conforms to the mapping relationship characterized by the group of initial homography matrices, then the second feature point may be considered to match each initial homography matrix in the group. The larger the number of second feature points that match each initial homography matrix, the better the group of initial homography matrices may be considered to be, that is, the more correctly the group of initial homography matrices may be considered to characterize the projection relationship of one plane across the multiple video frames.
In some optional implementations, judging, according to the position information of each second feature point in every two adjacent video frames and each initial homography matrix, whether the each second feature point matches each initial homography matrix includes: determining, according to the position information of each second feature point in the earlier-ordered video frame of every two adjacent video frames and the initial homography matrix of the plane between the every two adjacent video frames, reprojected position information of the each second feature point in the later-ordered video frame of the every two adjacent video frames; determining, according to the position information and the reprojected position information of the each second feature point in the later-ordered video frame of the every two adjacent video frames, a reprojection error of the each second feature point between the every two adjacent video frames; and judging, according to the reprojection errors of the each second feature point in every two adjacent video frames, whether the each second feature point matches each initial homography matrix.
First, the position information of a second feature point in the earlier-ordered video frame of two adjacent video frames may be substituted into the initial homography matrix of the plane between the two adjacent video frames, to obtain the reprojected position information of the second feature point in the later-ordered video frame of the two adjacent video frames. Second, the reprojection error of the second feature point between the two adjacent video frames may be determined from the pixel distance (for example, the Euclidean distance between the pixel coordinates) between the position information and the reprojected position information of the second feature point in the later-ordered video frame of the two adjacent video frames; by analogy, the reprojection errors of the second feature point between all pairs of adjacent video frames are obtained. Third, a final error value may be determined from the multiple reprojection errors by computing the average reprojection error, by determining the median reprojection error, or by selecting the largest/smallest of the multiple reprojection errors. Finally, whether the second feature point matches each initial homography matrix may be judged from the final error value; for example, if the final error value is less than a set value, it is judged that the second feature point matches each initial homography matrix.
Exemplarily, suppose there are n video frames, the initial homography matrix of the plane between the first and second video frames is H_1, ..., and the initial homography matrix of the plane between the (n-1)-th and n-th video frames is H_{n-1}, so that n-1 initial homography matrices are determined in total. Feature point 1 belongs to the second feature points, and the track corresponding to feature point 1 may be [x_{11}, y_{11}, x_{21}, y_{21}, x_{31}, y_{31}, ..., x_{n1}, y_{n1}].
Then, judging whether feature point 1 matches the initial homography matrices H_1 to H_{n-1} may include: for H_1, substituting the position information x_{11}, y_{11} of feature point 1 in the first video frame into H_1 to obtain the reprojected position information x_{21}', y_{21}' of feature point 1 in the second video frame; computing the reprojection error w_1 between x_{21}, y_{21} and x_{21}', y_{21}' (for example, the Euclidean distance between the two points); computing, by analogy with the above steps, the reprojection errors w_2 to w_{n-1} corresponding to H_2 to H_{n-1}; and determining, from w_1 to w_{n-1}, whether feature point 1 matches the initial homography matrices H_1 to H_{n-1}.
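For illustration only, a minimal sketch of computing the reprojection errors w_1 to w_{n-1} of one track against the initial homography matrices, assuming NumPy and the track layout described above, may read as follows:

    import numpy as np

    def track_reprojection_errors(track, homographies):
        """Reprojection errors of one second feature point against the initial
        homographies H_1 ... H_{n-1}; `track` is [x_1, y_1, ..., x_n, y_n]."""
        errors = []
        for i, H in enumerate(homographies):
            p = np.array([track[2 * i], track[2 * i + 1], 1.0])  # frame i, homogeneous
            q = H @ p
            q = q[:2] / q[2]                       # reprojected position in frame i+1
            observed = np.asarray(track[2 * i + 2:2 * i + 4])
            errors.append(np.linalg.norm(q - observed))   # Euclidean pixel distance
        return errors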
Judging, according to the reprojection errors of each second feature point between every two adjacent video frames, whether the each second feature point matches each initial homography matrix includes: determining an average reprojection error according to the reprojection errors of the each second feature point between every two adjacent video frames; and judging, according to the average reprojection error and a preset threshold, whether the each second feature point matches each initial homography matrix. For example, when the average reprojection error is less than the preset threshold, it is judged that the second feature point matches each initial homography matrix. The preset threshold may be set according to experimental or empirical values.
In these optional implementations, by determining multiple reprojection errors from the track of a second feature point and the multiple initial homography matrices, it becomes possible to judge, from the multiple reprojection errors, whether the second feature point matches each initial homography matrix.
S250. If each second feature point matches each initial homography matrix, take the each second feature point as an inlier; stop the cyclic sampling when the number of loop iterations reaches a preset number of times, and determine, among the multiple loop iterations, the target loop iteration with the largest number of inliers.
In this embodiment, the randomly sampled preset number of first feature points may be feature points belonging to the same plane, or feature points not belonging to the same plane. Generally, when the randomly sampled preset number of first feature points belong to the same plane, each second feature point on that plane can match each initial homography matrix; when the randomly sampled preset number of first feature points do not belong to the same plane, essentially only a small number of second feature points can match each initial homography matrix.
The number of loop iterations may be preset according to empirical or experimental values. By performing the preset number of loop iterations, in each of which a preset number of first feature points are sampled, multiple initial homography matrices of a plane are determined, and the number of second feature points serving as inliers is determined, the initial homography matrices of the plane between two adjacent video frames can be determined based on the position information, in the two adjacent video frames, of all the inliers in the loop iteration with the largest number of inliers, so that the optimal solution of the initial homography matrices can be obtained.
S260. Determine the homography matrix of the plane between two adjacent video frames according to the position information, in the two adjacent video frames, of all the inliers determined in the target loop iteration.
In this embodiment, after the optimal solution of each initial homography matrix is determined, the tracks of all the inliers may be used jointly to optimize each initial homography matrix, obtaining the final homography matrix of the plane between two adjacent video frames.
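For illustration only, a minimal sketch of the joint multi-frame loop described in S220 to S260, assuming NumPy and OpenCV and reusing the track_reprojection_errors helper sketched above, may read as follows; the iteration count and inlier threshold are illustrative assumptions:

    import cv2
    import numpy as np

    def fit_pairwise(sub_tracks):
        """One least-squares homography per pair of adjacent frames."""
        n_frames = sub_tracks.shape[1] // 2
        return [cv2.findHomography(sub_tracks[:, 2 * i:2 * i + 2],
                                   sub_tracks[:, 2 * i + 2:2 * i + 4], 0)[0]
                for i in range(n_frames - 1)]

    def multi_frame_ransac(tracks, iterations=200, threshold=3.0):
        """Sample 4 first feature points per loop, fit the initial homographies
        of every adjacent pair, count the second feature points whose average
        reprojection error is below the threshold, keep the loop with the most
        inliers, and refit jointly on all of its inliers."""
        n = tracks.shape[0]
        best_inliers = np.zeros(n, dtype=bool)
        for _ in range(iterations):
            sample = np.random.choice(n, 4, replace=False)
            initial_hs = fit_pairwise(tracks[sample])
            if any(H is None for H in initial_hs):
                continue                            # degenerate sample, resample
            errors = np.array([np.mean(track_reprojection_errors(t, initial_hs))
                               for t in tracks])
            inliers = errors < threshold
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        if best_inliers.sum() < 4:
            return None, best_inliers               # no plane supported by enough points
        return fit_pairwise(tracks[best_inliers]), best_inliers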
S270. Determine parameter information of the plane according to the homography matrix of the plane between the two adjacent video frames.
The technical solution of the embodiments of the present disclosure describes the steps of determining the homography matrices of multiple planes between adjacent video frames. By cyclically sampling a preset number of first feature points to estimate initial homography matrices, and determining the number of inliers among the second feature points according to the initial homography matrices, the case with the largest number of inliers can be found iteratively; the homography matrix of the plane to which these inliers belong can then be determined from the position information of each inlier in the case with the largest number of inliers. Random sample consensus can thus be performed directly on the video to determine the plane parameter information in multiple frames of images quickly and conveniently.
In addition, the plane estimation method provided by the embodiments of the present disclosure belongs to the same disclosed concept as the plane estimation method provided in the above embodiments; technical details not exhaustively described in this embodiment may be found in the above embodiments, and the same technical features have the same effects in this embodiment as in the above embodiments.
Embodiment Three
The embodiments of the present disclosure may be combined with the various optional solutions of the plane estimation method provided in the above embodiments. The plane estimation method provided by this embodiment describes the steps of determining the homography matrices of multiple planes in a video. After the parameter information of any one plane in the video is determined, the feature points used to determine that plane can be removed, and random sample consensus can be performed cyclically on the remaining feature points, so that the multiple planes appearing in the video can all be estimated. The parameter information of the multiple planes in the video can thus be determined quickly and conveniently.
FIG. 3 is a schematic flowchart of a plane estimation method according to Embodiment Three of the present disclosure. As shown in FIG. 3, the plane estimation method provided by this embodiment comprises:
S310. Acquire multiple video frames of a target video, extract feature points in each video frame, and determine position information of multiple identical feature points in the multiple video frames.
S320. Determine, based on a random sample consensus algorithm and according to the position information of the multiple identical feature points in the multiple video frames, a homography matrix of a current plane between two adjacent video frames.
The homography matrix, between two adjacent video frames, of the one plane currently being determined may be taken as the homography matrix of the current plane between the two adjacent video frames. The method of determining the homography matrix of the current plane between two adjacent video frames may be the same as the method of determining the homography matrix of any plane between two adjacent video frames; reference may be made to the above description, and details are not repeated here.
S330. Cyclically remove, from the multiple identical feature points, the feature points used to determine the homography matrix of the current plane, to obtain remaining feature points.
In this embodiment, since the video may contain the plane information of multiple planes, after the homography matrix of the current plane between two adjacent video frames is determined, the feature points used to determine the homography matrix of the current plane may further be removed, and the random sample consensus algorithm may continue to be executed to determine the parameter information of other planes.
S340. Determine, based on the random sample consensus algorithm and according to the position information of the remaining feature points in two adjacent video frames, the homography matrix of a plane other than the current plane in the two adjacent video frames, and stop the cyclic removal when the number of remaining feature points is less than a preset number.
In this embodiment, the preset number may be set according to experimental or empirical values. When the number of remaining feature points is less than the preset number, it may be considered that the parameter information of all the planes contained in the video that can be evaluated has been estimated. At this point, the removal of the feature points used to determine the homography matrix of the current plane can be stopped, and the execution of the random sample consensus algorithm can be stopped.
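For illustration only, a minimal sketch of this cyclic multi-plane estimation, reusing the hypothetical multi_frame_ransac helper sketched under Embodiment Two, may read as follows; the preset number is an illustrative assumption:

    def estimate_all_planes(tracks, preset_number=20):
        """Run the joint RANSAC, remove the feature points that determined the
        current plane, and repeat on the remaining feature points until fewer
        than `preset_number` tracks are left."""
        planes = []
        remaining = tracks
        while remaining.shape[0] >= preset_number:
            final_hs, inliers = multi_frame_ransac(remaining)
            if final_hs is None:
                break                              # no further plane is supported
            planes.append(final_hs)
            remaining = remaining[~inliers]        # cyclic removal of used feature points
        return planes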
S350. Determine the parameter information of each plane according to the homography matrix of the each plane between two adjacent video frames.
In this embodiment, the parameter information of a plane may be determined each time the homography matrix of that plane between two adjacent video frames is determined; alternatively, the parameter information of each plane may be determined after the homography matrices of all the planes between two adjacent video frames have been determined.
The technical solution of the embodiments of the present disclosure describes the steps of determining the homography matrices of multiple planes in a video. After the parameter information of any one plane in the video is determined, the feature points used to determine that plane can be removed, and random sample consensus can be performed cyclically on the remaining feature points, so that the multiple planes appearing in the video can all be estimated. The parameter information of the multiple planes in the video can thus be determined quickly and conveniently.
In addition, the plane estimation method provided by the embodiments of the present disclosure belongs to the same disclosed concept as the plane estimation method provided in the above embodiments; technical details not exhaustively described in this embodiment may be found in the above embodiments, and the same technical features have the same effects in this embodiment as in the above embodiments.
Embodiment Four
FIG. 4 is a schematic structural diagram of a plane estimation device according to Embodiment Four of the present disclosure. The plane estimation device provided by this embodiment is applicable to the case of performing plane estimation on multiple frames of images, for example, to the case of estimating multiple planes in a video.
As shown in FIG. 4, the plane estimation device provided by this embodiment comprises: a position information determination module 410, configured to acquire multiple video frames of a target video, extract feature points in each video frame, and determine position information, in the multiple video frames, of multiple identical feature points among the feature points in the multiple video frames; a homography matrix determination module 420, configured to determine, based on a random sample consensus algorithm and according to the position information of the multiple identical feature points in the multiple video frames, a homography matrix of a plane between two adjacent video frames; and a plane parameter determination module 430, configured to determine parameter information of the plane according to the homography matrix of the plane between the two adjacent video frames.
In some optional implementations, the homography matrix determination module includes: a sampling unit, configured to cyclically sample a preset number of feature points from the multiple identical feature points as first feature points and take the feature points that have not been sampled as second feature points; an initial matrix determination unit, configured to determine, according to the position information of the preset number of first feature points in every two adjacent video frames, an initial homography matrix of the plane between the every two adjacent video frames; a judging unit, configured to judge, according to the position information of each second feature point in every two adjacent video frames and each initial homography matrix, whether the each second feature point matches each initial homography matrix; an inlier determination unit, configured to take, in response to the result that the each second feature point matches each initial homography matrix, the each second feature point as an inlier, stop the cyclic sampling when the number of loop iterations reaches a preset number of times, and determine the target loop iteration with the largest number of inliers among the preset number of loop iterations; and a final matrix determination unit, configured to determine the homography matrix of the plane between every two adjacent video frames according to the position information, in the every two adjacent video frames, of all the inliers determined in the target loop iteration.
In some optional implementations, the judging unit includes: a reprojection subunit, configured to determine, according to the position information of each second feature point in the earlier-ordered video frame of every two adjacent video frames and the initial homography matrix of the plane between the every two adjacent video frames, reprojected position information of the each second feature point in the later-ordered video frame of the every two adjacent video frames; an error determination subunit, configured to determine, according to the position information and the reprojected position information of the each second feature point in the later-ordered video frame of the every two adjacent video frames, a reprojection error of the each second feature point between the every two adjacent video frames; and a judging subunit, configured to judge, according to the reprojection errors of the each second feature point in every two adjacent video frames, whether the each second feature point matches each initial homography matrix.
In some optional implementations, the judging subunit is configured to: determine an average reprojection error according to the reprojection errors of each second feature point between every two adjacent video frames; and judge, according to the average reprojection error and a preset threshold, whether the each second feature point matches each initial homography matrix.
In some optional implementations, the homography matrix determination module includes: a removal unit, configured to cyclically remove, after the homography matrix of the current plane between two adjacent video frames is determined, the feature points used to determine the homography matrix of the current plane from the multiple identical feature points, to obtain remaining feature points; and a matrix determination unit, configured to determine, based on the random sample consensus algorithm and according to the position information of the remaining feature points in two adjacent video frames, the homography matrix of a plane other than the current plane in the two adjacent video frames, and stop the cyclic removal when the number of remaining feature points is less than a preset number.
In some optional implementations, the plane estimation device further includes: a virtual display module, configured to determine position and posture information of a virtual object according to the parameter information of the plane, and display, according to the position and posture information, the virtual object in association with the plane in a preset video frame.
The plane estimation device provided by the embodiments of the present disclosure can execute the plane estimation method provided by any embodiment of the present disclosure, and has the functional modules corresponding to the execution of the method.
It is worth noting that the units and modules included in the above device are only divided according to functional logic, but the division is not limited thereto, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not used to limit the scope of protection of the embodiments of the present disclosure.
Embodiment Five
Referring now to FIG. 5, a schematic structural diagram of an electronic device (for example, the terminal device or server in FIG. 5) 500 suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), PADs (tablet computers), portable media players (PMPs), and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals), as well as fixed terminals such as digital televisions (TVs) and desktop computers. The electronic device shown in FIG. 5 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 5, the electronic device 500 may include a processing device (for example, a central processing unit, a graphics processor, and the like) 501, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 506 into a random access memory (RAM) 503. Various programs and data required for the operation of the electronic device 500 are also stored in the RAM 503. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; output devices 507 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; storage devices 508 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 509. The communication device 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 5 shows the electronic device 500 with various devices, it should be understood that it is not required to implement or have all the devices shown. More or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network through the communication device 509, or installed from the storage device 506, or installed from the ROM 502. When the computer program is executed by the processing device 501, the above functions defined in the plane estimation method of the embodiments of the present disclosure are executed.
The electronic device provided by the embodiments of the present disclosure belongs to the same disclosed concept as the plane estimation method provided in the above embodiments; technical details not exhaustively described in this embodiment may be found in the above embodiments, and this embodiment has the same effects as the above embodiments.
Embodiment Six
Embodiments of the present disclosure provide a computer storage medium on which a computer program is stored; when the program is executed by a processor, the plane estimation method provided by the above embodiments is implemented.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM) or flash memory (FLASH), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted with any appropriate medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), and the like, or any suitable combination of the above.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol, such as the Hypertext Transfer Protocol (HTTP), and may be interconnected with digital data communication (for example, a communication network) in any form or medium. Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be contained in the above electronic device; or it may exist separately without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire multiple video frames of a target video, extract feature points in each video frame, and determine position information, in the multiple video frames, of multiple identical feature points among the feature points in the multiple video frames; determine, based on a random sample consensus algorithm and according to the position information of the multiple identical feature points in the multiple video frames, a homography matrix of a plane between two adjacent video frames; and determine parameter information of the plane according to the homography matrix of the plane between the two adjacent video frames.
Computer program code for executing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, the above programming languages including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a LAN or WAN, or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented with a dedicated hardware-based system that performs the specified functions or operations, or may be implemented with a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The names of the units and modules do not, in certain cases, constitute a limitation on the units and modules themselves.
The functions described above herein may be executed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard parts (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and the like.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an EPROM or flash memory, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, [Example 1] provides a plane estimation method, the method comprising: acquiring multiple video frames of a target video, extracting feature points in each video frame, and determining position information, in the multiple video frames, of multiple identical feature points among the feature points in the multiple video frames; determining, based on a random sample consensus algorithm and according to the position information of the multiple identical feature points in the multiple video frames, a homography matrix of a plane between two adjacent video frames; and determining parameter information of the plane according to the homography matrix of the plane between the two adjacent video frames.
According to one or more embodiments of the present disclosure, [Example 2] provides a plane estimation method, further comprising: in some optional implementations, determining, based on the random sample consensus algorithm and according to the position information of the multiple identical feature points in the multiple video frames, the homography matrix of the plane between two adjacent video frames includes: cyclically sampling a preset number of feature points from the multiple identical feature points as first feature points, and taking the feature points that have not been sampled as second feature points; determining, according to the position information of the preset number of first feature points in every two adjacent video frames, an initial homography matrix of the plane between the every two adjacent video frames; judging, according to the position information of each second feature point in every two adjacent video frames and each initial homography matrix, whether the each second feature point matches each initial homography matrix; in response to the result that the each second feature point matches each initial homography matrix, taking the each second feature point as an inlier, stopping the cyclic sampling when the number of loop iterations reaches a preset number of times, and determining the target loop iteration with the largest number of inliers among the preset number of loop iterations; and determining, according to the position information, in every two adjacent video frames, of all the inliers determined in the target loop iteration, the homography matrix of the plane between the every two adjacent video frames.
According to one or more embodiments of the present disclosure, [Example 3] provides a plane estimation method, further comprising: in some optional implementations, judging, according to the position information of each second feature point in every two adjacent video frames and each initial homography matrix, whether the each second feature point matches each initial homography matrix includes: determining, according to the position information of each second feature point in the earlier-ordered video frame of every two adjacent video frames and the initial homography matrix of the plane between the every two adjacent video frames, reprojected position information of the each second feature point in the later-ordered video frame of the every two adjacent video frames; determining, according to the position information and the reprojected position information of the each second feature point in the later-ordered video frame of the every two adjacent video frames, a reprojection error of the each second feature point between the every two adjacent video frames; and judging, according to the reprojection errors of the each second feature point in every two adjacent video frames, whether the each second feature point matches each initial homography matrix.
According to one or more embodiments of the present disclosure, [Example 4] provides a plane estimation method, further comprising: in some optional implementations, judging, according to the reprojection errors of the each second feature point in every two adjacent video frames, whether the each second feature point matches each initial homography matrix includes: determining an average reprojection error according to the reprojection errors of each second feature point between every two adjacent video frames; and judging, according to the average reprojection error and a preset threshold, whether the each second feature point matches each initial homography matrix.
According to one or more embodiments of the present disclosure, [Example 5] provides a plane estimation method, further comprising: in some optional implementations, determining the homography matrix of the plane between two adjacent video frames includes: after determining the homography matrix of the current plane between the two adjacent video frames, cyclically removing, from the multiple identical feature points, the feature points used to determine the homography matrix of the current plane, to obtain remaining feature points; and determining, based on the random sample consensus algorithm and according to the position information of the remaining feature points in the two adjacent video frames, the homography matrix of a plane other than the current plane in the two adjacent video frames, stopping the cyclic removal when the number of remaining feature points is less than a preset number.
According to one or more embodiments of the present disclosure, [Example 6] provides a plane estimation method, further comprising: in some optional implementations, after determining the parameter information of the plane, the method further comprises: determining position and posture information of a virtual object according to the parameter information of the plane; and displaying, according to the position and posture information, the virtual object in association with the plane in a preset video frame.
In addition, although multiple operations are depicted in a particular order, this should not be understood as requiring that these operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although specific implementation details are included in the above discussion, these should not be interpreted as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.

Claims (10)

  1. A plane estimation method, comprising:
    acquiring multiple video frames of a target video, extracting feature points in each video frame, and determining position information, in the multiple video frames, of multiple identical feature points among the feature points in the multiple video frames;
    determining, based on a random sample consensus algorithm and according to the position information of the multiple identical feature points in the multiple video frames, a homography matrix of a plane between two adjacent video frames; and
    determining parameter information of the plane according to the homography matrix of the plane between the two adjacent video frames.
  2. The method according to claim 1, wherein determining, based on the random sample consensus algorithm and according to the position information of the multiple identical feature points in the multiple video frames, the homography matrix of the plane between two adjacent video frames comprises:
    cyclically sampling a preset number of feature points from the multiple identical feature points as first feature points, and taking the feature points that have not been sampled as second feature points;
    determining, according to the position information of the preset number of first feature points in every two adjacent video frames, an initial homography matrix of the plane between the every two adjacent video frames;
    judging, according to the position information of each second feature point in every two adjacent video frames and each initial homography matrix, whether the each second feature point matches each initial homography matrix;
    in response to the result that the each second feature point matches each initial homography matrix, taking the each second feature point as an inlier, stopping the cyclic sampling when the number of loop iterations reaches a preset number of times, and determining the target loop iteration with the largest number of inliers among the preset number of loop iterations; and
    determining, according to the position information, in every two adjacent video frames, of all the inliers determined in the target loop iteration, the homography matrix of the plane between the every two adjacent video frames.
  3. The method according to claim 2, wherein judging, according to the position information of each second feature point in every two adjacent video frames and the each initial homography matrix, whether the each second feature point matches each initial homography matrix comprises:
    determining, according to the position information of each second feature point in the earlier-ordered video frame of every two adjacent video frames and the initial homography matrix of the plane between the every two adjacent video frames, reprojected position information of the each second feature point in the later-ordered video frame of the every two adjacent video frames;
    determining, according to the position information and the reprojected position information of the each second feature point in the later-ordered video frame of the every two adjacent video frames, a reprojection error of the each second feature point between the every two adjacent video frames; and
    judging, according to the reprojection errors of the each second feature point in every two adjacent video frames, whether the each second feature point matches each initial homography matrix.
  4. The method according to claim 3, wherein judging, according to the reprojection errors of the each second feature point in every two adjacent video frames, whether the each second feature point matches each initial homography matrix comprises:
    determining an average reprojection error according to the reprojection errors of each second feature point between every two adjacent video frames; and
    judging, according to the average reprojection error and a preset threshold, whether the each second feature point matches each initial homography matrix.
  5. The method according to claim 1, wherein determining the homography matrix of the plane between two adjacent video frames comprises:
    after determining the homography matrix of the current plane between two adjacent video frames, cyclically removing, from the multiple identical feature points, the feature points used to determine the homography matrix of the current plane, to obtain remaining feature points; and
    determining, based on the random sample consensus algorithm and according to the position information of the remaining feature points in two adjacent video frames, the homography matrix of a plane other than the current plane in the two adjacent video frames, and stopping the cyclic removal when the number of remaining feature points is less than a preset number.
  6. The method according to any one of claims 1 to 5, further comprising, after determining the parameter information of the plane:
    determining position and posture information of a virtual object according to the parameter information of the plane; and
    displaying, according to the position and posture information, the virtual object in association with the plane in a preset video frame.
  7. A plane estimation device, comprising:
    a position information determination module, configured to acquire multiple video frames of a target video, extract feature points in each video frame, and determine position information, in the multiple video frames, of multiple identical feature points among the feature points in the multiple video frames;
    a homography matrix determination module, configured to determine, based on a random sample consensus algorithm and according to the position information of the multiple identical feature points in the multiple video frames, a homography matrix of a plane between two adjacent video frames; and
    a plane parameter determination module, configured to determine parameter information of the plane according to the homography matrix of the plane between the two adjacent video frames.
  8. The device according to claim 7, wherein the homography matrix determination module comprises:
    a sampling unit, configured to cyclically sample a preset number of feature points from the multiple identical feature points as first feature points and take the feature points that have not been sampled as second feature points;
    an initial matrix determination unit, configured to determine, according to the position information of the preset number of first feature points in every two adjacent video frames, an initial homography matrix of the plane between the every two adjacent video frames;
    a judging unit, configured to judge, according to the position information of each second feature point in every two adjacent video frames and each initial homography matrix, whether the each second feature point matches each initial homography matrix;
    an inlier determination unit, configured to take, in response to the result that the each second feature point matches each initial homography matrix, the each second feature point as an inlier, stop the cyclic sampling when the number of loop iterations reaches a preset number of times, and determine the target loop iteration with the largest number of inliers among the preset number of loop iterations; and
    a final matrix determination unit, configured to determine, according to the position information, in every two adjacent video frames, of all the inliers determined in the target loop iteration, the homography matrix of the plane between the every two adjacent video frames.
  9. An electronic device, comprising:
    at least one processor; and
    a storage device configured to store at least one program,
    wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the plane estimation method according to any one of claims 1 to 6.
  10. A storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the plane estimation method according to any one of claims 1 to 6.
PCT/CN2022/099337 2021-07-12 2022-06-17 Plane estimation method, device, electronic equipment, and storage medium WO2023284479A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110784263.6A CN115619818A (zh) 2021-07-12 2021-07-12 Plane estimation method, device, electronic equipment, and storage medium
CN202110784263.6 2021-07-12

Publications (1)

Publication Number Publication Date
WO2023284479A1 WO2023284479A1 (zh)

Family

ID=84855936

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/099337 WO2023284479A1 (zh) 2021-07-12 2022-06-17 Plane estimation method, device, electronic equipment, and storage medium

Country Status (2)

Country Link
CN (1) CN115619818A (zh)
WO (1) WO2023284479A1 (zh)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075593A1 (en) * 2016-09-15 2018-03-15 Qualcomm Incorporated Automatic scene calibration method for video analytics
CN109462748A (zh) * 2018-12-21 2019-03-12 Fuzhou University Stereoscopic video color correction algorithm based on homography matrix
CN110276751A (zh) * 2019-06-17 2019-09-24 Beijing Bytedance Network Technology Co., Ltd. Method and apparatus for determining image parameters, electronic device, and computer-readable storage medium
CN112598714A (zh) * 2021-03-04 2021-04-02 Guangzhou Xuanwu Wireless Technology Co., Ltd. Stationary target tracking method based on homography transformation of video frames

Also Published As

Publication number Publication date
CN115619818A (zh) 2023-01-17

Similar Documents

Publication Publication Date Title
WO2021082801A1 (zh) Augmented reality processing method and apparatus, system, storage medium, and electronic device
US11417014B2 (en) Method and apparatus for constructing map
CN112733820B (zh) Obstacle information generation method and apparatus, electronic device, and computer-readable medium
WO2022028254A1 (zh) Positioning model optimization method, positioning method, and positioning device
CN111414879A (zh) Face occlusion degree recognition method and apparatus, electronic device, and readable storage medium
WO2023168955A1 (zh) Picking pose information determination method, apparatus, device, and computer-readable medium
WO2023207379A1 (zh) Image processing method, apparatus, device, and storage medium
WO2022028253A1 (zh) Positioning model optimization method, positioning method, positioning device, and storage medium
CN110717467A (zh) Head pose estimation method, apparatus, device, and storage medium
CN109816791B (zh) Method and apparatus for generating information
WO2023138441A1 (zh) Video generation method, apparatus, device, and storage medium
WO2023284479A1 (zh) Plane estimation method, device, electronic equipment, and storage medium
CN113963000B (zh) Image segmentation method, apparatus, electronic device, and program product
CN115937290A (zh) Image depth estimation method, apparatus, electronic device, and storage medium
CN111915532B (zh) Image tracking method, apparatus, electronic device, and computer-readable medium
CN112308809B (zh) Image synthesis method, apparatus, computer device, and storage medium
CN113168706A (zh) Object position determination in frames of a video stream
WO2022194061A1 (zh) Target tracking method, apparatus, device, and medium
WO2023125360A1 (zh) Image processing method, apparatus, electronic device, and storage medium
CN112668474B (zh) Plane generation method and apparatus, storage medium, and electronic device
CN114049417B (zh) Virtual character image generation method, apparatus, readable medium, and electronic device
WO2023216918A1 (zh) Image rendering method, apparatus, electronic device, and storage medium
WO2024036764A1 (zh) Image processing method, apparatus, device, and medium
CN113808050B (zh) 3D point cloud denoising method, apparatus, device, and storage medium
WO2022194157A1 (zh) Target tracking method, apparatus, device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22841119

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.04.2024)