WO2019119597A1 - 移动终端与镜头组件配合实现平面拍摄、全景拍摄的方法和镜头组件 - Google Patents

移动终端与镜头组件配合实现平面拍摄、全景拍摄的方法和镜头组件 Download PDF

Info

Publication number
WO2019119597A1
WO2019119597A1 (PCT/CN2018/073444)
Authority
WO
WIPO (PCT)
Prior art keywords
mobile terminal
lens
matrix
state
time
Prior art date
Application number
PCT/CN2018/073444
Other languages
English (en)
French (fr)
Inventor
陈聪
姜文杰
刘靖康
彭文学
王亦敏
尹书田
Original Assignee
深圳岚锋创视网络科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201711408354.XA external-priority patent/CN107948488A/zh
Priority claimed from CN201810037000.7A external-priority patent/CN108337411B/zh
Application filed by 深圳岚锋创视网络科技有限公司 filed Critical 深圳岚锋创视网络科技有限公司
Publication of WO2019119597A1 publication Critical patent/WO2019119597A1/zh

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof

Definitions

  • The invention belongs to the field of video shooting, and in particular relates to a method in which a mobile terminal cooperates with a lens assembly to realize plane (flat) shooting and panoramic shooting, and to the lens assembly itself.
  • At present, panoramic pictures and videos are taken with a dedicated panoramic camera composed of multiple lenses. The panoramic camera stores the pictures and videos captured by the individual lenses separately and then exports them to a computer, where they are synthesized into panoramic pictures and videos.
  • The cost of a dedicated panoramic camera is high, and few people buy one specifically for occasional panoramic pictures and videos. It is therefore highly desirable to provide a lens assembly that is low in cost and that can be used with a mobile terminal to achieve wide-angle flat video shooting or panoramic shooting.
  • An object of the present invention is to provide a method in which a mobile terminal cooperates with a lens assembly to realize plane shooting and panoramic shooting, and a corresponding lens assembly, intended to solve the problem that shooting wide-angle flat video with a professional camera fitted with a wide-angle lens, or shooting panoramic images and videos with a dedicated panoramic camera, is costly.
  • In a first aspect, the present invention provides a lens assembly including a mounting bracket, a front lens, and a rear lens. Through holes are provided on both the front and the back of the mounting bracket; the front lens is embedded in the through hole on the front of the mounting bracket and the rear lens is embedded in the through hole on the back of the mounting bracket.
  • The mounting bracket is detachably sleeved over the camera area of the mobile terminal. When the lens assembly is mounted on the mobile terminal, the front lens covers the front camera of the mobile terminal and the rear lens covers the rear camera of the mobile terminal; both the front lens and the rear lens are wide-angle or fisheye lenses.
  • A plane shooting application installed in the mobile terminal controls the front camera or the rear camera, which works together with the front lens or the rear lens to perform flat video shooting; alternatively, a panoramic shooting application installed in the mobile terminal controls the front camera and the rear camera simultaneously, which work together with the front lens and the rear lens to perform panoramic shooting.
  • The mobile terminal is a mobile phone or a tablet; the lens assembly does not block the display screen of the mobile terminal.
  • The optical axis of the front lens is coaxial with, or substantially parallel to, the optical axis of the front camera of the mobile terminal, and the optical axis of the rear lens is coaxial with, or substantially parallel to, the optical axis of the rear camera of the mobile terminal.
  • The front lens is embedded in the through hole on the front of the mounting bracket through a front lens sleeve, and the rear lens is embedded in the through hole on the back of the mounting bracket through a rear lens sleeve.
  • The front lens sleeve and the rear lens sleeve may be integral with the mounting bracket, may each be detachably fixed to the mounting bracket, or may each be fixedly connected to the mounting bracket.
  • the mounting bracket has an annular side wall forming a hollow cavity, the shape of the cavity matching the camera area of the mobile terminal.
  • The front of the mounting bracket is further provided with a through hole corresponding to the position of the speaker of the mobile terminal.
  • In a second aspect, the present invention further provides a method in which a mobile terminal cooperates with a lens assembly to realize plane shooting, the method comprising:
  • the mobile terminal launches a plane shooting application installed in the mobile terminal, the mobile terminal being fitted with the lens assembly described above;
  • the mobile terminal acquires, in real time, the current-state timestamp, accelerometer values, and angular-velocity values of the gyroscope in the mobile terminal;
  • the mobile terminal fuses the accelerometer values and angular-velocity values with an extended Kalman filter to estimate the rotation from the mobile terminal to the world coordinate system;
  • the front camera of the mobile terminal is controlled to capture plane video frames through the front lens of the lens assembly, or the rear camera is controlled to capture plane video frames through the rear lens;
  • the mobile terminal synchronizes the gyroscope timestamps with the timestamps of the plane video frames;
  • the mobile terminal performs quaternion interpolation on the gyroscope states to obtain the rotation matrix corresponding to each plane video frame;
  • the mobile terminal rotates the plane video frame according to the current rotation matrix to generate a stabilized plane video frame.
  • In a third aspect, the present invention further provides a method in which a mobile terminal cooperates with a lens assembly to realize plane shooting, the method comprising:
  • the mobile terminal launches a plane shooting application installed in the mobile terminal, the mobile terminal being fitted with the lens assembly described above;
  • the mobile terminal acquires, in real time, its current-state timestamp, accelerometer values, and angular-velocity values;
  • the mobile terminal fuses the accelerometer values and angular-velocity values with an extended Kalman filter to estimate the rotation vector of the current state;
  • the mobile terminal computes the current rotation matrix from the rotation vector of the current state using the Rodrigues rotation formula;
  • the front camera of the mobile terminal is controlled to capture plane video frames through the front lens of the lens assembly, or the rear camera is controlled to capture plane video frames through the rear lens;
  • the mobile terminal rotates the plane video frame according to the current rotation matrix to generate a stabilized plane video frame.
  • In a fourth aspect, the present invention further provides a method in which a mobile terminal cooperates with a lens assembly to realize panoramic shooting, the method comprising:
  • the mobile terminal launches a panoramic shooting application installed in the mobile terminal, the mobile terminal being fitted with the lens assembly described above;
  • the mobile terminal pre-establishes a spherical panorama mathematical model, which takes the point of central symmetry between the spatial positions of the front camera and the rear camera of the mobile terminal as the sphere center and combines the spherical caps corresponding to the fields of view that the front camera and the rear camera can capture into a sphere;
  • the front camera of the mobile terminal is controlled to capture one image through the front lens of the lens assembly while the rear camera is controlled to capture one image through the rear lens;
  • the two images are mapped into the spherical panorama mathematical model according to the correspondence between the lens imaging height and the field-of-view angle of the front and rear lenses;
  • the overlapping regions of the two images in the spherical panorama mathematical model are fused, thereby stitching the two images into a spherical panoramic image covering all viewing directions in the horizontal and vertical directions.
  • In the present invention, since the lens assembly includes a mounting bracket, a front lens, and a rear lens, when the lens assembly is mounted on the mobile terminal the front lens covers the front camera, the rear lens covers the rear camera, and both lenses are wide-angle or fisheye lenses. The front camera or the rear camera is therefore controlled by the plane shooting application installed in the mobile terminal and, together with the front lens or the rear lens, performs flat video shooting at low cost and without a professional camera.
  • Likewise, the front camera and the rear camera are controlled simultaneously by the panoramic shooting application installed in the mobile terminal and, together with the front lens and the rear lens, perform panoramic shooting at low cost and without a dedicated panoramic camera.
  • Because the gyroscope states are interpolated with quaternions to obtain the rotation matrix of the corresponding plane video frame, a more accurate rotation matrix can be obtained. The plane video frame is then rotated by the current rotation matrix to generate a stabilized frame, so shaky plane video frames can ultimately be stabilized, with strong robustness to high-noise scenes and most motion scenes.
  • In addition, the angle estimated from the accelerometer values is susceptible to interference (such as walking, hiking, or running), and the error accumulated from the angular velocity grows larger and larger over time. Because the rotation vector of the current state is estimated by fusing the accelerometer and angular-velocity values with an extended Kalman filter, and the current rotation matrix is then computed from this rotation vector with the Rodrigues rotation formula before the plane video frame is rotated, shaky plane video frames can ultimately be stabilized.
  • In the method for panoramic shooting of the present invention, the front camera of the mobile terminal can be controlled to capture an image through the front lens of the lens assembly while the rear camera is controlled to capture an image through the rear lens. The two images are then mapped into the spherical panorama mathematical model according to the correspondence between the lens imaging height and the field-of-view angle of the front and rear lenses, and their overlapping regions in the model are fused, thereby stitching the two images into a spherical panoramic image covering all viewing directions in the horizontal and vertical directions.
  • The invention can capture a complete panoramic image in one shot and can record panoramic video, obtaining panoramic images more completely and efficiently while greatly simplifying the user's workflow and reducing the cost of panoramic shooting. In addition, because it is based on a spherical panorama mathematical model, the information-gathering capability of the capture device is maximized, giving it the ability to fully preserve the scene.
  • FIG. 1 is a front elevational view of a lens assembly according to an embodiment of the present invention.
  • FIG. 2 is a rear elevational view of a lens assembly according to an embodiment of the present invention.
  • FIG. 3 is a right side view of a lens assembly according to an embodiment of the present invention.
  • FIG. 4 is a top plan view of a lens assembly according to an embodiment of the present invention.
  • FIG. 5 is a bottom view of a lens assembly according to an embodiment of the present invention.
  • FIG. 6 is a cross-sectional view of a lens assembly according to an embodiment of the present invention.
  • FIG. 7 is a front view of a lens assembly and a mobile terminal according to an embodiment of the present invention.
  • FIG. 8 is a rear view of a lens assembly and a mobile terminal according to an embodiment of the present invention.
  • FIG. 9 is a top plan view of a lens assembly and a mobile terminal according to an embodiment of the present invention.
  • FIG. 10 is a right side view of the lens assembly and the mobile terminal provided by the embodiment of the present invention.
  • FIG. 11 is a schematic diagram of a second lens assembly for panoramic shooting in cooperation with a mobile terminal (iPhone 6, iPhone 6S, iPhone 7, iPhone 8) according to an embodiment of the present invention.
  • FIG. 12 is a schematic diagram of a third lens assembly for panoramic shooting in cooperation with a mobile terminal (iPhone 6 Plus, iPhone 6S Plus, iPhone 7 Plus, iPhone 8 Plus) according to an embodiment of the present invention.
  • FIG. 13 is a schematic diagram of a fourth lens assembly for panoramic shooting in cooperation with a mobile terminal (iPhone X) according to an embodiment of the present invention.
  • FIG. 14 is a flowchart of a method for implementing planar shooting by using a mobile terminal and a lens assembly according to an embodiment of the present invention.
  • FIG. 15 is a flowchart of S103 in a method for implementing plane shooting by using a mobile terminal and a lens assembly according to an embodiment of the present invention.
  • FIG. 16 is a flowchart of a method for implementing planar shooting by a mobile terminal and a lens assembly according to another embodiment of the present invention.
  • FIG. 17 is a flowchart of S203 in a method for implementing planar shooting by a mobile terminal and a lens assembly according to another embodiment of the present invention.
  • FIG. 18 is a flowchart of a method for implementing panoramic shooting by using a mobile terminal and a lens assembly according to an embodiment of the present invention.
  • the lens assembly 100 of the embodiment of the present invention includes a mounting bracket 101 , a front lens 102 , and a rear lens 103 .
  • the front and back sides of the mounting bracket 101 are provided with through holes (not shown).
  • the front lens 102 is embedded in the through hole of the front surface of the mounting bracket 101
  • the rear lens 103 is embedded in the through hole of the back surface of the mounting bracket 101
  • the mounting bracket 101 is detachably sleeved outside the camera area of the mobile terminal 200.
  • the front lens 102 is coaxial or substantially parallel with the optical axis of the front camera of the mobile terminal 200.
  • the rear lens 103 is coaxial or substantially parallel to the optical axis of the rear camera of the mobile terminal 200.
  • The front lens 102 is embedded in the through hole on the front of the mounting bracket 101 through a front lens sleeve 104, and the rear lens 103 is embedded in the through hole on the back of the mounting bracket 101 through a rear lens sleeve 105.
  • The front lens sleeve 104 and the rear lens sleeve 105 may be integral with the mounting bracket 101, may each be detachably fixed to the mounting bracket, or may each be fixedly connected to the mounting bracket 101.
  • the mounting bracket 101 has an annular side wall 106 forming a hollow cavity, the shape of which matches the camera area of the mobile terminal 200, so that the mounting bracket 101 can be fixed to the camera area of the mobile terminal 200.
  • The mounting bracket 101 may or may not have a top wall; the figures show a mounting bracket without a top wall (see FIG. 10). As shown in FIGS. 6 and 10, because the top of a mobile terminal is usually curved, the front and back side walls 106 of the mounting bracket 101 extend a predetermined distance toward the middle of the top to form fixing portions 107, so that the side wall 106 fits the mobile terminal 200 better and the mounting bracket 101 does not slip off easily. Referring to FIG. 5, the bottom end of the side wall 106 that mates with the side of the mobile terminal 200 is provided with a protrusion 108 to better secure the mounting bracket 101 to the mobile terminal 200. Referring to FIG. 7, the front of the mounting bracket 101 is further provided with a through hole 109 corresponding to the position of the speaker of the mobile terminal 200, so that the lens assembly 100 does not affect the speaker playback of the mobile terminal.
  • the mobile terminal 200 may be a mobile phone, a tablet computer, or the like.
  • Both the front lens 102 and the rear lens 103 may be wide-angle lenses or fisheye lenses (i.e., ultra-wide-angle lenses).
  • FIGS. 1 to 10 show only a lens assembly used with one particular mobile terminal. For other mobile terminals, especially those whose cameras are positioned differently, the structure of the lens assembly provided by the embodiments of the present invention can be adapted accordingly, as shown in FIGS. 11, 12, and 13.
  • After the lens assembly is fixed to the mobile terminal, plane video can be shot by launching the plane shooting application installed in the mobile terminal, which controls the front camera or the rear camera to shoot in cooperation with the front lens or the rear lens. Since the front camera and the rear camera are covered by the front lens and the rear lens respectively, the images they capture are actually plane video frames captured through the front lens or the rear lens; and because the front and rear lenses are wide-angle or fisheye lenses, a 180-degree field of view can be achieved.
  • When a panoramic picture or video is required, the panoramic shooting application installed in the mobile terminal is launched to control the front camera and the rear camera simultaneously. Since the front camera and the rear camera are covered by the front lens and the rear lens respectively, the images they capture are actually images captured through the front lens and the rear lens; and because both lenses are wide-angle or fisheye lenses achieving a 180-degree field of view, the two images captured through the front lens and the rear lens can be mapped into the spherical panorama mathematical model, their overlapping regions fused, and the two images stitched into a spherical panoramic image covering all viewing directions in the horizontal and vertical directions.
  • Referring to FIG. 14, an embodiment of the present invention further provides a method in which a mobile terminal cooperates with a lens assembly to realize plane shooting, the method including the following steps.
  • S101: the mobile terminal launches a plane shooting application installed in the mobile terminal, the mobile terminal being fitted with a lens assembly according to an embodiment of the present invention.
  • After S101, the method may further include: the mobile terminal receives a user instruction to enable the anti-shake function.
  • S102: the mobile terminal acquires, in real time, the current-state timestamp, accelerometer values, and angular-velocity values of the gyroscope in the mobile terminal.
  • Acquiring the accelerometer values in real time may specifically be: reading the three-axis accelerometer values with the gravity sensor. Acquiring the angular-velocity values in real time may specifically be: reading the three-axis angular-velocity values with the angular-velocity sensor.
  • After S102, the accelerometer values may be denoised by low-pass filtering: d'_i = α·d_i + (1 − α)·R_i·d'_{i−1}, where d_i is the accelerometer value at the i-th moment, d'_i is the low-pass-filtered accelerometer value at the i-th moment, d'_{i−1} is the filtered accelerometer value at the (i−1)-th moment, R_i is the relative rotation of the gyroscope for the i-th video frame, and ω_i is the angular-velocity value at the i-th moment. The smoothing factor α is determined by the cutoff frequency f_c of the low-pass filter, the time constant Rc, and the sampling interval Δt of the gyroscope data.
  • S103: the mobile terminal fuses the accelerometer values and angular-velocity values with an extended Kalman filter to estimate the rotation from the mobile terminal to the world coordinate system.
  • The extended Kalman filter linearizes the nonlinear system and then applies Kalman filtering; the Kalman filter is an efficient recursive filter that estimates the state of a dynamic system from a series of incomplete and noisy measurements.
  • Referring to FIG. 15, S103 may specifically include the following steps:
  • S1031: initialize the state rotation from the initially measured acceleration d_0 and the world-frame gravity vector g, and initialize the process covariance.
  • S1032: compute the state transition matrix at the K-th moment from the angular-velocity value ω_k: Φ(ω_k) = exp(−[ω_k·Δt]_×), where Δt is the sampling interval of the gyroscope data.
  • S1033: compute the covariance matrix Q_k of the state noise, and update the prior estimate of the state rotation and the prior estimate of the process covariance from the posterior estimates at the (K−1)-th moment.
  • S1034: update the observation-noise variance matrix R_k from the acceleration value d_k, compute the observation Jacobian matrix H_k, and compute the error e_k between the current observation and the estimated observation, where α is the smoothing factor of the acceleration change, β is the influence factor of the acceleration magnitude, h is the observation function with h(q, v) = q·g + v_k, g is the gravity vector in the world coordinate system, q is the state quantity (the rotation from the world coordinate system to the gyroscope coordinate system), and v_k is the measurement noise.
  • S1035: update the optimal Kalman gain matrix K_k at the k-th moment.
  • S1036: update the posterior estimate of the rotation from the mobile terminal to the world coordinate system and the posterior estimate of the process covariance from the optimal Kalman gain matrix K_k and the observation error e_k.
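To illustrate the propagation step S1032, the sketch below computes Φ(ω_k) = exp(−[ω_k·Δt]_×) in Python via the matrix exponential of the skew-symmetric (cross-product) matrix. The sample angular-velocity value in the usage line is hypothetical.

```python
import numpy as np
from scipy.linalg import expm

def skew(v):
    """Skew-symmetric (cross-product) matrix [v]_x of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def state_transition(omega_k, dt):
    """Phi(omega_k) = exp(-[omega_k * dt]_x): the incremental rotation used
    to propagate the orientation state over one gyroscope sample."""
    return expm(-skew(omega_k * dt))

# usage: propagate a world-to-gyro rotation matrix over one sample
omega_k = np.array([0.01, -0.02, 0.005])   # rad/s, hypothetical gyro reading
R_prev = np.eye(3)
R_pred = state_transition(omega_k, dt=0.005) @ R_prev
```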
  • S104: the front camera of the mobile terminal is controlled to capture plane video frames through the front lens of the lens assembly, or the rear camera is controlled to capture plane video frames through the rear lens of the lens assembly.
  • S105: the mobile terminal synchronizes the gyroscope timestamps with the timestamps of the plane video frames.
  • S105 may specifically be: the mobile terminal synchronizes the gyroscope timestamps with the timestamps of the plane video frames such that t_k ≥ t_j > t_{k−1}, where t_j is the timestamp of the plane video frame, t_k is the timestamp of the K-th gyroscope sample, and t_{k−1} is the timestamp of the (K−1)-th gyroscope sample.
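A minimal sketch of this synchronization step, assuming the gyroscope and frame timestamps are sorted arrays expressed on the same clock: for each frame timestamp t_j it finds the gyroscope index k satisfying t_k ≥ t_j > t_{k−1}.

```python
import numpy as np

def bracket_gyro_samples(frame_ts, gyro_ts):
    """For each frame timestamp t_j, return the gyro index k such that
    gyro_ts[k-1] < t_j <= gyro_ts[k] (the condition t_k >= t_j > t_{k-1})."""
    k = np.searchsorted(gyro_ts, frame_ts, side="left")
    return np.clip(k, 1, len(gyro_ts) - 1)
```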
  • S106: the mobile terminal performs quaternion interpolation on the gyroscope states to obtain the rotation matrix corresponding to the plane video frame.
  • S106 may specifically include the following steps: the mobile terminal computes the relative rotation r_k between neighboring gyroscope timestamps from the state posterior estimates at the k-th and (k−1)-th moments (the rotations from the world coordinate system to the gyroscope coordinate system); the mobile terminal then interpolates to obtain the relative rotation of the plane video frame with respect to the k-th sample, R_j = γ·I + (1 − γ)·r_k, where γ is the interpolation weight determined from the timestamps t_j, t_k, and t_{k−1}; finally, the mobile terminal computes the rotation matrix of the j-th frame of the plane video.
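One common way to realize the quaternion-interpolation step is spherical linear interpolation (slerp) between the bracketing gyroscope orientations; the sketch below is an illustration of that approach, not the patent's exact formula, and the weight gamma = (t_j − t_{k−1})/(t_k − t_{k−1}) is an assumption.

```python
import numpy as np

def slerp(q0, q1, gamma):
    """Spherical linear interpolation between unit quaternions q0 and q1.
    gamma = 0 returns q0, gamma = 1 returns q1."""
    dot = np.dot(q0, q1)
    if dot < 0.0:                 # take the short path on the quaternion sphere
        q1, dot = -q1, -dot
    if dot > 0.9995:              # nearly identical: fall back to normalized lerp
        q = q0 + gamma * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - gamma) * theta) * q0
            + np.sin(gamma * theta) * q1) / np.sin(theta)

def frame_quaternion(q_km1, q_k, t_j, t_km1, t_k):
    """Orientation at the frame timestamp t_j between gyro samples t_{k-1}, t_k."""
    gamma = (t_j - t_km1) / (t_k - t_km1)   # assumed interpolation weight
    return slerp(q_km1, q_k, gamma)
```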
  • S107: the mobile terminal rotates the plane video frame according to the current rotation matrix to generate a stabilized plane video frame.
  • S107 may specifically include the following steps: the mobile terminal maps the grid points of the latitude-longitude two-dimensional image to spherical coordinates; the mobile terminal then traverses all points on the unit sphere and rotates them with the current rotation matrix to generate a stabilized plane video frame, using [x_new, y_new, z_new]^T = Q_j·[x, y, z]^T + t, where [x, y, z]^T are the spherical coordinates before rotation, [x_new, y_new, z_new]^T are the spherical coordinates after rotation, Q_j is the current rotation matrix, and t is the displacement vector, t = [0, 0, 0]^T.
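Below is a sketch of this remapping step in Python with OpenCV, assuming the frame is stored as a latitude-longitude (equirectangular) image; it rotates every point of the unit sphere by Q_j and resamples the frame accordingly.

```python
import numpy as np
import cv2

def stabilize_equirect_frame(frame, Q_j):
    """Rotate an equirectangular (latitude-longitude) frame by the per-frame
    rotation matrix Q_j, as in step S107, via inverse remapping."""
    h, w = frame.shape[:2]
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi      # [-pi, pi)
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi      # [pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)
    # grid points on the unit sphere
    xyz = np.stack([np.cos(lat) * np.cos(lon),
                    np.cos(lat) * np.sin(lon),
                    np.sin(lat)], axis=-1)
    # inverse mapping: xyz @ Q_j applies Q_j^T to each direction, i.e. it
    # rotates output directions back into the source frame
    xyz_src = xyz @ Q_j
    lon_s = np.arctan2(xyz_src[..., 1], xyz_src[..., 0])
    lat_s = np.arcsin(np.clip(xyz_src[..., 2], -1.0, 1.0))
    map_x = ((lon_s + np.pi) / (2.0 * np.pi) * w - 0.5).astype(np.float32)
    map_y = ((np.pi / 2.0 - lat_s) / np.pi * h - 0.5).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_WRAP)
```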
  • Referring to FIG. 16, another embodiment of the present invention further provides a method in which a mobile terminal cooperates with a lens assembly to realize plane shooting, the method including the following steps.
  • S201: the mobile terminal launches a plane shooting application installed in the mobile terminal, the mobile terminal being fitted with a lens assembly according to an embodiment of the present invention.
  • After S201, the method may further include: the mobile terminal receives a user instruction to enable the anti-shake function.
  • S202: the mobile terminal acquires, in real time, its current-state timestamp, accelerometer values, and angular-velocity values.
  • Acquiring the accelerometer values in real time may specifically be: reading the three-axis accelerometer values with the gravity sensor. Acquiring the angular-velocity values in real time may specifically be: reading the three-axis angular-velocity values with the angular-velocity sensor.
  • After S202, the accelerometer values and angular-velocity values may be denoised by low-pass filtering of the form d'_i = α·d_i + (1 − α)·d'_{i−1}, where d_i is the accelerometer or angular-velocity value at the i-th moment, d'_i and d'_{i−1} are the filtered values at the i-th and (i−1)-th moments, and α is the smoothing factor determined by the cutoff frequency f_c, the time constant Rc, and the sampling interval Δt.
  • S203: the mobile terminal fuses the accelerometer values and angular-velocity values with an extended Kalman filter to estimate the rotation vector of the current state.
  • The extended Kalman filter linearizes the nonlinear system and then applies Kalman filtering; the Kalman filter is an efficient recursive filter that estimates the state of a dynamic system from a series of incomplete and noisy measurements.
  • Referring to FIG. 17, S203 may specifically include the following steps:
  • S2031: the mobile terminal computes the state transition matrix F_k at time k from the angular-velocity values, and computes the prediction residual at the current moment from the accelerometer values, the gravity vector g in the reference coordinate system, and the rotation matrix of the previous state.
  • S2031 may specifically include: initializing the initial state transition matrix, the initial prediction covariance matrix, and the initial observation matrix; computing the state transition matrix F_k at time k and the observation (information) Jacobian matrix H_k, where x_{k−1} and x_k are the state estimates of the mobile terminal at times k−1 and k, f is the state-equation function, h is the observation-equation function, x is the state of the mobile terminal (the rotation angles about the three axes), x_{k−2} is the state at time k−2, u_{k−1} and u_k are the angular-velocity values at times k−1 and k, and w_{k−1} and w_k are the process noise at times k−1 and k; the mobile terminal then projects the vertically downward gravity acceleration of the reference coordinate system into the rigid-body coordinate system and computes the prediction residual, where z_k is the accelerometer value at time k after low-pass denoising, H_k is the Jacobian of the observation equation z_k = h(x_k, g, v_k) evaluated at the current state estimate, g is the vertically downward gravity vector in the reference coordinate system, g = [0, 0, −9.81]^T, and v_k is the measurement error.
  • S2032: the mobile terminal estimates the error covariance matrix of the current state from the estimated error covariance matrix P_{k−1|k−1} of the previous state, the state transition matrix F_k, and the process noise: P_{k|k−1} = F_k·P_{k−1|k−1}·F_k^T + Q_k, where Q_k is the covariance matrix of the process noise.
  • S2033: the mobile terminal computes the optimal Kalman gain matrix K_k of the current state from the estimated error covariance matrix P_{k|k−1}, the observation matrix H_k, and the noise covariance matrix R, where σ² is the noise variance (typically σ = 0.75) and H_k^T is the transpose of H_k.
  • S2034: the mobile terminal updates the state estimate from the optimal Kalman gain matrix K_k and the prediction residual, obtaining the rotation vector of the current state at time k by fusing the accelerometer and angular-velocity values, and updates the estimated covariance matrix P_{k|k} = (I − K_k·H_k)·P_{k|k−1}, where I is the identity matrix and P_{k|k} serves as the estimated error covariance P_{k−1|k−1} needed at the next moment.
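The sketch below illustrates one extended-Kalman-filter iteration matching the structure of S2031–S2034. The concrete state equation f, observation function h, and their Jacobians F_k and H_k are left as parameters, since the patent gives their exact expressions only as formula images; h is assumed to close over the gravity vector g.

```python
import numpy as np

def ekf_step(x_prev, P_prev, u_k, z_k, f, F_k, h, H_k, Q_k, R):
    """One EKF step following S2031-S2034.

    x_prev, P_prev : previous state (rotation vector) and covariance P_{k-1|k-1}
    u_k            : control input (angular-velocity value at time k)
    z_k            : measurement (low-pass-filtered accelerometer value)
    f, h           : state-equation and observation-equation functions
    F_k, H_k       : their Jacobians evaluated at the current estimates
    Q_k, R         : process-noise and measurement-noise covariance matrices
    """
    # S2031: predict the state and compute the residual against the
    # gravity observation projected into the body frame by h
    x_pred = f(x_prev, u_k)                 # predicted state x_{k|k-1}
    y_k = z_k - h(x_pred)                   # prediction residual (innovation)

    # S2032: predicted error covariance  P_{k|k-1} = F P F^T + Q
    P_pred = F_k @ P_prev @ F_k.T + Q_k

    # S2033: optimal Kalman gain  K = P H^T (H P H^T + R)^(-1)
    S = H_k @ P_pred @ H_k.T + R
    K_k = P_pred @ H_k.T @ np.linalg.inv(S)

    # S2034: update the state (rotation vector) and the covariance
    x_new = x_pred + K_k @ y_k
    P_new = (np.eye(len(x_prev)) - K_k @ H_k) @ P_pred
    return x_new, P_new
```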
  • S204: the mobile terminal computes the current rotation matrix from the rotation vector of the current state using the Rodrigues rotation formula.
  • The Rodrigues rotation formula computes the new vector obtained by rotating a given vector about a rotation axis by a given angle in three-dimensional space; it expresses the rotated vector in the frame formed by the original vector, the rotation axis, and their cross product.
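A short sketch of converting a rotation vector (axis times angle) into a rotation matrix with the Rodrigues formula; OpenCV's cv2.Rodrigues performs the same conversion.

```python
import numpy as np

def rodrigues_matrix(rot_vec):
    """Rotation matrix from a rotation vector (axis * angle), via
    R = I + sin(theta)*K + (1 - cos(theta))*K^2, K = skew(axis)."""
    theta = np.linalg.norm(rot_vec)
    if theta < 1e-12:
        return np.eye(3)                    # negligible rotation
    k = rot_vec / theta                     # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
```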
  • S205: the front camera of the mobile terminal is controlled to capture plane video frames through the front lens of the lens assembly, or the rear camera is controlled to capture plane video frames through the rear lens of the lens assembly.
  • S206: the mobile terminal rotates the plane video frame according to the current rotation matrix to generate a stabilized plane video frame.
  • S206 may specifically include the following steps: mapping the points of the latitude-longitude image to points of the spherical image, then traversing all points on the unit sphere and rotating them with the current rotation matrix M_k, [x_new, y_new, z_new]^T = M_k·[x, y, z]^T + t, where t is the displacement vector, t = [0, 0, 0]^T, to generate a stabilized plane video frame.
  • In the present invention, since the lens assembly includes a mounting bracket, a front lens, and a rear lens, when the lens assembly is mounted on the mobile terminal the front lens covers the front camera, the rear lens covers the rear camera, and both lenses are wide-angle or fisheye lenses. The front camera or the rear camera is therefore controlled by the plane shooting application installed in the mobile terminal and, together with the front lens or the rear lens, achieves flat video shooting at low cost and without a professional camera.
  • Because the gyroscope states are interpolated with quaternions to obtain the rotation matrix of the corresponding plane video frame, a more accurate rotation matrix can be obtained. The plane video frame is then rotated by the current rotation matrix to generate a stabilized frame, so shaky plane video frames can ultimately be stabilized, with strong robustness to high-noise scenes and most motion scenes.
  • In addition, the angle estimated from the accelerometer values is susceptible to interference (such as walking, hiking, or running), and the error accumulated from the angular velocity grows larger and larger over time. Because the rotation vector of the current state is estimated by fusing the accelerometer and angular-velocity values with an extended Kalman filter, and the current rotation matrix is then computed from this rotation vector with the Rodrigues rotation formula before the plane video frame is rotated, shaky plane video frames can ultimately be stabilized.
  • Referring to FIG. 18, an embodiment of the present invention further provides a method in which a mobile terminal cooperates with a lens assembly to realize panoramic shooting, the method including the following steps.
  • S301: the mobile terminal launches a panoramic shooting application installed in the mobile terminal, the mobile terminal being fitted with a lens assembly provided by an embodiment of the present invention.
  • S302: the mobile terminal pre-establishes a spherical panorama mathematical model, which takes the point of central symmetry between the spatial positions of the front camera and the rear camera of the mobile terminal as the sphere center and combines the spherical caps corresponding to the fields of view that the front camera and the rear camera can capture into a sphere.
  • S303: the front camera of the mobile terminal is controlled to capture one image through the front lens of the lens assembly while the rear camera is controlled to capture one image through the rear lens of the lens assembly.
  • S304: the two images are mapped into the spherical panorama mathematical model according to the correspondence between the lens imaging height and the field-of-view angle of the front and rear lenses.
  • S304 may specifically include the following steps: the pixels at each position of the two images are converted to the corresponding field-of-view angle positions in the spherical panorama mathematical model according to the correspondence between the lens imaging height and the field-of-view angle of the front and rear lenses; combined with each pixel's angle relative to the image center, the position of the pixel in the original image is uniquely determined on a spherical panorama mathematical model of arbitrary radius; and each field-of-view angle position on the spherical panorama mathematical model is filled with the pixel data of the corresponding position.
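The sketch below illustrates such a mapping in Python with OpenCV. It assumes an equidistant fisheye projection (imaging height r = f·θ) as the height-to-angle correspondence and a 200° lens field of view; real lenses would use their calibrated mapping, and overlap fusion is handled separately.

```python
import numpy as np
import cv2

def fisheye_to_equirect(front_img, rear_img, out_w=2048, out_h=1024,
                        fov_deg=200.0):
    """Map a front and a rear fisheye image onto one equirectangular
    panorama (a sketch under an assumed r = f * theta lens model)."""
    h_in, w_in = front_img.shape[:2]
    cx, cy = w_in / 2.0, h_in / 2.0
    half_fov = np.radians(fov_deg) / 2.0
    f = min(cx, cy) / half_fov                      # pixels per radian

    lon = (np.arange(out_w) + 0.5) / out_w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(out_h) + 0.5) / out_h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)                   # viewing directions,
    y = np.sin(lat)                                 # +Z = front lens axis
    z = np.cos(lat) * np.cos(lon)

    pano = np.zeros((out_h, out_w) + front_img.shape[2:], dtype=front_img.dtype)
    for img, sign in ((rear_img, -1.0), (front_img, 1.0)):
        zc, xc = sign * z, sign * x                 # this lens's local axes
        theta = np.arccos(np.clip(zc, -1.0, 1.0))   # angle off the lens axis
        r = f * theta                               # assumed equidistant model
        phi = np.arctan2(y, xc)
        map_x = (cx + r * np.cos(phi)).astype(np.float32)
        map_y = (cy + r * np.sin(phi)).astype(np.float32)
        warped = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
        valid = theta <= half_fov
        pano[valid] = warped[valid]
    return pano
```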
  • S305: the overlapping regions of the two images in the spherical panorama mathematical model are fused, thereby stitching the two images into a spherical panoramic image covering all viewing directions in the horizontal and vertical directions.
  • S305 may specifically be: in the overlapping region of the two images in the spherical panorama mathematical model, the pixel value of the point with the largest gray value is taken as the fused pixel value, thereby stitching the two images into a spherical panoramic image covering all viewing directions in the horizontal and vertical directions.
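A minimal sketch of this fusion rule, assuming both images have already been mapped onto the same spherical (equirectangular) model together with validity masks:

```python
import numpy as np
import cv2

def fuse_overlap(pano_front, pano_rear, mask_front, mask_rear):
    """Fuse two panoramas mapped onto the same spherical model: in the
    overlap, keep the pixel with the larger gray value (as in S305)."""
    gray_f = cv2.cvtColor(pano_front, cv2.COLOR_BGR2GRAY).astype(np.int16)
    gray_r = cv2.cvtColor(pano_rear, cv2.COLOR_BGR2GRAY).astype(np.int16)
    overlap = mask_front & mask_rear
    take_front = mask_front & (~mask_rear | (overlap & (gray_f >= gray_r)))
    return np.where(take_front[..., None], pano_front, pano_rear)
```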
  • The method may further include a step of anti-shake processing of the spherical panoramic image, which may specifically include the following steps:
  • the mobile terminal acquires, in real time, the current-state timestamp, accelerometer values, and angular-velocity values of the gyroscope in the mobile terminal;
  • the mobile terminal fuses the accelerometer values and angular-velocity values with an extended Kalman filter to estimate the rotation from the mobile terminal to the world coordinate system;
  • the mobile terminal synchronizes the gyroscope timestamps with the timestamps of the spherical panoramic images;
  • the mobile terminal performs quaternion interpolation on the gyroscope states to obtain the rotation matrix corresponding to the spherical panoramic image;
  • the mobile terminal rotates the spherical panoramic image according to the current rotation matrix to generate a stabilized spherical panoramic image.
  • Alternatively, the anti-shake step for the spherical panoramic image may include the following steps:
  • the mobile terminal acquires, in real time, its current-state timestamp, accelerometer values, and angular-velocity values;
  • the mobile terminal fuses the accelerometer values and angular-velocity values with an extended Kalman filter to estimate the rotation vector of the current state;
  • the mobile terminal computes the current rotation matrix from the rotation vector of the current state using the Rodrigues rotation formula;
  • the mobile terminal rotates the spherical panoramic image according to the current rotation matrix to generate a stabilized spherical panoramic image.
  • Because the mobile terminal is fitted with the lens assembly provided by the embodiments of the present invention, the front camera of the mobile terminal can be controlled to capture an image through the front lens of the lens assembly while the rear camera is controlled to capture an image through the rear lens; the two images are then mapped into the spherical panorama mathematical model according to the correspondence between the lens imaging height and the field-of-view angle of the front and rear lenses, and their overlapping regions in the model are fused, thereby stitching the two images into a spherical panoramic image covering all viewing directions in the horizontal and vertical directions.
  • The invention can capture a complete panoramic image in one shot and can record panoramic video, obtaining panoramic images more completely and efficiently while greatly simplifying the user's workflow and reducing the cost of panoramic shooting. In addition, because it is based on a spherical panorama mathematical model, the information-gathering capability of the capture device is maximized, giving it the ability to fully preserve the scene.
  • In the present invention, since the lens assembly includes a mounting bracket, a front lens, and a rear lens, when the lens assembly is mounted on the mobile terminal the front lens covers the front camera, the rear lens covers the rear camera, and both lenses are wide-angle or fisheye lenses. The front camera and the rear camera are therefore controlled simultaneously by the panoramic shooting application installed in the mobile terminal and, together with the front lens and the rear lens, achieve panoramic shooting at low cost and without a dedicated panoramic camera.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention is applicable to the field of video shooting and provides a method in which a mobile terminal cooperates with a lens assembly to realize plane shooting and panoramic shooting, and the lens assembly. The lens assembly includes a mounting bracket, a front lens, and a rear lens, wherein through holes are provided on both the front and the back of the mounting bracket, the front lens is embedded in the through hole on the front of the mounting bracket, the rear lens is embedded in the through hole on the back of the mounting bracket, and the mounting bracket is detachably sleeved over the camera area of the mobile terminal. When the lens assembly is mounted on the mobile terminal, the front lens covers the front camera of the mobile terminal, the rear lens covers the rear camera of the mobile terminal, and both the front lens and the rear lens are wide-angle or fisheye lenses. A plane shooting application installed in the mobile terminal controls the front camera or the rear camera of the mobile terminal to shoot and, in cooperation with the front lens or the rear lens, realizes flat video shooting at low cost and without a professional camera.

Description

移动终端与镜头组件配合实现平面拍摄、全景拍摄的方法和镜头组件 技术领域
本发明属于视频拍摄领域,尤其涉及一种移动终端与镜头组件配合实现平面拍摄、全景拍摄的方法和镜头组件。
背景技术
目前,广角的平面视频都是通过专业相机配合广角的镜头来拍摄的。然而,专业相机的成本很高,比较少人为了偶尔拍摄广角的平面视频而专门去买专业相机。目前,全景图片和视频都是通过专用的全景相机来拍摄的,全景相机由多个镜头组成,全景相机将多个镜头拍摄的图片和视频分别存储下来,再导出到电脑进行合成全景图片和视频。然而,专用的全景相机的成本很高,比较少人为了偶尔拍摄全景图片和视频而专门去买全景相机。因此,非常有必要提供一种成本低,且能与移动终端配合实现广角平面视频拍摄或全景拍摄的镜头组件。
技术问题
本发明的目的在于提供一种移动终端与镜头组件配合实现平面拍摄、全景拍摄的方法和镜头组件,旨在解决现有技术通过专业相机配合广角的镜头拍摄广角的平面视频或者通过专用的全景相机拍摄全景图片和视频,成本很高的问题。
技术解决方案
第一方面,本发明提供了一种镜头组件,所述镜头组件包括安装支架、前置镜头和后置镜头,其中,安装支架的正面和背面均设有通孔,前置镜头嵌入至安装支架的正面的通孔,后置镜头嵌入至安装支架的背面的通孔,安装支架可拆卸地套设在移动终端的摄像头区域的外面,镜头组件安装至移动终端时,前置镜头覆盖移动终端的前置摄像头,后置镜头覆盖移动终端的后置摄像头,前置镜头和后置镜头均是广角镜头或鱼眼镜头;通过安装在移动终端中的平面拍摄应用程序控制移动终端的前置摄像头或后置摄像头进行拍摄,配合所述前置镜头或后置镜头实现平面视频拍摄,或者,通过安装在移动终端中的全景拍摄应用程序同时控制移动终端的前置摄像头和后置摄像头进行拍摄,配合所述前置镜头和后置镜头实现全景拍摄。
进一步地,所述移动终端是手机或平板电脑;所述镜头组件不遮挡移动终端的显示屏。
进一步地,所述前置镜头与移动终端的前置摄像头的光轴共轴或大致平行,所述后置镜头与移动终端的后置摄像头的光轴共轴或大致平行。
进一步地,所述前置镜头通过前置镜头套筒嵌入至安装支架的正面的通孔,后置镜头通过后置镜头套筒嵌入至安装支架的背面的通孔。
进一步地,所述前置镜头套筒和后置镜头套筒与安装支架是一体的,或者,所述前置镜头套筒和后置镜头套筒分别可拆卸地固定于安装支架上,或者,前置镜头套筒和后置镜头套筒分别与安装支架固定连接。
进一步地,所述安装支架具有环状的侧壁,形成中空的腔体,腔体的形状与移动终端的摄像头区域匹配。
进一步地,所述安装支架的正面还开设有与移动终端的喇叭位置对应的通孔。
第二方面,本发明还提供了一种移动终端与镜头组件配合实现平面拍摄的方法,所述方法包括:
移动终端启动安装在移动终端中的平面拍摄应用程序,所述移动终端安装有如上述的镜头组件;
移动终端实时获取移动终端中的陀螺仪的当前状态时间戳、加速度计数值和角速度数值;
移动终端利用扩展卡尔曼滤波结合加速度计数值和角速度数值,估计得到移动终端到世界坐标系的旋转量;
控制移动终端的前置摄像头经由镜头组件的前置镜头采集平面视频帧,或者控制移动终端的后置摄像头经由镜头组件的后置镜头采集平面视频帧;
移动终端同步陀螺仪时间戳与平面视频帧的时间戳;
移动终端对陀螺仪的状态进行四元数插值获取对应平面视频帧的旋转矩阵;
移动终端根据当前的旋转矩阵旋转平面视频帧,生成稳定的平面视频帧。
第三方面,本发明还提供了一种移动终端与镜头组件配合实现平面拍摄的方法,所述方法包括:
移动终端启动安装在移动终端中的平面拍摄应用程序,所述移动终端安装有如上述的镜头组件;
移动终端实时获取移动终端的当前状态时间戳、加速度计数值和角速度数值;
移动终端利用扩展卡尔曼滤波结合加速度计数值和角速度数值,估计当前状态的旋转向量;
移动终端根据当前状态的旋转向量通过罗德里格旋转公式计算得到当前的旋转矩阵;
控制移动终端的前置摄像头经由镜头组件的前置镜头采集平面视频帧,或者控制移动终端的后置摄像头经由镜头组件的后置镜头采集平面视频帧;
移动终端根据当前的旋转矩阵旋转平面视频帧,生成稳定的平面视频帧。
第四方面,本发明还提供了一种移动终端与镜头组件配合实现全景拍摄的方法,所述方法包括:
移动终端启动安装在移动终端中的全景拍摄应用程序,所述移动终端安装有如上述的镜头组件;
移动终端预先建立球形全景数学模型,所述球形全景数学模型以移动终端的前置摄像头和后置摄像头的空间位置的中心对称点为球心,以所述前置摄像头和后置摄像头所能够采集到的视场角分别得到的球冠组合成球面;
控制移动终端的前置摄像头经由镜头组件的前置镜头采集一幅图像,同时控制移动终端的后置摄像头经由镜头组件的后置镜头采集一幅图像;
根据前置镜头和后置镜头的镜头成像高度和视场角大小之间的对应关系,将两幅图像映射到球形全景数学模型中;
对球形全景数学模型中的两幅图像的重叠区域做融合,从而将两幅图像拼接成覆盖了水平和竖直方向所有视角方位的球面全景图像。
有益效果
在本发明中,由于镜头组件包括安装支架、前置镜头和后置镜头,镜头组件安装至移动终端时,前置镜头覆盖移动终端的前置摄像头,后置镜头覆盖移动终端的后置摄像头,前置镜头和后置镜头均是广角镜头或鱼眼镜头。因此通过安装在移动终端中的平面拍摄应用程序控制移动终端的前置摄像头或后置摄像头进行拍摄,配合所述前置镜头或后置镜头实现平面视频拍摄,成本低,不需要专业的相机。通过安装在移动终端中的全景拍摄应用程序同时控制移动终端的前置摄像头和后置摄像头进行拍摄,配合所述前置镜头和后置镜头能实现全景拍摄,成本低,不需要专用的全景相机。
由于移动终端安装有本发明提供的镜头组件,又由于本发明的移动终端与镜头组件配合实现平面拍摄的方法中,对陀螺仪的状态进行四元数插值获取对应平面视频帧的旋转矩阵,因此能得到更为精确的旋转矩阵。然后根据当前的旋转矩阵旋转平面视频帧,生成稳定的平面视频帧。因此最终能稳定抖动的平面视频帧,对大噪声场景和大部分运动场景都有很强的 鲁棒性。
另外,因为加速度计数值估计出的角度,容易受到干扰(如行走,徒步,奔跑等),随着时间的累积,角速度的累积误差会越来越大。在本发明的移动终端与镜头组件配合实现平面拍摄的方法中,由于利用扩展卡尔曼滤波结合加速度计数值和角速度数值,估计当前状态的旋转向量,并根据当前状态的旋转向量通过罗德里格旋转公式计算到当前的旋转矩阵,然后旋转平面视频帧,因此最终能稳定抖动的平面视频帧。
由于移动终端安装有本发明实施例提供的镜头组件,因此本发明的移动终端与镜头组件配合实现全景拍摄的方法可以控制移动终端的前置摄像头经由镜头组件的前置镜头采集一幅图像,同时控制移动终端的后置摄像头经由镜头组件的后置镜头采集一幅图像,然后根据前置镜头和后置镜头的镜头成像高度和视场角大小之间的对应关系,将两幅图像映射到球形全景数学模型中,对球形全景数学模型中的两幅图像的重叠区域做融合,从而将两幅图像拼接成覆盖了水平和竖直方向所有视角方位的球面全景图像。本发明可以一次性采集完整全景图像,可以录制全景视频,能够更完整、更高效地获取全景图像,且极大简化了用户操作流程,降低全景拍摄成本;另外,由于以球形全景数学模型为基础,因此最大限度地扩大了拍摄装置的信息采集能力,使其具备完整的保存场景现场的能力。
附图说明
图1是本发明实施例提供的镜头组件的主视图。
图2是本发明实施例提供的镜头组件的后视图。
图3是本发明实施例提供的镜头组件的右视图。
图4是本发明实施例提供的镜头组件的俯视图。
图5是本发明实施例提供的镜头组件的仰视图。
图6是本发明实施例提供的镜头组件的剖视图。
图7是本发明实施例提供的镜头组件与移动终端配合的主视图。
图8是本发明实施例提供的镜头组件与移动终端配合的后视图。
图9是本发明实施例提供的镜头组件与移动终端配合的俯视图。
图10是本发明实施例提供的镜头组件与移动终端配合的右视图。
图11是本实用新型实施例提供的第二种配合移动终端实现全景拍摄的镜头组件与移动终端(Iphone 6,Iphone 6S,Iphone 7,Iphone 8)配合的示意图。
图12是本实用新型实施例提供的第三种配合移动终端实现全景拍摄的镜头组件与移动终端(Iphone 6Plus,Iphone 6S Plus,Iphone 7Plus,Iphone 8Plus)配合的示意图。
图13是本实用新型实施例提供的第四种配合移动终端实现全景拍摄的镜头组件与移动终端(Iphone X)配合的示意图。
图14是本发明实施例提供的移动终端与镜头组件配合实现平面拍摄的方法的流程图。
图15是本发明实施例提供的移动终端与镜头组件配合实现平面拍摄的方法中的S103的流程图。
图16是本发明另一实施例提供的移动终端与镜头组件配合实现平面拍摄的方法的流程图。
图17是本发明另一实施例提供的移动终端与镜头组件配合实现平面拍摄的方法中的S203的流程图。
图18是本发明实施例提供的移动终端与镜头组件配合实现全景拍摄的方法的流程图。
本发明的实施方式
为了使本发明的目的、技术方案及有益效果更加清楚明白,以下结合附图及实施例,对本发明进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本发明,并不用于限定本发明。
为了说明本发明所述的技术方案,下面通过具体实施例来进行说明。
请参阅图1至图10,本发明实施例提供的镜头组件100包括安装支架101、前置镜头102和后置镜头103,其中,安装支架101的正面和背面均设有通孔(图未示),前置镜头102嵌入至安装支架101的正面的通孔,后置镜头103嵌入至安装支架101的背面的通孔,安装支架101可拆卸地套设在移动终端200的摄像头区域的外面,镜头组件100安装至移动终端200时,前置镜头102覆盖移动终端200的前置摄像头,后置镜头103覆盖移动终端200的后置摄像头,镜头组件100不遮挡移动终端200的显示屏。
在本发明实施例中,为了使拍摄的效果较佳,前置镜头102与移动终端200的前置摄像头的光轴共轴或大致平行。后置镜头103与移动终端200的后置摄像头的光轴共轴或大致平行。
在本发明实施例中,前置镜头102通过前置镜头套筒104嵌入至安装支架101的正面的通孔,后置镜头103通过后置镜头套筒105嵌入至安装支架101的背面的通孔。前置镜头套筒104和后置镜头套筒105与安装支架101可以是一体的,或者,前置镜头套筒104和后置镜头套筒105分别可拆卸地固定于安装支架上,或者,前置镜头套筒104和后置镜头套筒105分别与安装支架101固定连接。安装支架101具有环状的侧壁106,形成中空的腔体,腔体的形状与移动终端200的摄像头区域匹配,使安装支架101可固定在移动终端200的摄像头区域。
安装支架101可以具有顶壁也可以没有顶壁,图中所示为没有顶壁的安装支架(请参阅图10)。如图6和图10所示,由于移动终端的顶部通常是弧形的,为了使安装支架101的侧壁106与移动终端200更好的配合,安装支架101的正面和背面的侧壁106向顶部的中间延伸预定距离,形成固定部107,使安装支架101不容易滑落。请参阅图5,安装支架101与移动终端200的侧面配合的侧壁106的底端设有凸起108,使安装支架101更好地固定在移动终端200。请参阅图7,安装支架101的正面还开设有与移动终端200的喇叭位置对应的通孔109,使镜头组件100不影响移动终端的喇叭播放效果。
在本发明实施例中,移动终端200可以是手机、平板电脑等。前置镜头102和后置镜头103均可以是广角镜头或鱼眼镜头(即超广角镜头)。
图1至图10仅是与其中一款移动终端配合使用的镜头组件,对于不同的移动终端,尤其是摄像头位置不同的移动终端,本发明实施例提供的镜头组件的结构也可以作适应性的修改,如图11、12和13所示。
当将本发明实施例提供的镜头组件固定到移动终端后,当需要拍摄平面视频时,可以通过启动安装在移动终端中的平面拍摄应用程序控制移动终端的前置摄像头或后置摄像头进行拍摄,配合所述前置镜头或后置镜头实现平面拍摄,由于前置摄像头和后置摄像头分别被前置镜头和后置镜头覆盖,因此前置摄像头和后置摄像头采集的图像实际上是经前置镜头和后置镜头采集的平面视频帧,由于前置镜头和后置镜头是广角镜头或鱼眼镜头,因此能达到180度的视角。
当将本发明实施例提供的镜头组件固定到移动终端后,当需要拍摄全景图片或视频时,可以通过启动安装在移动终端中的全景拍摄应用程序,同时控制移动终端的前置摄像头和后置摄像头进行拍摄,由于前置摄像头和后置摄像头分别被前置镜头和后置镜头覆盖,因此前置摄像头和后置摄像头采集的图像实际上是经前置镜头和后置镜头采集的图像,由于前置镜头和后置镜头是广角镜头或鱼眼镜头,能达到180度的视角,因此通过前置镜头、后置镜头、前置摄像头和后置摄像头采集的两幅图像;可以将两幅图像映射到球形全景数学模型中,对球形全景数学模型中的两幅图像的重叠区域做融合,从而将两幅图像拼接成覆盖了水平和竖直方向所有视角方位的球面全景图像。
请参阅图14,本发明实施例还提供了一种移动终端与镜头组件配合实现平面拍摄的方法,所述方法包括:
S101、移动终端启动安装在移动终端中的平面拍摄应用程序,所述移动终端安装有如本发明实施例提供的镜头组件;
在本发明实施例中,S101之后,所述方法还可以包括以下步骤:
移动终端接收用户选择的启动防抖功能的指令。
S102、移动终端实时获取移动终端中的陀螺仪的当前状态时间戳、加速度计数值和角速度数值;
在本发明实施例中,
实时获取移动终端中的陀螺仪的加速度计数值具体可以是:利用重力感应器读取三轴加速度计数值。
实时获取移动终端中的陀螺仪的角速度数值具体可以是:利用角速度感应器读取三轴角速度数值。
在本发明实施例中,S102之后还可以包括以下步骤:
利用低通滤波对加速度计数值进行降噪处理。具体可以包括以下步骤:
通过公式d’ i=α·d i+(1-α)·R i·d’ i-1对加速度计数值进行低通滤波降噪处理,其中,d’ i表示第i时刻经过低通滤波后的加速度计数值,d i表示第i时刻的加速度计数值,R i为陀螺仪第i帧视频的相对旋转量,
Figure PCTCN2018073444-appb-000001
ω i表示第i时刻的角速度数值,d’ i-1表示第i-1时刻时滤波后的加速度计数值,α表示平滑因子,
Figure PCTCN2018073444-appb-000002
其中f c表示低通滤波的截止频率,Rc表示时间常数,Δt表示陀螺仪数据的采样时间间隔。
S103、移动终端利用扩展卡尔曼滤波结合加速度计数值和角速度数值,估计得到移动终端到世界坐标系的旋转量;
扩展卡尔曼滤波是将非线性***线性化,然后进行卡尔曼滤波,卡尔曼滤波是一种高效率的递归滤波器,它能够从一系列的不完全包含噪声的测量中,估计动态***的状态。
请参阅图15,在本发明实施例中,S103具体可以包括以下步骤:
S1031、初始状态旋转量
Figure PCTCN2018073444-appb-000003
其中,d 0为初始测得的加速度数值,g为世界坐标系重力矢量;初始过程协方差
Figure PCTCN2018073444-appb-000004
S1032、利用角速度数值ω k计算第K时刻的状态转移矩阵Φ(ω k);
Φ(ω k)=exp(-[ω k·Δt] ×),其中ω k是第K时刻的角速度数值,Δt表示陀螺仪数据的采样时间间隔。
S1033、计算状态噪声的协方差矩阵Q k,更新状态旋转先验估计量
Figure PCTCN2018073444-appb-000005
和过程协方差先验估计矩阵
Figure PCTCN2018073444-appb-000006
Figure PCTCN2018073444-appb-000007
Q k为状态噪声的协方差矩阵;
Figure PCTCN2018073444-appb-000008
其中,
Figure PCTCN2018073444-appb-000009
是第K-1时刻的状态旋转后验估计量;
Figure PCTCN2018073444-appb-000010
其中,
Figure PCTCN2018073444-appb-000011
是第K-1时刻的过程协方差后验估计矩阵;
S1034、由加速度数值d k更新观测量的噪声方差矩阵R k,计算观测转移雅克比矩阵H k,计算当前观测量和估计观测量误差e k
Figure PCTCN2018073444-appb-000012
其中,
Figure PCTCN2018073444-appb-000013
Figure PCTCN2018073444-appb-000014
α为加速度变化量的平滑因子,β为加速度模长的影响因子;
Figure PCTCN2018073444-appb-000015
其中h为观察函数,h(q,v)=q·g+v k,g世界坐标系下的重力矢量,q为状态量,即世界坐标系到陀螺仪坐标系的旋转量,v k为测量噪声;
Figure PCTCN2018073444-appb-000016
S1035、更新第k时刻的最优卡尔曼增益矩阵K k
Figure PCTCN2018073444-appb-000017
S1036、根据最优卡尔曼增益矩阵K k和观测量误差e k更新移动终端到世界坐标系的旋转后验估计量
Figure PCTCN2018073444-appb-000018
和过程协方差后验估计矩阵
Figure PCTCN2018073444-appb-000019
Figure PCTCN2018073444-appb-000020
Figure PCTCN2018073444-appb-000021
S104、控制移动终端的前置摄像头经由镜头组件的前置镜头采集平面视频帧,或者控制移动终端的后置摄像头经由镜头组件的后置镜头采集平面视频帧;
S105、移动终端同步陀螺仪时间戳与平面视频帧的时间戳;
在本发明实施例中,S105具体可以为:
移动终端同步陀螺仪时间戳与平面视频帧的时间戳,使t k≥t j>t k-1,其中t j是平面视频帧的时间戳,t k为陀螺仪第K帧的时间戳,t k-1为陀螺仪第K-1帧的时间戳。
S106、移动终端对陀螺仪的状态进行四元数插值获取对应平面视频帧的旋转矩阵;
在本发明实施例中,S106具体可以包括以下步骤:
移动终端计算邻近陀螺仪时间戳的相对旋转量,
Figure PCTCN2018073444-appb-000022
其中,r k为第K时刻的相对旋转量,
Figure PCTCN2018073444-appb-000023
Figure PCTCN2018073444-appb-000024
为第k和k-1时刻的状态后验估计量,即世界坐标系到陀螺仪坐标系的旋转量;
移动终端进行四元数插值获取平面视频帧到第k帧的相对旋转量,R j=γ·I+(1-γ)·r k,其中,R j为第k帧的相对旋转量,
Figure PCTCN2018073444-appb-000025
移动终端计算平面视频帧中第j帧视频的旋转矩阵
Figure PCTCN2018073444-appb-000026
S107、移动终端根据当前的旋转矩阵旋转平面视频帧,生成稳定的平面视频帧。
在本发明实施例中,S107具体可以包括以下步骤:
移动终端把经纬度二维图像上的栅格点映射到球面坐标;
移动终端遍历单位球上的所有点,利用当前的旋转矩阵对单位球上的所有点进行旋转,生成稳定的平面视频帧;
其中,利用当前的旋转矩阵对单位球上的所有点进行旋转具体采用以下的公式:
Figure PCTCN2018073444-appb-000027
其中,[x,y,z] T表示单位圆旋转之前的球面坐标,[x new,y new,z new] T表示旋转后的球面坐标,Q j表示当前的旋转矩阵,t表示位移向量,t=[0,0,0] T
请参阅图16,本发明另一实施例还提供了一种移动终端与镜头组件配合实现平面拍摄的方法,所述方法包括:
S201、移动终端启动安装在移动终端中的平面拍摄应用程序,所述移动终端安装有如本发明实施例提供的镜头组件;
在本发明实施例中,S201之后,所述方法还可以包括以下步骤:
移动终端接收用户选择的启动防抖功能的指令。
S202、移动终端实时获取移动终端的当前状态时间戳、加速度计数值和角速度数值;
在本发明实施例中,
实时获取移动终端的加速度计数值具体可以是:利用重力感应器读取三轴加速度计数值。
实时获取移动终端的角速度数值具体可以是:利用角速度感应器读取三轴角速度数值。
在本发明实施例中,S202之后还可以包括以下步骤:
利用低通滤波对加速度计数值和角速度数值进行降噪处理。具体可以包括以下步骤:
通过公式d’ i=α·d i+(1-α)·d’ i-1分别对加速度计数值和角速度数值进行低通滤波降噪处理,其中,d i表示第i时刻的加速度计数值或角速度数值;d’ i表示第i时刻经过低通滤波后的加速度计数值或角速度数值;d’ i-1表示第i-1时刻时滤波后的加速度计数值或角速度数值;α表示平滑因子,
Figure PCTCN2018073444-appb-000028
其中f c表示低通滤波的截止频率,Rc表示时间常数,Δt表示采样时间间隔。
S203、移动终端利用扩展卡尔曼滤波结合加速度计数值和角速度数值,估计当前状态的旋转向量;
扩展卡尔曼滤波是将非线性***线性化,然后进行卡尔曼滤波,卡尔曼滤波是一种高效率的递归滤波器,它能够从一系列的不完全包含噪声的测量中,估计动态***的状态。
请参阅图17,在本发明实施例中,S203具体可以包括以下步骤:
S2031、移动终端利用角速度数值计算k时刻的状态转移矩阵F k;利用加速度计数值,结合参考坐标系下重力矢量g和上一状态的旋转矩阵计算当前时刻预测余量
Figure PCTCN2018073444-appb-000029
在本发明实施例中,S2031具体可以包括以下步骤:
移动终端对初始状态转移矩阵、初始预测协方差矩阵和初始观测矩阵进行初始化,其中,初始状态转移矩阵
Figure PCTCN2018073444-appb-000030
初始预测协方差矩阵
Figure PCTCN2018073444-appb-000031
初始 观测矩阵
Figure PCTCN2018073444-appb-000032
移动终端计算k时刻的状态转移矩阵
Figure PCTCN2018073444-appb-000033
计算观测信息矩阵
Figure PCTCN2018073444-appb-000034
其中,x k-1表示k-1时刻的移动终端的状态估计,x k表示k时刻的移动终端的状态估计,
Figure PCTCN2018073444-appb-000035
表示偏微分符号,f表示状态方程函数,x表示移动终端的状态,即三个轴方向上的旋转角度,h表示观测方程函数,
Figure PCTCN2018073444-appb-000036
x k-2表示第k-2时刻的移动终端的状态,u k-1表示第k-1时刻的角速度数值,w k-1表示k-1时刻的过程噪声,
Figure PCTCN2018073444-appb-000037
表示利用k-2时刻来预测第k-1时刻移动终端的估计状态,x k-1表示第k-1时刻的移动终端的状态,u k表示第k时刻的角速度数值,w k表示第k时刻的过程噪声,
Figure PCTCN2018073444-appb-000038
表示利用k-1时刻来预测第k时刻移动终端的估计状态,x k-2=[X k-2,Y k-2,Z k-2] T,其中,X k-2,Y k-2,Z k-2表示第k-2时刻参考系坐标系在X轴,Y轴,Z轴上的旋转角度,x k-1=[X k-1,Y k-1,Z k-1] T,其中,X k-1,Y k-1,Z k-1表示第k-1时刻参考系坐标系在X轴,Y轴,Z轴上的旋转角度,T表示转置;
移动终端把参考系坐标系下的垂直向下的重力加速度投影到刚体坐标系下,通过公式
Figure PCTCN2018073444-appb-000039
计算预测余量
Figure PCTCN2018073444-appb-000040
其中,z k为k时刻利用低通滤波进行降噪处理后的加速度计数值,H k是观测信息矩阵,表示观测方程z k=h(x k,g,v k)使用当前估计状态计算的雅可比(Jacobian)矩阵,其中,g表示参考坐标系下的垂直向下的重力矢量,g=[0,0,-9.81] T,v k表示为测量误差。
S2032、移动终端利用上一状态的估计误差协方差矩阵P k-1|k-1、当前状态的状态转移矩阵F k和过程噪声Q估计当前状态的误差协方差矩阵P k|k-1
在本发明实施例中,S2032具体可以利用公式
Figure PCTCN2018073444-appb-000041
计算出的状态预测估计协方差矩阵P k|k-1,其中,P k-1|k-1表示k-1时刻状态的估计协方差矩阵,Q k表示过程噪声的协方差矩阵,
Figure PCTCN2018073444-appb-000042
dt表示陀螺仪数据的采样间隔时间,F k表示k时刻的状态转移矩阵,
Figure PCTCN2018073444-appb-000043
表示F k的转置。
S2033、移动终端利用估计的当前状态的误差协方差矩阵P k|k-1、观测矩阵H k和噪声方差矩阵R计算当前状态的最优卡尔曼增益矩阵K k
在本发明实施例中,S2033具体可以包括以下步骤:
利用状态预测估计协方差矩阵P k|k-1来计算k时刻的最优卡尔曼增益矩阵K k
Figure PCTCN2018073444-appb-000044
R表示噪声协方差矩阵,
Figure PCTCN2018073444-appb-000045
σ 2表示噪声方差,一般地σ=0.75,H k表示k时刻的观测信息雅克比矩阵,
Figure PCTCN2018073444-appb-000046
表示H k的转置。
S2034、移动终端根据当前状态的最优卡尔曼增益矩阵K k和当前时刻预测余量
Figure PCTCN2018073444-appb-000047
更新 当前状态估计旋转向量
Figure PCTCN2018073444-appb-000048
在本发明实施例中,S2034具体可以包括以下步骤:
更新状态估计得到k时刻通过融合加速度计数值和角速度数值得到的当前状态的旋转向量
Figure PCTCN2018073444-appb-000049
更新估计协方差矩阵P k|k,P k|k=(I-K k·H k)P k|k-1,其中I是单位矩阵,P k|k就是下一时刻需要的估计误差协方差矩阵P k-1|k-1
S204、移动终端根据当前状态的旋转向量通过罗德里格旋转公式计算得到当前的旋转矩阵;
罗德里格旋转公式是计算三维空间中,一个向量绕旋转轴旋转给定角度以后得到的新向量的计算公式。这个公式使用原向量,旋转轴及它们叉积作为标架表示出旋转以后的向量。
S205、控制移动终端的前置摄像头经由镜头组件的前置镜头采集平面视频帧,或者控制移动终端的后置摄像头经由镜头组件的后置镜头采集平面视频帧;
S206、移动终端根据当前的旋转矩阵旋转平面视频帧,生成稳定的平面视频帧。
在本发明实施例中,S206具体可以包括以下步骤:
把经纬图像上的点映射到球型图像的点;
遍历单位球上的所有点,利用当前的旋转矩阵对单位球上的所有点进行旋转,生成稳定的平面视频帧。
其中,利用当前的旋转矩阵对单位球上的所有点进行旋转具体可以采用以下的公式:
Figure PCTCN2018073444-appb-000050
其中,x,y,z表示单位圆旋转之前的球面坐标,x new,y new,z new表示旋转后的球面坐标,M k表示当前的旋转矩阵,t表示位移向量,t=[0,0,0] T
在本发明中,由于镜头组件包括安装支架、前置镜头和后置镜头,镜头组件安装至移动终端时,前置镜头覆盖移动终端的前置摄像头,后置镜头覆盖移动终端的后置摄像头,前置镜头和后置镜头均是广角镜头或鱼眼镜头;因此通过安装在移动终端中的平面拍摄应用程序控制移动终端的前置摄像头或后置摄像头进行拍摄,配合所述前置镜头或后置镜头实现平面视频拍摄,成本低,不需要专业的相机。
由于移动终端安装有本发明提供的镜头组件,又由于本发明的移动终端与镜头组件配合实现平面拍摄的方法中,对陀螺仪的状态进行四元数插值获取对应平面视频帧的旋转矩阵,因此能得到更为精确的旋转矩阵。然后根据当前的旋转矩阵旋转平面视频帧,生成稳定的平面视频帧。因此最终能稳定抖动的平面视频帧,对大噪声场景和大部分运动场景都有很强的鲁棒性。
另外,因为加速度计数值估计出的角度,容易受到干扰(如行走,徒步,奔跑等),随着时间的累积,角速度的累积误差会越来越大。在本发明的移动终端与镜头组件配合实现平面拍摄的方法中,由于利用扩展卡尔曼滤波结合加速度计数值和角速度数值,估计当前状态的旋转向量,并根据当前状态的旋转向量通过罗德里格旋转公式计算到当前的旋转矩阵,然后旋转平面视频帧,因此最终能稳定抖动的平面视频帧。
请参阅图18,本发明实施例还提供了一种移动终端与镜头组件配合实现全景拍摄的方法,所述方法包括:
S301、移动终端启动安装在移动终端中的全景拍摄应用程序,所述移动终端安装有本发明实施例提供的镜头组件;
S302、移动终端预先建立球形全景数学模型,所述球形全景数学模型以移动终端的前置摄像头和后置摄像头的空间位置的中心对称点为球心,以所述前置摄像头和后置摄像头所能够采集到的视场角分别得到的球冠组合成球面。
S303、控制移动终端的前置摄像头经由镜头组件的前置镜头采集一幅图像,同时控制移动终端的后置摄像头经由镜头组件的后置镜头采集一幅图像;
S304、根据前置镜头和后置镜头的镜头成像高度和视场角大小之间的对应关系,将两幅图像映射到球形全景数学模型中;
在本发明实施例中,S304具体可以包括以下步骤:
将两幅图像中每个位置的像素点根据前置镜头和后置镜头的镜头成像高度和视场角大小之间的对应关系,转化为球形全景数学模型中对应的视场角位置;
结合每个位置的像素点在图像中相对于圆心的角度ω,在任意半径的球形全景数学模型上唯一确定像素点在原始图像中的位置;
在球形全景数学模型上将每个视场角位置填充对应位置的像素数据。
S305、对球形全景数学模型中的两幅图像的重叠区域做融合,从而将两幅图像拼接成覆盖了水平和竖直方向所有视角方位的球面全景图像。
在本发明实施例中,S305具体可以为:
在球形全景数学模型中的两幅图像的重叠区域,取灰度值最大的点的像素值作为融合后的像素值,从而将两幅图像拼接成覆盖了水平和竖直方向所有视角方位的球面全景图像。
所述方法还可以包括对球面全景图像进行防抖的步骤,具体可以包括以下步骤:
移动终端实时获取移动终端中的陀螺仪的当前状态时间戳、加速度计数值和角速度数值;
移动终端利用扩展卡尔曼滤波结合加速度计数值和角速度数值,估计得到移动终端到世界坐标系的旋转量;
移动终端同步陀螺仪时间戳与球面全景图像的时间戳;
移动终端对陀螺仪的状态进行四元数插值获取对应球面全景图像的旋转矩阵;
移动终端根据当前的旋转矩阵旋转球面全景图像,生成稳定的球面全景图像。
所述对球面全景图像进行防抖的步骤,具体还可以是包括以下步骤:
移动终端实时获取移动终端的当前状态时间戳、加速度计数值和角速度数值;
移动终端利用扩展卡尔曼滤波结合加速度计数值和角速度数值,估计当前状态的旋转向量;
移动终端根据当前状态的旋转向量通过罗德里格旋转公式计算到当前的旋转矩阵;
移动终端根据当前的旋转矩阵旋转球面全景图像,生成稳定的球面全景图像。
由于移动终端安装有本发明实施例提供的镜头组件,因此可以控制移动终端的前置摄像头经由镜头组件的前置镜头采集一幅图像,同时控制移动终端的后置摄像头经由镜头组件的后置镜头采集一幅图像,然后根据前置镜头和后置镜头的镜头成像高度和视场角大小之间的对应关系,将两幅图像映射到球形全景数学模型中,对球形全景数学模型中的两幅图像的重叠区域做融合,从而将两幅图像拼接成覆盖了水平和竖直方向所有视角方位的球面全景图像。本发明可以一次性采集完整全景图像,可以录制全景视频,能够更完整、更高效地获取全景图像,且极大简化了用户操作流程,降低全景拍摄成本;另外,由于以球形全景数学模型为基础,因此最大限度地扩大了拍摄装置的信息采集能力,使其具备完整的保存场景现场的能力。
在本发明中,由于镜头组件包括安装支架、前置镜头和后置镜头,镜头组件安装至移动终端时,前置镜头覆盖移动终端的前置摄像头,后置镜头覆盖移动终端的后置摄像头,前置镜头和后置镜头均是广角镜头或鱼眼镜头;因此通过安装在移动终端中的全景拍摄应用程序同时控制移动终端的前置摄像头和后置摄像头进行拍摄,配合所述前置镜头和后置镜头能实现全景拍摄,成本低,不需要专用的全景相机。
以上所述仅为本发明的较佳实施例而已,并不用以限制本发明,凡在本发明的精神和原则之内所作的任何修改、等同替换和改进等,均应包含在本发明的保护范围之内。

Claims (18)

  1. 一种镜头组件,其特征在于,所述镜头组件包括安装支架、前置镜头和后置镜头,其中,安装支架的正面和背面均设有通孔,前置镜头嵌入至安装支架的正面的通孔,后置镜头嵌入至安装支架的背面的通孔,安装支架可拆卸地套设在移动终端的摄像头区域的外面,镜头组件安装至移动终端时,前置镜头覆盖移动终端的前置摄像头,后置镜头覆盖移动终端的后置摄像头,前置镜头和后置镜头均是广角镜头或鱼眼镜头;通过安装在移动终端中的平面拍摄应用程序控制移动终端的前置摄像头或后置摄像头进行拍摄,配合所述前置镜头或后置镜头实现平面视频拍摄,或者,通过安装在移动终端中的全景拍摄应用程序同时控制移动终端的前置摄像头和后置摄像头进行拍摄,配合所述前置镜头和后置镜头实现全景拍摄。
  2. 如权利要求1所述的镜头组件,其特征在于,所述移动终端是手机或平板电脑;所述镜头组件不遮挡移动终端的显示屏。
  3. 如权利要求1所述的镜头组件,其特征在于,所述前置镜头与移动终端的前置摄像头的光轴共轴或大致平行,所述后置镜头与移动终端的后置摄像头的光轴共轴或大致平行。
  4. 如权利要求1所述的镜头组件,其特征在于,所述前置镜头通过前置镜头套筒嵌入至安装支架的正面的通孔,后置镜头通过后置镜头套筒嵌入至安装支架的背面的通孔。
  5. 如权利要求1所述的镜头组件,其特征在于,所述前置镜头套筒和后置镜头套筒与安装支架是一体的,或者,所述前置镜头套筒和后置镜头套筒分别可拆卸地固定于安装支架上,或者,前置镜头套筒和后置镜头套筒分别与安装支架固定连接。
  6. 如权利要求1所述的镜头组件,其特征在于,所述安装支架具有环状的侧壁,形成中空的腔体,腔体的形状与移动终端的摄像头区域匹配。
  7. 如权利要求1所述的镜头组件,其特征在于,所述安装支架的正面还开设有与移动终端的喇叭位置对应的通孔。
  8. 一种移动终端与镜头组件配合实现平面拍摄的方法,其特征在于,所述方法包括:
    移动终端启动安装在移动终端中的平面拍摄应用程序,所述移动终端安装有如权利要求1至7任一项所述的镜头组件;
    移动终端实时获取移动终端中的陀螺仪的当前状态时间戳、加速度计数值和角速度数值;
    移动终端利用扩展卡尔曼滤波结合加速度计数值和角速度数值,估计得到移动终端到世界坐标系的旋转量;
    控制移动终端的前置摄像头经由镜头组件的前置镜头采集平面视频帧,或者控制移动终端的后置摄像头经由镜头组件的后置镜头采集平面视频帧;
    移动终端同步陀螺仪时间戳与平面视频帧的时间戳;
    移动终端对陀螺仪的状态进行四元数插值获取对应平面视频帧的旋转矩阵;
    移动终端根据当前的旋转矩阵旋转平面视频帧,生成稳定的平面视频帧。
  9. 如权利要求8所述的方法,其特征在于,所述移动终端利用扩展卡尔曼滤波结合加速度计数值和角速度数值,估计得到移动终端到世界坐标系的旋转量具体包括:
    S1031、初始状态旋转量
    Figure PCTCN2018073444-appb-100001
    其中,d 0为初始测得的加速度数值,g为世界坐标系重力矢量;初始过程协方差
    Figure PCTCN2018073444-appb-100002
    S1032、利用角速度数值ω k计算第K时刻的状态转移矩阵Φ(ω k);
    Φ(ω k)=exp(-[ω k·Δt] ×),其中ω k是第K时刻的角速度数值,Δt表示陀螺仪数据的采样时间间隔;
    S1033、计算状态噪声的协方差矩阵Q k,更新状态旋转先验估计量
    Figure PCTCN2018073444-appb-100003
    和过程协方差先验估计矩阵
    Figure PCTCN2018073444-appb-100004
    Figure PCTCN2018073444-appb-100005
    Q k为状态噪声的协方差矩阵;
    Figure PCTCN2018073444-appb-100006
    其中,
    Figure PCTCN2018073444-appb-100007
    是第K-1时刻的状态旋转后验估计量;
    Figure PCTCN2018073444-appb-100008
    其中,
    Figure PCTCN2018073444-appb-100009
    是第K-1时刻的过程协方差后验估计矩阵;
    S1034、由加速度数值d k更新观测量的噪声方差矩阵R k,计算观测转移雅克比矩阵H k,计算当前观测量和估计观测量误差e k
    Figure PCTCN2018073444-appb-100010
    其中,
    Figure PCTCN2018073444-appb-100011
    Figure PCTCN2018073444-appb-100012
    α为加速度变化量的平滑因子,β为加速度模长的影响因子;
    Figure PCTCN2018073444-appb-100013
    其中h为观察函数,h(q,v)=q·g+v k,g世界坐标系下的重力矢量,q为状态量,即世界坐标系到陀螺仪坐标系的旋转量,v k为测量噪声;
    Figure PCTCN2018073444-appb-100014
    S1035、更新第k时刻的最优卡尔曼增益矩阵K k
    Figure PCTCN2018073444-appb-100015
    S1036、根据最优卡尔曼增益矩阵K k和观测量误差e k更新移动终端到世界坐标系的旋转后验估计量
    Figure PCTCN2018073444-appb-100016
    和过程协方差后验估计矩阵
    Figure PCTCN2018073444-appb-100017
    Figure PCTCN2018073444-appb-100018
    Figure PCTCN2018073444-appb-100019
  10. 如权利要求9所述的方法,其特征在于,所述移动终端同步陀螺仪时间戳与平面视频帧的时间戳具体为:
    移动终端同步陀螺仪时间戳与平面视频帧的时间戳,使t k≥t j>t k-1,其中t j是平面视频帧的时间戳,t k为陀螺仪第K帧的时间戳,t k-1为陀螺仪第K-1帧的时间戳。
  11. 如权利要求10所述的方法,其特征在于,所述移动终端对陀螺仪的状态进行四元数插值获取对应平面视频帧的旋转矩阵具体包括:
    移动终端计算邻近陀螺仪时间戳的相对旋转量,
    Figure PCTCN2018073444-appb-100020
    其中,r k为第K时刻的相对旋转量,
    Figure PCTCN2018073444-appb-100021
    Figure PCTCN2018073444-appb-100022
    为第k和k-1时刻的状态后验估计量,即世界坐标系到陀螺仪坐标系的旋转量;
    移动终端进行四元数插值获取平面视频帧到第k帧的相对旋转量,R j=γ·I+(1-γ)·r k,其中,R j为第k帧的相对旋转量,
    Figure PCTCN2018073444-appb-100023
    移动终端计算平面视频帧中第j帧视频的旋转矩阵
    Figure PCTCN2018073444-appb-100024
  12. 如权利要求11所述的方法,其特征在于,所述移动终端根据当前的旋转矩阵旋转平面视频帧,生成稳定的平面视频帧具体包括:
    移动终端把经纬度二维图像上的栅格点映射到球面坐标;
    移动终端遍历单位球上的所有点,利用当前的旋转矩阵对单位球上的所有点进行旋转,生成稳定的平面视频帧;
    其中,利用当前的旋转矩阵对单位球上的所有点进行旋转具体采用以下的公式:
    Figure PCTCN2018073444-appb-100025
    其中,[x,y,z] T表示单位圆旋转之前的球面坐标,[x new,y new,z new] T表示旋转后的球面坐标,Q j表示当前的旋转矩阵,t表示位移向量,t=[0,0,0] T
  13. 一种移动终端与镜头组件配合实现平面拍摄的方法,其特征在于,所述方法包括:
    移动终端启动安装在移动终端中的平面拍摄应用程序,所述移动终端安装有如权利要求1至7任一项所述的镜头组件;
    移动终端实时获取移动终端的当前状态时间戳、加速度计数值和角速度数值;
    移动终端利用扩展卡尔曼滤波结合加速度计数值和角速度数值,估计当前状态的旋转向量;
    移动终端根据当前状态的旋转向量通过罗德里格旋转公式计算得到当前的旋转矩阵;
    控制移动终端的前置摄像头经由镜头组件的前置镜头采集平面视频帧,或者控制移动终端的后置摄像头经由镜头组件的后置镜头采集平面视频帧;
    移动终端根据当前的旋转矩阵旋转平面视频帧,生成稳定的平面视频帧。
  14. 如权利要求13所述的方法,其特征在于,所述移动终端利用扩展卡尔曼滤波结合加速度计数值和角速度数值,估计当前状态的旋转向量具体包括:
    移动终端利用角速度数值计算k时刻的状态转移矩阵F k;利用加速度计数值,结合参考坐标系下重力矢量g和上一状态的旋转矩阵计算当前时刻预测余量
    Figure PCTCN2018073444-appb-100026
    移动终端利用上一状态的估计误差协方差矩阵P k-1|k-1、当前状态的状态转移矩阵F k和过程噪声Q估计当前状态的误差协方差矩阵P k|k-1
    移动终端利用估计的当前状态的误差协方差矩阵P k|k-1、观测矩阵H k和噪声方差矩阵R计算当前状态的最优卡尔曼增益矩阵K k
    移动终端根据当前状态的最优卡尔曼增益矩阵K k和当前时刻预测余量
    Figure PCTCN2018073444-appb-100027
    更新当前状态估计旋转向量
    Figure PCTCN2018073444-appb-100028
  15. 如权利要求14所述的方法,其特征在于,所述移动终端利用角速度数值计算k时刻的状态转移矩阵F k;利用加速度计数值,结合参考坐标系下重力矢量g和上一状态的旋转矩阵计算当前时刻预测余量
    Figure PCTCN2018073444-appb-100029
    具体包括以下步骤:
    移动终端对初始状态转移矩阵、初始预测协方差矩阵和初始观测矩阵进行初始化,其中,初始状态转移矩阵
    Figure PCTCN2018073444-appb-100030
    初始预测协方差矩阵
    Figure PCTCN2018073444-appb-100031
    初始 观测矩阵
    Figure PCTCN2018073444-appb-100032
    移动终端计算k时刻的状态转移矩阵
    Figure PCTCN2018073444-appb-100033
    计算观测信息矩阵
    Figure PCTCN2018073444-appb-100034
    其中,x k-1表示k-1时刻的移动终端的状态估计,x k表示k时刻的移动终端的状态估计,
    Figure PCTCN2018073444-appb-100035
    表示偏微分符号,f表示状态方程函数,x表示移动终端的状态,即三个轴方向上的旋转角度,h表示观测方程函数,
    Figure PCTCN2018073444-appb-100036
    x k-2表示第k-2时刻的移动终端的状态,u k-1表示第k-1时刻的角速度数值,w k-1表示k-1时刻的过程噪声,
    Figure PCTCN2018073444-appb-100037
    表示利用k-2时刻来预测第k-1时刻移动终端的估计状态,x k-1表示第k-1时刻的移动终端的状态,u k表示第k时刻的角速度数值,w k表示第k时刻的过程噪声,
    Figure PCTCN2018073444-appb-100038
    表示利用k-1时刻来预测第k时刻移动终端的估计状态,x k-2=[X k-2,Y k-2,Z k-2] T,其中,X k-2,Y k-2,Z k-2表示第k-2时刻参考系坐标系在X轴,Y轴,Z轴上的旋转角度,x k-1=[X k-1,Y k-1,Z k-1] T,其中,X k-1,Y k-1,Z k-1表示第k-1时刻参考系坐标系在X轴,Y轴,Z轴上的旋转角度,T表示转置;
    移动终端把参考系坐标系下的垂直向下的重力加速度投影到刚体坐标系下,通过公式
    Figure PCTCN2018073444-appb-100039
    计算预测余量
    Figure PCTCN2018073444-appb-100040
    其中,z k为k时刻利用低通滤波进行降噪处理后的加速度计数值,H k是观测信息矩阵,表示观测方程z k=h(x k,g,v k)使用当前估计状态计算的雅可比矩阵,其中,g表示参考坐标系下的垂直向下的重力矢量,g=[0,0,-9.81] T,v k表示为测量误差;
    所述利用上一状态的估计误差协方差矩阵P k-1|k-1、当前状态的状态转移矩阵F k和过程噪声Q估计当前状态的误差协方差矩阵P k|k-1具体为:
    利用公式
    Figure PCTCN2018073444-appb-100041
    计算出的状态预测估计协方差矩阵P k|k-1,其中,P k-1|k-1表示k-1时刻状态的估计协方差矩阵,Q k表示过程噪声的协方差矩阵,
    Figure PCTCN2018073444-appb-100042
    dt表示陀螺仪数据的采样间隔时间,F k表示k时刻的状态转移矩阵,
    Figure PCTCN2018073444-appb-100043
    表示F k的转置;
    所述利用估计的当前状态的误差协方差矩阵P k|k-1、观测矩阵H k和噪声方差矩阵R计算当前状态的最优卡尔曼增益矩阵K k具体包括以下步骤:
    利用状态预测估计协方差矩阵P k|k-1来计算k时刻的最优卡尔曼增益矩阵K k
    Figure PCTCN2018073444-appb-100044
    R表示噪声协方差矩阵,
    Figure PCTCN2018073444-appb-100045
    σ 2表示噪声方差,一般地σ=0.75,H k表示k时刻的观测信息雅克比矩阵,
    Figure PCTCN2018073444-appb-100046
    表示H k的转置;
    所述根据当前状态的最优卡尔曼增益矩阵K k和当前时刻预测余量
    Figure PCTCN2018073444-appb-100047
    更新当前状态估计旋转向量
    Figure PCTCN2018073444-appb-100048
    具体包括以下步骤:
    更新状态估计得到k时刻通过融合加速度计数值和角速度数值得到的当前状态的旋转向量
    Figure PCTCN2018073444-appb-100049
    更新估计协方差矩阵P k|k,P k|k=(I-K k·H k)P k|k-1,其中I是单位矩阵,P k|k就是下一时刻需要的估计误差协方差矩阵P k-1|k-1
  16. 一种移动终端与镜头组件配合实现全景拍摄的方法,其特征在于,所述方法包括:
    移动终端启动安装在移动终端中的全景拍摄应用程序,所述移动终端安装有如权利要求1至7任一项所述的镜头组件;
    移动终端预先建立球形全景数学模型,所述球形全景数学模型以移动终端的前置摄像头和后置摄像头的空间位置的中心对称点为球心,以所述前置摄像头和后置摄像头所能够采集到的视场角分别得到的球冠组合成球面;
    控制移动终端的前置摄像头经由镜头组件的前置镜头采集一幅图像,同时控制移动终端的后置摄像头经由镜头组件的后置镜头采集一幅图像;
    根据前置镜头和后置镜头的镜头成像高度和视场角大小之间的对应关系,将两幅图像映射到球形全景数学模型中;
    对球形全景数学模型中的两幅图像的重叠区域做融合,从而将两幅图像拼接成覆盖了水平和竖直方向所有视角方位的球面全景图像。
  17. 如权利要求16所述的方法,其特征在于,所述根据前置镜头和后置镜头的镜头成像高度和视场角大小之间的对应关系,将两幅图像映射到球形全景数学模型中具体包括:
    将两幅图像中每个位置的像素点根据前置镜头和后置镜头的镜头成像高度和视场角大小之间的对应关系,转化为球形全景数学模型中对应的视场角位置;
    结合每个位置的像素点在图像中相对于圆心的角度,在任意半径的球形全景数学模型上唯一确定像素点在原始图像中的位置;
    在球形全景数学模型上将每个视场角位置填充对应位置的像素数据。
  18. 如权利要求17所述的方法,其特征在于,所述对球形全景数学模型中的两幅图像的重叠区域做融合,从而将两幅图像拼接成覆盖了水平和竖直方向所有视角方位的球面全景图像具体为:
    在球形全景数学模型中的两幅图像的重叠区域,取灰度值最大的点的像素值作为融合后的像素值,从而将两幅图像拼接成覆盖了水平和竖直方向所有视角方位的球面全景图像。
PCT/CN2018/073444 2017-12-22 2018-01-19 移动终端与镜头组件配合实现平面拍摄、全景拍摄的方法和镜头组件 WO2019119597A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201711408354.XA CN107948488A (zh) 2017-12-22 2017-12-22 移动终端与镜头组件配合实现全景拍摄的方法和镜头组件
CN201711408354.X 2017-12-22
CN201810037000.7 2018-01-15
CN201810037000.7A CN108337411B (zh) 2018-01-15 2018-01-15 移动终端与镜头组件配合实现平面拍摄的方法和镜头组件

Publications (1)

Publication Number Publication Date
WO2019119597A1 true WO2019119597A1 (zh) 2019-06-27

Family

ID=66992474

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/073444 WO2019119597A1 (zh) 2017-12-22 2018-01-19 移动终端与镜头组件配合实现平面拍摄、全景拍摄的方法和镜头组件

Country Status (1)

Country Link
WO (1) WO2019119597A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110609973A (zh) * 2019-08-27 2019-12-24 广东艾科技术股份有限公司 一种用于流量测量的卡尔曼滤波方法
CN112396639A (zh) * 2019-08-19 2021-02-23 虹软科技股份有限公司 图像对齐方法
US20210287262A1 (en) * 2020-03-16 2021-09-16 Lyft, Inc. Aligning provider-device axes with transportation-vehicle axes to generate driving-event scores
CN115278086A (zh) * 2022-08-01 2022-11-01 安徽睿极智能科技有限公司 一种陀螺仪电子防抖方法

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120076487A1 (en) * 2010-09-29 2012-03-29 Hon Hai Precision Industry Co., Ltd. Portable electronic device
CN105744132A (zh) * 2016-03-23 2016-07-06 捷开通讯(深圳)有限公司 全景图像拍摄的光学镜头配件
CN106210547A (zh) * 2016-09-05 2016-12-07 广东欧珀移动通信有限公司 一种全景拍摄的方法、装置及***
CN106550192A (zh) * 2016-10-31 2017-03-29 深圳晨芯时代科技有限公司 一种虚拟现实拍摄及显示的方法、***
CN106934772A (zh) * 2017-03-02 2017-07-07 深圳岚锋创视网络科技有限公司 一种全景图像或视频的水平校准方法、***及便携式终端
CN107040694A (zh) * 2017-04-07 2017-08-11 深圳岚锋创视网络科技有限公司 一种全景视频防抖的方法、***及便携式终端
CN107274340A (zh) * 2016-04-08 2017-10-20 北京岚锋创视网络科技有限公司 一种全景图像生成方法及装置
CN206712914U (zh) * 2017-05-03 2017-12-05 深圳市百康兴电子科技有限公司 720度全景拍摄装置
CN206759520U (zh) * 2017-04-21 2017-12-15 深圳市百康兴电子科技有限公司 一种双目摄像头及具有其的手机

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120076487A1 (en) * 2010-09-29 2012-03-29 Hon Hai Precision Industry Co., Ltd. Portable electronic device
CN105744132A (zh) * 2016-03-23 2016-07-06 捷开通讯(深圳)有限公司 全景图像拍摄的光学镜头配件
CN107274340A (zh) * 2016-04-08 2017-10-20 北京岚锋创视网络科技有限公司 一种全景图像生成方法及装置
CN106210547A (zh) * 2016-09-05 2016-12-07 广东欧珀移动通信有限公司 一种全景拍摄的方法、装置及***
CN106550192A (zh) * 2016-10-31 2017-03-29 深圳晨芯时代科技有限公司 一种虚拟现实拍摄及显示的方法、***
CN106934772A (zh) * 2017-03-02 2017-07-07 深圳岚锋创视网络科技有限公司 一种全景图像或视频的水平校准方法、***及便携式终端
CN107040694A (zh) * 2017-04-07 2017-08-11 深圳岚锋创视网络科技有限公司 一种全景视频防抖的方法、***及便携式终端
CN206759520U (zh) * 2017-04-21 2017-12-15 深圳市百康兴电子科技有限公司 一种双目摄像头及具有其的手机
CN206712914U (zh) * 2017-05-03 2017-12-05 深圳市百康兴电子科技有限公司 720度全景拍摄装置

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112396639A (zh) * 2019-08-19 2021-02-23 虹软科技股份有限公司 图像对齐方法
CN110609973A (zh) * 2019-08-27 2019-12-24 广东艾科技术股份有限公司 一种用于流量测量的卡尔曼滤波方法
CN110609973B (zh) * 2019-08-27 2023-09-29 广东艾科技术股份有限公司 一种用于流量测量的卡尔曼滤波方法
US20210287262A1 (en) * 2020-03-16 2021-09-16 Lyft, Inc. Aligning provider-device axes with transportation-vehicle axes to generate driving-event scores
CN115278086A (zh) * 2022-08-01 2022-11-01 安徽睿极智能科技有限公司 一种陀螺仪电子防抖方法
CN115278086B (zh) * 2022-08-01 2024-02-02 安徽睿极智能科技有限公司 一种陀螺仪电子防抖方法

Similar Documents

Publication Publication Date Title
CN107801014B (zh) 一种全景视频防抖的方法、装置及便携式终端
WO2019119597A1 (zh) 移动终端与镜头组件配合实现平面拍摄、全景拍摄的方法和镜头组件
CN105678748B (zh) 三维监控***中基于三维重构的交互式标定方法和装置
WO2020087846A1 (zh) 基于迭代扩展卡尔曼滤波融合惯性与单目视觉的导航方法
Karpenko et al. Digital video stabilization and rolling shutter correction using gyroscopes
CN108846867A (zh) 一种基于多目全景惯导的slam***
JP5769813B2 (ja) 画像生成装置および画像生成方法
CN107833237B (zh) 用于模糊视频中的虚拟对象的方法和设备
WO2018184423A1 (zh) 一种全景视频防抖的方法、***及便携式终端
US8964040B2 (en) High dynamic range image registration using motion sensor data
KR20150013709A (ko) 컴퓨터 생성된 3d 객체들 및 필름 카메라로부터의 비디오 공급을 실시간으로 믹싱 또는 합성하기 위한 시스템
CN108154533A (zh) 一种位置姿态确定方法、装置及电子设备
CN110139031B (zh) 一种基于惯性感知的视频防抖***及其工作方法
CN106525003A (zh) 一种基于双目视觉的姿态测量方法
JPWO2013069048A1 (ja) 画像生成装置および画像生成方法
CN106791360A (zh) 生成全景视频的方法及装置
TW200937348A (en) Calibration method for image capturing device
CN109040525B (zh) 图像处理方法、装置、计算机可读介质及电子设备
CN111899276A (zh) 一种基于双目事件相机的slam方法及***
CN112270702A (zh) 体积测量方法及装置、计算机可读介质和电子设备
WO2020038720A1 (en) Apparatus, method and computer program for detecting the form of a deformable object
CN109688327B (zh) 一种全景视频防抖的方法、装置及便携式终端
Nyqvist et al. A high-performance tracking system based on camera and IMU
JP2021165763A (ja) 情報処理装置、情報処理方法及びプログラム
CN109462717A (zh) 电子稳像方法及终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18891480

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18891480

Country of ref document: EP

Kind code of ref document: A1