CN109922258B - Electronic image stabilizing method and device for vehicle-mounted camera and readable storage medium - Google Patents


Info

Publication number
CN109922258B
Authority
CN
China
Prior art keywords
image
determining
affine transformation
moving object
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910147723.7A
Other languages
Chinese (zh)
Other versions
CN109922258A (en)
Inventor
范锦昌
邓丹
冯昊
谭深
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Fabu Technology Co Ltd
Original Assignee
Hangzhou Fabu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Fabu Technology Co Ltd filed Critical Hangzhou Fabu Technology Co Ltd
Priority to CN201910147723.7A priority Critical patent/CN109922258B/en
Publication of CN109922258A publication Critical patent/CN109922258A/en
Application granted granted Critical
Publication of CN109922258B publication Critical patent/CN109922258B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Studio Devices (AREA)

Abstract

The invention provides an electronic image stabilization method and device for a vehicle-mounted camera, and a readable storage medium. A moving object identification region of a first reference image acquired at the previous moment is determined, and a plurality of feature points are determined in the region of the first reference image outside the moving object identification region. Reference points corresponding to the feature points are acquired from a second reference image acquired at the current moment. A first affine transformation matrix is determined according to the feature points and their corresponding reference points. A compensation parameter is determined from the difference between the observed quantity and the state quantity derived from the first affine transformation matrix. A second affine transformation matrix is then determined from the compensation parameter and used to transform the second reference image into the stable image for the current moment. By removing the feature points inside the moving object identification region, i.e., the region where moving objects in the image are located, the influence of moving objects on motion parameter estimation is avoided and image stabilization accuracy is improved.

Description

Electronic image stabilizing method and device for vehicle-mounted camera and readable storage medium
Technical Field
The present invention relates to digital image processing technologies, and in particular, to an electronic image stabilization method and apparatus for a vehicle-mounted camera, and a readable storage medium.
Background
With the rapid development of driver-assistance and autonomous-driving technologies, the vehicle-mounted camera with an image processing function has become an indispensable component of such systems. In general, the image captured by a vehicle-mounted camera shakes because of the jolting produced while the vehicle runs, which in turn degrades the accuracy of positioning results derived from the image in assisted and autonomous driving. Ensuring the stability of the image captured by the vehicle-mounted camera is therefore important.
Electronic image stabilization is a method of estimating and compensating for the jitter between successive frames of a camera image. In existing electronic image stabilization methods, corner points in a reference frame are generally used as feature points and the corresponding feature points in the current frame are searched for; the motion between the two frames is thereby estimated, the jitter component is obtained through Kalman filtering, and the current frame is compensated accordingly.
However, a large number of moving objects such as pedestrians and vehicles appear in images captured by a vehicle-mounted camera. Because the conventional electronic image stabilization method relies on comparing feature points in the reference frame with the corresponding feature points in the current frame, it does not apply well to images containing many moving objects, which causes a reliability problem when such images are processed.
Disclosure of Invention
In view of the above-mentioned technical problems, the present invention provides an electronic image stabilization method and apparatus for a vehicle-mounted camera, and a readable storage medium.
In one aspect, the invention provides an electronic image stabilization method for a vehicle-mounted camera, which includes:
determining a moving object identification region of a first reference image acquired at the previous moment, and determining a plurality of feature points of a region outside the moving object identification region of the first reference image; the moving object identification area is an area where a moving object in the image is located;
acquiring reference points corresponding to the characteristic points from a second reference image acquired at the current moment;
determining a first affine transformation matrix between the first reference image and the second reference image according to the pixel coordinates of the characteristic points and the pixel coordinates of the corresponding reference points;
accumulating values of all elements in the first affine transformation matrix at all moments to obtain observed quantities, filtering the observed quantities to obtain state quantities, and determining compensation parameters according to difference values of the state quantities and the observed quantities;
and determining a second affine transformation matrix according to the compensation parameters, and transforming the second reference image by using the second affine transformation matrix to obtain a stable image at the current moment.
In an alternative embodiment, the determining a first affine transformation matrix between the first reference image and the second reference image according to the pixel coordinates of each feature point and the pixel coordinates of each corresponding reference point includes:
adjusting the pixel coordinates of each reference point according to the traveling speed and the steering angular speed of the vehicle;
and determining a first affine transformation matrix between the first reference image and the second reference image according to the pixel coordinates of the characteristic points and the pixel coordinates of the reference points after adjustment.
In an alternative embodiment, the adjusting the pixel coordinates of the reference points according to the traveling speed and the steering angular speed of the vehicle respectively includes:
respectively adjusting the pixel coordinates of each reference point by adopting a formula (1);
x′ = x − v·x_v − ω·x_ω
y′ = y − v·y_v          (1)
wherein x and y represent the pixel abscissa and pixel ordinate of the reference point, respectively; x′ and y′ represent the pixel abscissa and pixel ordinate of the reference point after adjustment, respectively; v and ω represent the traveling speed and the steering angular speed of the vehicle at the current moment, respectively; x_v and y_v represent the differences between the pixel abscissas and between the pixel ordinates, respectively, of the same corresponding pixel point on two images at adjacent moments when the vehicle moves steadily at unit traveling speed; and x_ω represents the difference along the horizontal-axis direction between the pixel coordinates of the same corresponding pixel point on two images at adjacent moments when the vehicle moves steadily at unit steering angular speed.
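As an illustrative sketch only, the adjustment of formula (1) could be implemented as follows; the function name and the calibration constants x_v, y_v and x_ω are hypothetical, and the subtraction convention assumes the adjustment removes the image motion predicted from the vehicle state:

```python
import numpy as np

def adjust_reference_points(points, v, omega, x_v, y_v, x_omega):
    """Remove the image motion predicted from the vehicle's traveling speed v
    and steering angular speed omega from each (x, y) reference point.
    x_v, y_v and x_omega are the per-unit-speed pixel offsets described in
    the text (calibration constants; the values used below are assumptions)."""
    pts = np.asarray(points, dtype=np.float64).copy()
    pts[:, 0] -= v * x_v + omega * x_omega  # x' = x - v*x_v - omega*x_omega
    pts[:, 1] -= v * y_v                    # y' = y - v*y_v
    return pts

adjusted = adjust_reference_points([[100.0, 50.0]], v=2.0, omega=0.5,
                                   x_v=1.0, y_v=0.5, x_omega=4.0)
# x' = 100 - 2*1 - 0.5*4 = 96,  y' = 50 - 2*0.5 = 49
```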
In an alternative embodiment, the determining the moving object identification region of the first reference image acquired at the previous time includes:
generating a first reference binary image according to the determined moving object identification region of the first reference image, wherein the pixel value of the first reference binary image in the moving object identification region is 0, and the pixel value in the non-moving object identification region is 1;
correspondingly, the determining a plurality of feature points of a region outside the moving object identification region of the first reference image includes:
determining a Shi-Tomasi corner point of a first reference image;
and determining the pixel value of each Shi-Tomasi corner point corresponding to the first reference binary image, and reserving the Shi-Tomasi corner point with the corresponding pixel value of 1 as the plurality of feature points.
In an alternative embodiment, the determining the moving object identification region of the first reference image acquired at the previous time includes:
determining a moving object identification region of a stable image at the last moment through a deep learning algorithm;
generating a binary stable image at the previous moment according to the moving object identification area of the stable image at the previous moment; the pixel value corresponding to the moving object identification area in the binary stable image is 0, and the pixel value corresponding to the non-moving object identification area is 1;
and carrying out affine transformation on the binary stable image according to an inverse matrix of a second affine transformation matrix at the previous moment to obtain the first reference binary image.
In an optional implementation manner, the acquiring, from the second reference image acquired at the current time, a reference point corresponding to each feature point includes:
and determining the pixel coordinates of the characteristic points in the second reference image by adopting a pyramid iteration Lucas-Kanade algorithm, and obtaining corresponding reference points.
In an alternative embodiment, the determining a first affine transformation matrix between the first reference image and the second reference image according to the pixel coordinates of each feature point and the pixel coordinates of each corresponding reference point includes:
and processing the pixel coordinates of each characteristic point and the corresponding pixel coordinates of each reference point by adopting a RANSAC algorithm, and calculating to obtain the first affine transformation matrix.
In an optional implementation manner, the processing, by using the RANSAC algorithm, the pixel coordinates of each feature point and the pixel coordinates of each corresponding reference point, and calculating to obtain the first affine transformation matrix includes:
randomly selecting a preset number of feature points as feature points to be processed, and taking reference points corresponding to the feature points to be processed as reference points to be processed;
judging whether each feature point to be processed and each reference point to be processed meet a preset position relationship;
if so, processing each feature point to be processed and each reference point to be processed by using an icvGetTMatrix function to obtain a third affine transformation matrix with constraint; otherwise, returning to the step of randomly selecting a preset number of feature points as the feature points to be processed;
performing affine transformation, by using the third affine transformation matrix, on the remaining feature points other than the feature points to be processed, and calculating the Euclidean distance between each affine-transformed remaining feature point and its corresponding reference point;
determining the number of remaining feature points whose Euclidean distance is smaller than a preset distance threshold, and judging whether this number is greater than or equal to a preset percentage of the number of all feature points;
if so, processing the residual feature points by using the icvGetTMatrix function to obtain the first affine transformation matrix; otherwise, returning to the step of randomly selecting the preset number of feature points as the feature points to be processed.
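The RANSAC procedure above (random sampling, constraint checking, constrained-affine fitting, and inlier counting) can be sketched in pure NumPy. This is a minimal illustration, not the patented implementation: `fit_constrained_affine` is an illustrative stand-in for the icvGetTMatrix function, and the iteration count, distance threshold, and inlier ratio are assumed values.

```python
import numpy as np

def fit_constrained_affine(src, dst):
    """Least-squares fit of the constrained (4-parameter) affine model
    x' = a*x - b*y + tx,  y' = b*x + a*y + ty, where a = k*cos(alpha)
    and b = k*sin(alpha). Stands in for the icvGetTMatrix step."""
    x, y = src[:, 0], src[:, 1]
    one, zero = np.ones_like(x), np.zeros_like(x)
    A = np.empty((2 * len(src), 4))
    A[0::2] = np.stack([x, -y, one, zero], axis=1)  # rows predicting x'
    A[1::2] = np.stack([y, x, zero, one], axis=1)   # rows predicting y'
    a, b, tx, ty = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)[0]
    return np.array([[a, -b, tx], [b, a, ty]])

def ransac_constrained_affine(src, dst, iters=200, thresh=3.0,
                              min_ratio=0.5, seed=0):
    """Steps 1031-1034 in miniature: sample 3 point pairs, skip degenerate
    (near-collinear) samples, fit a third affine matrix, count pairs whose
    Euclidean reprojection error is below `thresh`, and finally refit on
    the inliers of the best hypothesis."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        i = rng.choice(len(src), size=3, replace=False)
        u, w = src[i[1]] - src[i[0]], src[i[2]] - src[i[0]]
        if abs(u[0] * w[1] - u[1] * w[0]) < 1e-6:  # collinear sample: resample
            continue
        M = fit_constrained_affine(src[i], dst[i])
        err = np.linalg.norm(src @ M[:, :2].T + M[:, 2] - dst, axis=1)
        inliers = err < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    if best is None or best.sum() < min_ratio * len(src):
        return None  # too few inliers: no reliable first affine matrix
    return fit_constrained_affine(src[best], dst[best])
```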
In an optional implementation manner, the filtering the observed quantity to obtain a state quantity, and determining a compensation parameter according to a difference between the state quantity and the observed quantity includes:
and determining the optimal estimation of the state quantity at the current moment by using a standard recursion formula of Kalman filtering, and determining the compensation parameter of the second reference image according to the difference value of the optimal estimation of the state quantity at the current moment and the observed quantity at the current moment.
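As a sketch of the step above for a single accumulated motion parameter: the patent only specifies that the standard Kalman recursion is used, so the random-walk state model and the noise settings q and r below are assumptions, as are the observation values.

```python
class Kalman1D:
    """Minimal scalar Kalman filter (random-walk state model) applied to one
    accumulated motion parameter; the filtered state plays the role of the
    intentional motion, and q / r are hypothetical noise settings."""
    def __init__(self, q=1e-4, r=0.25):
        self.x = 0.0   # state estimate (smoothed, intentional motion)
        self.p = 1.0   # estimate covariance
        self.q = q     # process noise
        self.r = r     # measurement noise

    def update(self, z):
        # Predict step (state model: x_k = x_{k-1} + noise).
        self.p += self.q
        # Update step with the observed accumulated parameter z.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

# Compensation at each moment: the difference between the filtered state
# quantity and the raw observed quantity, i.e. the estimated jitter.
kf = Kalman1D()
observations = [0.0, 1.2, 0.8, 1.1, 0.9]  # accumulated tx over time (made up)
compensations = [kf.update(z) - z for z in observations]
```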
On the other hand, the invention provides an electronic image stabilizing device of a vehicle-mounted camera, which comprises:
an acquisition unit, configured to determine a moving object identification region of a first reference image acquired at the previous moment and determine a plurality of feature points of the region outside the moving object identification region of the first reference image, the moving object identification region being the region where a moving object in the image is located; the acquisition unit is further configured to acquire a reference point corresponding to each feature point from a second reference image acquired at the current moment;
an estimating unit configured to determine a first affine transformation matrix between the first reference image and the second reference image according to the pixel coordinates of each feature point and the corresponding pixel coordinates of each reference point;
the filtering unit is used for accumulating values of all elements in the first affine transformation matrix at all times to obtain observed quantities, filtering the observed quantities to obtain state quantities, and determining compensation parameters according to difference values of the state quantities and the observed quantities;
and the compensation unit is used for determining a second affine transformation matrix according to the compensation parameters and transforming the second reference image by using the second affine transformation matrix to obtain a stable image at the current moment.
In another aspect, the present invention provides an electronic image stabilization device for a vehicle-mounted camera, including: a memory, a processor, and a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any of the preceding claims.
In a final aspect, the invention provides a readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement the method described in any of the preceding items.
The invention provides an electronic image stabilization method and device for a vehicle-mounted camera, and a readable storage medium. A moving object identification region of a first reference image acquired at the previous moment is determined, and a plurality of feature points are determined in the region of the first reference image outside the moving object identification region, the moving object identification region being the region where a moving object in the image is located. Reference points corresponding to the feature points are acquired from a second reference image acquired at the current moment. A first affine transformation matrix between the first reference image and the second reference image is determined according to the pixel coordinates of the feature points and the pixel coordinates of the corresponding reference points. The values of the elements of the first affine transformation matrix are accumulated over all moments to obtain observed quantities, the observed quantities are filtered to obtain state quantities, and compensation parameters are determined from the differences between the state quantities and the observed quantities. A second affine transformation matrix is then determined from the compensation parameters and used to transform the second reference image into the stable image for the current moment. By removing the feature points inside the moving object identification region, the influence of moving objects on motion parameter estimation is avoided, and image stabilization accuracy is improved.
Drawings
FIG. 1 is a schematic diagram of a network architecture on which the present invention is based;
fig. 2 is a schematic flowchart of an electronic image stabilization method for a vehicle-mounted camera according to an exemplary embodiment of the present invention;
fig. 3 is a schematic flowchart of an electronic image stabilization method for a vehicle-mounted camera according to a second embodiment of the present invention;
fig. 4 is a schematic diagram illustrating adjustment of pixel coordinates of a reference point in a second reference image in the electronic image stabilization method for the vehicle-mounted camera according to the second embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic image stabilizing device of a vehicle-mounted camera according to a third embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of an electronic image stabilizing apparatus of a vehicle-mounted camera according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions in the examples of the present invention will be clearly and completely described below with reference to the accompanying drawings in the examples of the present invention.
With the rapid development of driver-assistance and autonomous-driving technologies, the vehicle-mounted camera with an image processing function has become an indispensable component of such systems. In general, the image captured by a vehicle-mounted camera shakes because of the jolting produced while the vehicle runs, which in turn degrades the accuracy of positioning results derived from the image in assisted and autonomous driving. Ensuring the stability of the image captured by the vehicle-mounted camera is therefore important.
Electronic image stabilization is a method of estimating and compensating for the jitter between successive frames of a camera image. In existing electronic image stabilization methods, corner points in a reference frame are generally used as feature points and the corresponding feature points in the current frame are searched for; the motion between the two frames is thereby estimated, the jitter component is obtained through Kalman filtering, and the current frame is compensated accordingly.
However, a large number of moving objects such as pedestrians and vehicles may appear in images captured by a vehicle-mounted camera, while conventional electronic image stabilization methods rely on comparing feature points in the reference frame with the corresponding feature points in the current frame. For example, the invention patent with application number 201110178881.2 discloses an "electronic image stabilization method based on feature matching", which takes corner points in the reference frame as feature points and searches for the corresponding feature points in the current frame, thereby estimating the motion between the two frames; the jitter component is obtained through Kalman filtering, and the current frame is compensated accordingly. The invention patent with application number 201710563620.X discloses an "electronic image stabilization method based on improved KLT and Kalman filtering", in which corner points are detected with the Shi-Tomasi algorithm, corner matches are obtained through an optical flow algorithm, motion is estimated by the least-squares method, intentional motion and unintentional jitter are separated by a Kalman filter, and finally the motion of the image is compensated through an image affine transformation model. However, when performing image stabilization with feature points including corner points, neither of these existing methods considers the influence of feature points belonging to moving objects on the image stabilization parameters, which causes a reliability problem.
In order to solve the problems, the invention provides an electronic image stabilizing method and device for a vehicle-mounted camera and a readable storage medium. Fig. 1 is a schematic diagram of a network architecture on which the present invention is based, and as shown in fig. 1, the network architecture on which the present invention is based may include a vehicle 1, a vehicle-mounted camera 2, and an electronic image stabilization device 3 of the vehicle-mounted camera; wherein a network may be used to provide a medium for communication links between the vehicle 1, the onboard camera 2 and the electronic image stabilization device 3 of the onboard camera, the network may comprise various connection types, such as wired, wireless communication links or fiber optic cables, etc.
It should be noted that the vehicle-mounted camera 2 is mounted on the vehicle 1 and may interact with the vehicle-mounted computer of the vehicle 1 to obtain corresponding driving information of the vehicle 1 or a trigger control instruction from a user, and may cooperate with the vehicle-mounted computer to implement a driver-assistance function and/or an autonomous-driving function of the vehicle 1.
The onboard camera 2 is implemented on hardware, and includes, but is not limited to, an electronic device with a shooting function, such as a smart phone, a tablet computer, a laptop portable computer, a portable shooting device, an electronic navigation device, a car recorder, and the like.
The electronic image stabilization device 3 of the vehicle-mounted camera can be implemented by hardware or software, and can interact with the vehicle-mounted camera 2 through the network to perform electronic image stabilization processing on the image acquired by the vehicle-mounted camera.
When the electronic image stabilization device 3 of the vehicle-mounted camera is hardware, the electronic image stabilization device includes, but is not limited to, an electronic device with a logical operation processing function, such as a smart phone, a tablet computer, a laptop computer, and the like. When the electronic image stabilization device 3 of the vehicle-mounted camera is software, it can be installed in the electronic equipment listed above, and particularly, when it is software, the software form of the electronic image stabilization device 3 of the vehicle-mounted camera can be software installed in the vehicle-mounted camera 2, and also can be software installed in a vehicle-mounted computer of the vehicle 1. Furthermore, the electronic image stabilization device 3 of the onboard camera may also be implemented as a plurality of software or software modules (for example to provide distributed services), or as a single software or software module. And is not particularly limited herein.
For convenience of illustration, the electronic image stabilization device 3 of the onboard camera in fig. 1 is in the form of software, and is software installed in the onboard camera 2.
The execution subject of the electronic image stabilization method of the vehicle-mounted camera provided by this example is an electronic image stabilization device of the vehicle-mounted camera, such as the electronic image stabilization device 3 of the vehicle-mounted camera shown in fig. 1.
The method aims to solve the image stabilization reliability problem that arises because existing electronic image stabilization methods do not consider the influence of feature points belonging to moving objects on the image stabilization parameters when performing image stabilization with feature points including corner points. The scheme provided by this example therefore determines a moving object identification region, i.e. the region where moving objects are located in the image, and removes the corner points inside that region, so that the compensation parameters and the second affine transformation matrix subsequently obtained from the corner points are more reliable, and the resulting stable image is in turn more reliable.
Fig. 2 is a schematic flowchart of a flow of an electronic image stabilization method of a vehicle-mounted camera according to an example of the present invention, and as shown in fig. 2, the electronic image stabilization method of the vehicle-mounted camera includes:
step 101, determining a moving object identification region of a first reference image acquired at the previous moment, and determining a plurality of feature points of a region outside the moving object identification region of the first reference image.
Firstly, the electronic image stabilization device of the vehicle-mounted camera determines a moving object identification region of a first reference image acquired at the previous moment, and determines a plurality of feature points of the region outside the moving object identification region of the first reference image. The moving object identification region is the region where a moving object in the image is located; correspondingly, the region outside the moving object identification region is the region occupied by non-moving objects in the image. A moving object is an object in the image capable of movement, such as a pedestrian, a traveling motor vehicle, or a non-motor vehicle. The moving object identification region of the first reference image can be determined by an existing image recognition algorithm.
In this example, in order to determine a plurality of feature points located in the region outside the moving object identification region of the first reference image, after the moving object identification region of the first reference image is determined, a step of generating a first reference binary image may further be included, wherein the pixel value of the first reference binary image is 0 in the moving object identification region and 1 in the non-moving-object identification region. Subsequently, the Shi-Tomasi corner points of the first reference image may be determined; the pixel value corresponding to each Shi-Tomasi corner point is determined in the first reference binary image, and the Shi-Tomasi corner points whose corresponding pixel value is 1 are retained as the plurality of feature points.
Specifically, for the reliability of electronic image stabilization, and to avoid the influence of moving objects on the image stabilization parameters, this example may obtain all the Shi-Tomasi corner points in the first reference image based on the goodFeaturesToTrack function in OpenCV. Subsequently, the value corresponding to each Shi-Tomasi corner point is looked up in the first reference binary image in order to judge whether that corner point falls in the moving object identification region of the first reference image. If the value is 0, the Shi-Tomasi corner point corresponds to the moving object identification region of the first reference image, and the corner point is not retained; if the value is 1, the Shi-Tomasi corner point corresponds to the non-moving-object identification region, and the corner point is retained. In this manner, a plurality of feature points located in the region outside the moving object identification region of the first reference image are obtained.
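A minimal sketch of this filtering step; the function name and the toy mask are illustrative only, and in practice the corner list would come from OpenCV's goodFeaturesToTrack:

```python
import numpy as np

def filter_corners_by_mask(corners, binary_mask):
    """Keep only corners whose value in the first reference binary image is 1,
    i.e. corners outside the moving object identification region. `corners`
    holds (x, y) pixel coordinates; the mask holds 0 inside moving-object
    regions and 1 elsewhere."""
    corners = np.asarray(corners)
    xs = corners[:, 0].astype(int)
    ys = corners[:, 1].astype(int)
    keep = binary_mask[ys, xs] == 1  # mask is indexed (row=y, col=x)
    return corners[keep]

# Toy 4x4 binary image: a moving object occupies the left half (value 0).
mask = np.ones((4, 4), dtype=np.uint8)
mask[:, :2] = 0
corners = [(0, 1), (3, 2), (1, 3), (2, 0)]
static_corners = filter_corners_by_mask(corners, mask)  # keeps (3,2), (2,0)
```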
Of course, in order to further improve image stabilization efficiency, in this example the first reference binary image may also be obtained directly from the moving object identification region of the stable image at the previous moment and the inverse matrix of the second affine transformation matrix obtained at the previous moment, so as to reduce the time consumed by image stabilization. Specifically, the moving object identification region of the stable image at the previous moment is first determined through a deep learning algorithm, and a binary stable image at the previous moment is then generated from that region; in this binary stable image, the value of the moving object identification region is 0 and the value of the remaining region is 1. The inverse matrix of the second affine transformation matrix at the previous moment is then determined, and affine transformation is performed on the binary stable image by using this inverse matrix, yielding the first reference binary image of the first reference image. With this processing, the total time consumed by the electronic image stabilization algorithm is reduced from 90 ms to 5 ms, effectively improving electronic image stabilization efficiency.
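The inverse-transform step can be sketched as follows. The nearest-neighbour warp is a simplified stand-in for what cv2.warpAffine would do, and the border fill value of 1 (treating out-of-view pixels as non-moving region) is an assumption:

```python
import numpy as np

def invert_affine(M):
    """Invert a 2x3 affine matrix, returning the 2x3 matrix of the inverse
    transform (used to map the binary stable image back onto the first
    reference image)."""
    A, t = M[:, :2], M[:, 2]
    A_inv = np.linalg.inv(A)
    return np.hstack([A_inv, (-A_inv @ t)[:, None]])

def warp_binary_nn(mask, M, fill=1):
    """Nearest-neighbour affine warp by inverse mapping: each output pixel
    (x, y) takes the value of source pixel M^{-1} @ (x, y, 1). Coordinates
    that fall outside the source get `fill` (assumed non-moving)."""
    h, w = mask.shape
    M_inv = invert_affine(M)
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx, sy = np.rint(M_inv @ coords).astype(int)
    out = np.full(h * w, fill, dtype=mask.dtype)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ok] = mask[sy[ok], sx[ok]]
    return out.reshape(h, w)

# A toy binary stable image with a "moving object" (0s) in the left half
# warps back unchanged under the identity transform.
mask = np.ones((4, 4), dtype=np.uint8)
mask[:, :2] = 0
same = warp_binary_nn(mask, np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]))
```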
And 102, acquiring reference points corresponding to the characteristic points from a second reference image acquired at the current moment.
And 103, determining a first affine transformation matrix between the first reference image and the second reference image according to the pixel coordinates of the characteristic points and the corresponding pixel coordinates of the reference points.
Specifically, in steps 102 and 103, a pyramidal iterative Lucas-Kanade algorithm may first be adopted to determine the pixel coordinates of each feature point in the second reference image, thereby obtaining the corresponding reference points. The electronic image stabilization device of the vehicle-mounted camera may perform the pyramidal iterative Lucas-Kanade algorithm by using the calcOpticalFlowPyrLK function in OpenCV to obtain the reference point corresponding to each feature point in the second reference image.
Subsequently, a first affine transformation matrix between the first reference image and the second reference image may be determined.
The first affine transformation matrix describes a transformation relation when the first reference image is transformed to the second reference image, namely, a transformation relation between two adjacent time images.
Exemplarily, the first affine transformation matrix T1_t is a constrained affine transformation matrix composed of four motion parameters: lateral displacement t_x, longitudinal displacement t_y, rotation angle α, and scaling factor k. That is to say,

T1_t = [ k·cos α   −k·sin α   t_x ]
       [ k·sin α    k·cos α   t_y ]    formula (1)
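The four-parameter constrained matrix can be constructed and applied as follows. This is a minimal sketch assuming the standard similarity-transform layout (a 2x3 matrix acting on homogeneous pixel coordinates); the function names are illustrative, not from the patent.

```python
import math

def constrained_affine(tx, ty, alpha, k):
    """Build the 2x3 constrained affine (similarity) matrix from the four
    motion parameters: translation (tx, ty), rotation alpha, scale k."""
    c, s = k * math.cos(alpha), k * math.sin(alpha)
    return [[c, -s, tx],
            [s,  c, ty]]

def apply_affine(T, x, y):
    """Map pixel (x, y) through a 2x3 affine matrix."""
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])
```

With k = 1 and alpha = 0 this reduces to a pure translation, which is why the four parameters can be read directly off the matrix entries.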
wherein the first affine transformation matrix is obtainable using the RANSAC algorithm.
Exemplarily, the processing of the pixel coordinates of each feature point and the pixel coordinates of each corresponding reference point by using a RANSAC algorithm to obtain the first affine transformation matrix by calculation includes:
step 1031, randomly selecting a preset number of feature points as feature points to be processed, and taking reference points corresponding to the feature points to be processed as reference points to be processed. Illustratively, the predetermined number may be 3.
And 1032, judging whether each feature point to be processed and each reference point to be processed meet a preset position relationship.
If yes, go to step 1033; if not, return to step 1031.
Illustratively, the preset positional relationship includes a distance relationship and a relative positional relationship. That is, step 1033 is executed only when none of the following four undesirable positional relationships occurs: the feature points to be processed are too close to one another, the reference points to be processed are too close to one another, the feature points to be processed lie on the same straight line, or the reference points to be processed lie on the same straight line; when any of these four positional relationships occurs, the process returns to step 1031.
And step 1033, processing each feature point to be processed and each reference point to be processed by using an icvGetTMatrix function to obtain a third affine transformation matrix with constraint.
Step 1034, performing affine transformation on the remaining feature points except the feature points to be processed by using the third affine transformation matrix, and calculating the Euler distances between each remaining feature point after affine transformation and the corresponding reference point.
Step 1035, determining the number of remaining feature points for which the Euler distance is less than a preset distance threshold, and judging whether the number is greater than or equal to a preset percentage of the number of all feature points.
If yes, go to step 1036; if not, return to step 1031.
Illustratively, the distance threshold is 3 and the preset percentage is 80%.
Step 1036, processing the remaining feature points by using the icvGetTMatrix function to obtain the first affine transformation matrix.
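The RANSAC loop of steps 1031 to 1036 can be sketched as follows. Note the assumptions: icvGetTMatrix is an OpenCV-internal routine, so a closed-form least-squares similarity fit is substituted in its place; the degenerate-sample check of step 1032 is omitted for brevity; the 3-pixel distance threshold and 80% inlier ratio follow the text.

```python
import math
import random

def fit_similarity(src, dst):
    """Least-squares constrained affine (similarity) fit between point lists.

    Solves x' = a*x - b*y + tx, y' = b*x + a*y + ty, with a = k*cos(alpha)
    and b = k*sin(alpha); used here in place of OpenCV's icvGetTMatrix.
    """
    n = len(src)
    mx = sum(p[0] for p in src) / n
    my = sum(p[1] for p in src) / n
    mu = sum(p[0] for p in dst) / n
    mv = sum(p[1] for p in dst) / n
    sxx = sum((x - mx) ** 2 + (y - my) ** 2 for x, y in src)
    a = sum((x - mx) * (u - mu) + (y - my) * (v - mv)
            for (x, y), (u, v) in zip(src, dst)) / sxx
    b = sum((x - mx) * (v - mv) - (y - my) * (u - mu)
            for (x, y), (u, v) in zip(src, dst)) / sxx
    tx = mu - a * mx + b * my
    ty = mv - b * mx - a * my
    return [[a, -b, tx], [b, a, ty]]

def ransac_similarity(src, dst, iters=200, thresh=3.0, ratio=0.8, seed=0):
    """RANSAC over random 3-point samples, following steps 1031-1036."""
    rng = random.Random(seed)
    n = len(src)
    for _ in range(iters):
        idx = rng.sample(range(n), 3)  # step 1031: random minimal sample
        T = fit_similarity([src[i] for i in idx], [dst[i] for i in idx])
        inliers = []
        for i in range(n):  # step 1034/1035: count points within threshold
            x, y = src[i]
            u = T[0][0] * x + T[0][1] * y + T[0][2]
            v = T[1][0] * x + T[1][1] * y + T[1][2]
            if math.hypot(u - dst[i][0], v - dst[i][1]) < thresh:
                inliers.append(i)
        if len(inliers) >= ratio * n:
            # step 1036: refit on the accepted points
            return fit_similarity([src[i] for i in inliers],
                                  [dst[i] for i in inliers])
    return None
```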
And 104, accumulating values of all elements in the first affine transformation matrix at all times to obtain observed quantities, filtering the observed quantities to obtain state quantities, and determining compensation parameters according to difference values of the state quantities and the observed quantities.
Wherein, the observed quantity is obtained by calculation according to the following formula (2):
S_t = S_{t-1} + (t_x, t_y, α, k)^T    formula (2)
wherein S_t and S_{t-1} respectively represent the observed quantity at the current time and the observed quantity at the previous time. In other words, the observed quantity at the current time may be described as the sum of the observed quantity at the previous time and the four parameters in the first affine transformation matrix obtained at the current time.
Subsequently, an optimal estimate of the state quantity at the current time may be determined using the standard recurrence formulas of Kalman filtering, during which the corresponding parameters in the standard recurrence formulas may be determined based on the following state equation (formula (3)) and observation equation (formula (4)):

x_t = I_{4×4} x_{t-1} + Q^2 I_{4×4}    formula (3)

S_t = I_{4×4} x_t + R^2 I_{4×4}    formula (4)

wherein x_t and x_{t-1} respectively represent the state quantity at the current time and the state quantity at the previous time; I_{4×4} is an identity matrix; and Q^2 and R^2 respectively represent the state noise covariance and the observed noise covariance. Further, in this example, the observed noise covariance may be obtained by recording the observed quantity over a period of time and calculating the variance of the observed quantity over that period, and the state noise covariance may be chosen to be 10^-4. Of course, the observed noise covariance and the state noise covariance may also be determined empirically, which is not limited herein.
The compensation parameter A_t of the second reference image can be expressed by formula (5):

A_t = x̂_t − S_t    formula (5)

wherein x̂_t is the optimal estimate of the state quantity at the current time, namely the state quantity at the current time after Kalman filtering, and S_t is the observed quantity at the current time. That is, in formula (5) the compensation parameter of the second reference image is determined from the difference between the optimal estimate of the state quantity at the current time and the observed quantity at the current time.
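The accumulate-filter-subtract pipeline above can be sketched per motion parameter with a scalar Kalman filter, since the state and observation equations use identity dynamics. This is a minimal sketch: the initial state, initial covariance, and the default noise values below are illustrative assumptions, not values fixed by the patent.

```python
class ScalarKalman:
    """1-D Kalman filter with identity dynamics: x_t = x_{t-1} + w, z_t = x_t + v."""
    def __init__(self, q=1e-4, r=0.25, x0=0.0, p0=1.0):
        self.q, self.r = q, r      # state / observation noise covariances
        self.x, self.p = x0, p0    # state estimate and its covariance
    def update(self, z):
        self.p += self.q                 # predict step
        g = self.p / (self.p + self.r)   # Kalman gain
        self.x += g * (z - self.x)       # correct with observation z
        self.p *= 1.0 - g
        return self.x

def compensation(observed_params, kalman_filters):
    """A_t = smoothed - observed, one entry per motion parameter (formula (5))."""
    return [kf.update(s) - s for kf, s in zip(kalman_filters, observed_params)]
```

In use, the observed quantity S_t would first be accumulated per formula (2); `compensation` then yields the four-element A_t that drives the second affine transformation matrix.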
And 105, determining a second affine transformation matrix according to the compensation parameters, and transforming the second reference image by using the second affine transformation matrix to obtain a stable image at the current moment.
Specifically, the second affine transformation matrix represents a transformation relationship between the second reference image acquired at the current time and the stable image at the current time.
The electronic image stabilization device of the vehicle-mounted camera finally determines the second affine transformation matrix according to the compensation parameters A_t = (Δt_x, Δt_y, Δα, Δk)^T. The second affine transformation matrix can be expressed as

T2_t = [ Δk·cos Δα   −Δk·sin Δα   Δt_x ]
       [ Δk·sin Δα    Δk·cos Δα   Δt_y ]
And finally, transforming the second reference image by using the second affine transformation matrix to obtain a stable image at the current moment.
The coordinates (x_i, y_i) of each pixel point on the second reference image at the current time can be converted into new coordinates (x'_i, y'_i) according to the following formula (6), thereby obtaining the stable image at the current time:

(x'_i, y'_i)^T = T2_t · (x_i, y_i, 1)^T    formula (6)
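The final warp can be sketched per pixel as follows. This assumes the second affine transformation matrix shares the constrained (similarity) form of the first one and is built directly from the compensation parameters; both assumptions are reconstructions, not statements from the patent.

```python
import math

def stabilize_pixel(x, y, comp):
    """Map a pixel of the second reference image to the stable image.

    comp = (dtx, dty, dalpha, dk) are the compensation parameters; the
    second affine transformation matrix is assumed to have the same
    constrained form as the first one.
    """
    dtx, dty, dalpha, dk = comp
    c, s = dk * math.cos(dalpha), dk * math.sin(dalpha)
    return (c * x - s * y + dtx,
            s * x + c * y + dty)
```

With neutral compensation (zero translation and rotation, unit scale) the pixel is unchanged, which is the expected behaviour when no jitter is detected.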
According to the electronic image stabilization method of the vehicle-mounted camera provided by the first example of the present invention, a moving object identification region of the first reference image acquired at the previous time is determined, and a plurality of feature points of the region outside the moving object identification region of the first reference image are determined, the moving object identification region being the region where a moving object in the image is located; reference points corresponding to the feature points are acquired from the second reference image acquired at the current time; a first affine transformation matrix between the first reference image and the second reference image is determined according to the pixel coordinates of the feature points and the pixel coordinates of the corresponding reference points; the values of the elements in the first affine transformation matrix at each time are accumulated to obtain the observed quantity, the observed quantity is filtered to obtain the state quantity, and the compensation parameter is determined according to the difference between the state quantity and the observed quantity; and a second affine transformation matrix is determined according to the compensation parameter, and the second reference image is transformed by using the second affine transformation matrix to obtain the stable image at the current time. Because the feature points in the moving object identification region where a moving object is located are removed, the influence of the moving object on the motion parameter estimation is avoided, and the image stabilization accuracy is improved.
On the basis of the example of fig. 2, in order to further improve the accuracy of compensation and the success rate of compensation in electronic image stabilization, fig. 3 is a schematic flow chart of an electronic image stabilization method of a vehicle-mounted camera according to a second example of the present invention, and as shown in fig. 3, the electronic image stabilization method of the vehicle-mounted camera includes:
step 201, determining a moving object identification region of a first reference image acquired at the previous moment, and determining a plurality of feature points of a region outside the moving object identification region of the first reference image.
And the moving object identification area is an area where a moving object in the image is located.
Step 202, acquiring reference points corresponding to the feature points from a second reference image acquired at the current moment.
And step 203, respectively adjusting the pixel coordinates of the reference points according to the traveling speed and the steering angle speed of the vehicle.
And 204, determining a first affine transformation matrix between the first reference image and the second reference image according to the pixel coordinates of the characteristic points and the pixel coordinates of the reference points after adjustment.
And 205, accumulating values of each element in the first affine transformation matrix at each moment to obtain an observed quantity, filtering the observed quantity to obtain a state quantity, and determining a compensation parameter according to a difference value between the state quantity and the observed quantity.
And step 206, determining a second affine transformation matrix according to the compensation parameters, and transforming the second reference image by using the second affine transformation matrix to obtain a stable image at the current moment.
Specifically, steps 201, 202, 205, and 206 in the second example are implemented in manners similar to those of steps 101, 102, 104, and 105 in the first example, and are not repeated herein.
Different from the first example, in the second example the first affine transformation matrix between the first reference image and the second reference image is determined from the pixel coordinates of the feature points and the adjusted pixel coordinates of the reference points, namely the pixel coordinates obtained after the pixel coordinates of each reference point are adjusted according to the traveling speed and the steering angular velocity of the vehicle.
That is, after acquiring the reference points corresponding to the feature points from the second reference image acquired at the current time, the electronic image stabilization method of the vehicle-mounted camera adjusts the pixel coordinates of each reference point according to the traveling speed and the steering angular velocity of the vehicle.
Fig. 4 is a schematic diagram of adjusting the pixel coordinates of the reference points in the second reference image in the electronic image stabilization method of the vehicle-mounted camera according to the second example of the present invention. As shown in fig. 4, the pixel coordinates of each reference point may be adjusted by using formula (1):
x' = x − v·x_v − ω·x_ω,  y' = y − v·y_v    formula (1)
wherein x and y represent the pixel abscissa and the pixel ordinate of the reference point, respectively; x' and y' represent the pixel abscissa and the pixel ordinate after the reference point is adjusted, respectively; v and ω represent the traveling speed and the steering angular velocity of the vehicle at the current time, respectively; x_v and y_v respectively represent the differences between the pixel abscissas and between the pixel ordinates of the same corresponding pixel point on two images at adjacent times when the vehicle moves steadily at unit traveling speed; and x_ω represents the difference along the horizontal-axis direction between the pixel coordinates of the same corresponding pixel point on two images at adjacent times when the vehicle moves steadily at unit steering angular velocity.
Furthermore, x_v, y_v, and x_ω are constants measured in advance, and in this example v and ω are measured directly by the vehicle-mounted GPS and IMU, so as to ensure real-time performance and accuracy.
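A minimal sketch of this ego-motion adjustment, assuming the linear model x' = x − v·x_v − ω·x_ω and y' = y − v·y_v implied by the variable definitions (the sign convention is an assumption, since the original formula is only described indirectly):

```python
def adjust_reference_point(x, y, v, omega, xv, yv, xomega):
    """Remove the pixel displacement predicted from traveling speed v and
    steering angular velocity omega, using pre-measured constants
    xv, yv (per unit speed) and xomega (per unit angular velocity)."""
    return x - v * xv - omega * xomega, y - v * yv
```

After this adjustment, the remaining deviation between a feature point and its reference point is dominated by the jitter component rather than the vehicle's own motion.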
Subsequently, a first affine transformation matrix between the first reference image and the second reference image will be determined from the pixel coordinates of the respective feature points and the adjusted pixel coordinates of the respective reference points.
That is, when the first affine transformation matrix is obtained by using the RANSAC algorithm, the pixel coordinates of the reference points on which the calculation is based are all adjusted pixel coordinates, and the following steps are adopted:
step 2041, randomly selecting a preset number of feature points as feature points to be processed, and using reference points corresponding to the feature points to be processed as reference points to be processed. Illustratively, the predetermined number may be 3.
Step 2042, judging whether each feature point to be processed and each reference point to be processed meet a preset position relationship.
If yes, go to step 2043; if not, return to step 2041.
Illustratively, the preset positional relationship includes a distance relationship and a relative positional relationship. That is, step 2043 is executed only when none of the following four undesirable positional relationships occurs: the feature points to be processed are too close to one another, the reference points to be processed are too close to one another, the feature points to be processed lie on the same straight line, or the reference points to be processed lie on the same straight line; when any of these four positional relationships occurs, the process returns to step 2041.
Step 2043, processing each feature point to be processed and each reference point to be processed by using an icvGetTMatrix function to obtain a third affine transformation matrix with constraint.
Step 2044, performing affine transformation on the remaining feature points except the feature points to be processed by using the third affine transformation matrix, and calculating the Euler distances between each remaining feature point after affine transformation and the corresponding reference point.
Step 2045, determining the number of remaining feature points for which the Euler distance is less than a preset distance threshold, and judging whether the number is greater than or equal to a preset percentage of the number of all feature points.
If yes, go to step 2046; if not, return to step 2041.
Illustratively, the distance threshold is 3 and the preset percentage is 80%.
Step 2046, processing the residual feature points by using the icvGetTMatrix function to obtain the first affine transformation matrix.
According to the electronic image stabilization method of the vehicle-mounted camera provided by the second example of the present invention, on the basis of the first example, the pixel coordinate change of the reference points caused by the active motion of the vehicle is compensated in advance, based on the characteristic that the pixel coordinate change of each point on the image is linearly related to the traveling speed and the steering angular velocity of the vehicle during active motion. As a result, the jitter component carries a larger weight in the pixel coordinate deviation between the feature points and the reference points, which greatly improves the success rate and accuracy of the subsequent calculation of the compensation parameters.
Fig. 5 is a schematic structural diagram of an electronic image stabilizing device of a vehicle-mounted camera according to a third example of the present invention, and as shown in fig. 5, the electronic image stabilizing device of the vehicle-mounted camera includes:
an acquisition unit 10, configured to determine a moving object identification region of a first reference image acquired at the previous time, and determine a plurality of feature points of a region outside the moving object identification region of the first reference image, the moving object identification region being the region where a moving object in the image is located; and further configured to acquire, from a second reference image acquired at the current time, a reference point corresponding to each feature point;
an estimating unit 20 configured to determine a first affine transformation matrix between the first reference image and the second reference image based on the pixel coordinates of each feature point and the pixel coordinates of each corresponding reference point;
the filtering unit 30 is configured to accumulate values of each element in the first affine transformation matrix at each time to obtain an observed quantity, filter the observed quantity to obtain a state quantity, and determine a compensation parameter according to a difference between the state quantity and the observed quantity;
and the compensation unit 40 is configured to determine a second affine transformation matrix according to the compensation parameters, and transform the second reference image by using the second affine transformation matrix to obtain a stable image at the current time.
Optionally, the acquisition unit 10 is specifically configured to generate a first reference binary image according to the determined moving object identification region of the first reference image, where the pixel value of the first reference binary image in the moving object identification region is 0 and the pixel value in the non-moving-object identification region is 1; to determine the Shi-Tomasi corners of the first reference image; and to determine the pixel value of the first reference binary image corresponding to each Shi-Tomasi corner, retaining the Shi-Tomasi corners whose corresponding pixel value is 1 as the plurality of feature points.
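The corner-retention rule can be sketched directly from the binary image. A minimal illustration, assuming corners are given as integer (x, y) pixel coordinates and the mask is indexed as mask[y][x]:

```python
def filter_corners_by_mask(corners, mask):
    """Keep only the Shi-Tomasi corners whose pixel value in the binary
    image is 1, i.e. corners outside the moving object identification
    region."""
    return [(x, y) for (x, y) in corners if mask[y][x] == 1]
```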
Optionally, the acquisition unit 10 is further configured to determine a moving object identification region of the stable image at the previous time through a deep learning algorithm; generate a binary stable image at the previous time according to the moving object identification region of the stable image at the previous time, wherein the pixel value corresponding to the moving object identification region in the binary stable image is 0, and the pixel value corresponding to the non-moving-object identification region is 1; and perform affine transformation on the binary stable image according to an inverse matrix of the second affine transformation matrix at the previous time to obtain the first reference binary image.
Optionally, the acquisition unit 10 is further configured to determine the pixel coordinates of each feature point in the second reference image by using a pyramidal iterative Lucas-Kanade algorithm, and obtain the corresponding reference points.
The estimating unit 20 is specifically configured to process the pixel coordinates of each feature point and the pixel coordinates of each corresponding reference point by using a RANSAC algorithm, and calculate to obtain the first affine transformation matrix.
The estimating unit 20 is specifically configured to randomly select a preset number of feature points as feature points to be processed, and use the reference points corresponding to the feature points to be processed as reference points to be processed; judge whether each feature point to be processed and each reference point to be processed meet a preset positional relationship; if so, process each feature point to be processed and each reference point to be processed by using an icvGetTMatrix function to obtain a third affine transformation matrix with constraint; otherwise, return to the step of randomly selecting a preset number of feature points as feature points to be processed; perform affine transformation on the remaining feature points except the feature points to be processed by using the third affine transformation matrix, and calculate the Euler distance between each remaining feature point after affine transformation and the corresponding reference point; determine the number of remaining feature points whose Euler distance is less than a preset distance threshold, and judge whether the number is greater than or equal to a preset percentage of the number of all feature points; if so, process the remaining feature points by using the icvGetTMatrix function to obtain the first affine transformation matrix; otherwise, return to the step of randomly selecting a preset number of feature points as feature points to be processed.
An estimating unit 20, further configured to adjust pixel coordinates of each reference point according to a traveling speed and a steering angular velocity of the vehicle, respectively; and determining a first affine transformation matrix between the first reference image and the second reference image according to the pixel coordinates of the characteristic points and the pixel coordinates of the reference points after adjustment.
An estimating unit 20, configured to respectively adjust the pixel coordinates of each reference point by using formula (1);
x' = x − v·x_v − ω·x_ω,  y' = y − v·y_v    formula (1)
wherein x and y represent the pixel abscissa and the pixel ordinate of the reference point, respectively; x' and y' represent the pixel abscissa and the pixel ordinate after the reference point is adjusted, respectively; v and ω represent the traveling speed and the steering angular velocity of the vehicle at the current time, respectively; x_v and y_v respectively represent the differences between the pixel abscissas and between the pixel ordinates of the same corresponding pixel point on two images at adjacent times when the vehicle moves steadily at unit traveling speed; and x_ω represents the difference along the horizontal-axis direction between the pixel coordinates of the same corresponding pixel point on two images at adjacent times when the vehicle moves steadily at unit steering angular velocity.
The filtering unit 30 is specifically configured to determine an optimal estimation of the state quantity at the current time by using a standard recurrence formula of kalman filtering, and determine a compensation parameter of the second reference image according to a difference between the optimal estimation of the state quantity at the current time and the observed quantity at the current time.
The electronic image stabilizing device of the vehicle-mounted camera determines a moving object identification area of a first reference image acquired at the previous moment, and determines a plurality of characteristic points of an area outside the moving object identification area of the first reference image; the moving object identification area is an area where a moving object in the image is located; acquiring reference points corresponding to the characteristic points from a second reference image acquired at the current moment; determining a first affine transformation matrix between the first reference image and the second reference image according to the pixel coordinates of the characteristic points and the pixel coordinates of the corresponding reference points; accumulating values of all elements in the first affine transformation matrix at all moments to obtain observed quantities, filtering the observed quantities to obtain state quantities, and determining compensation parameters according to difference values of the state quantities and the observed quantities; and determining a second affine transformation matrix according to the compensation parameters, and transforming the second reference image by using the second affine transformation matrix to obtain a stable image at the current moment, so that the influence of the moving object on the motion parameter estimation is avoided by removing the characteristic points in the moving object identification region where the moving object in the image is located, and the image stabilization accuracy is improved.
Fig. 6 is a schematic diagram of a hardware structure of an electronic image stabilizing device of a vehicle-mounted camera according to a fourth example of the present invention; as shown in fig. 6, the electronic image stabilization device of the vehicle-mounted camera includes:
a memory 41, a processor 42, and a computer program stored in the memory 41 and executable on the processor 42, the processor 42 performing the method of the above examples when running the computer program.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method examples may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the above-described method examples; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and corresponding beneficial effects of the system described above may refer to the corresponding process in the foregoing method example, and are not described herein again.
Finally, the present invention also provides a readable storage medium having a computer program stored thereon, which, when executed by a processor, implements the method of any of the above examples.
Finally, it should be noted that: the above examples are only for illustrating the technical solution of the present invention, and not for limiting the same; while the invention has been described in detail with reference to the foregoing examples, those skilled in the art will appreciate that: the technical solutions described in the foregoing examples can still be modified, or some or all of the technical features can be equivalently replaced; such modifications or substitutions do not depart from the scope of the exemplary embodiments of the present invention.

Claims (10)

1. An electronic image stabilization method for a vehicle-mounted camera is characterized by comprising the following steps:
determining a moving object identification region of a first reference image acquired at the previous moment, and determining a plurality of feature points of a region outside the moving object identification region of the first reference image; the moving object identification area is an area where a moving object in the image is located;
acquiring reference points corresponding to the characteristic points from a second reference image acquired at the current moment;
determining a first affine transformation matrix between the first reference image and the second reference image according to the pixel coordinates of the characteristic points and the pixel coordinates of the corresponding reference points;
accumulating values of all elements in the first affine transformation matrix at all moments to obtain observed quantities, filtering the observed quantities to obtain state quantities, and determining compensation parameters according to difference values of the state quantities and the observed quantities;
determining a second affine transformation matrix according to the compensation parameters, and transforming the second reference image by using the second affine transformation matrix to obtain a stable image at the current moment;
the determining of the first affine transformation matrix between the first reference image and the second reference image according to the pixel coordinates of the feature points and the pixel coordinates of the corresponding reference points includes:
adjusting the pixel coordinates of each reference point according to the traveling speed and the steering angular speed of the vehicle;
determining a first affine transformation matrix between the first reference image and the second reference image according to the pixel coordinates of the characteristic points and the pixel coordinates of the reference points after adjustment;
the adjusting the pixel coordinates of the reference points according to the traveling speed and the steering angular speed of the vehicle comprises:
respectively adjusting the pixel coordinates of each reference point by adopting a formula (1);
x' = x − v·x_v − ω·x_ω,  y' = y − v·y_v    formula (1)
wherein x and y represent the pixel abscissa and the pixel ordinate of the reference point, respectively; x' and y' represent the pixel abscissa and the pixel ordinate after the reference point is adjusted, respectively; v and ω represent the traveling speed and the steering angular velocity of the vehicle at the current time, respectively; x_v and y_v respectively represent the differences between the pixel abscissas and between the pixel ordinates of the same corresponding pixel point on two images at adjacent times when the vehicle moves steadily at unit traveling speed; and x_ω represents the difference along the horizontal-axis direction between the pixel coordinates of the same corresponding pixel point on two images at adjacent times when the vehicle moves steadily at unit steering angular velocity.
2. The electronic image stabilization method of claim 1, wherein the determining the moving object identification region of the first reference image acquired at the previous time comprises:
generating a first reference binary image according to the determined moving object identification region of the first reference image, wherein the pixel value of the first reference binary image in the moving object identification region is 0, and the pixel value in the non-moving object identification region is 1;
correspondingly, the determining a plurality of feature points of a region outside the moving object identification region of the first reference image includes:
determining a Shi-Tomasi corner point of a first reference image;
and determining the pixel value of each Shi-Tomasi corner point corresponding to the first reference binary image, and reserving the Shi-Tomasi corner point with the corresponding pixel value of 1 as the plurality of feature points.
3. The electronic image stabilization method according to claim 2, wherein the determining of the moving object identification region of the first reference image acquired at the previous time comprises:
determining a moving object identification region of a stable image at the last moment through a deep learning algorithm;
generating a binary stable image at the previous moment according to the moving object identification area of the stable image at the previous moment; the pixel value corresponding to the moving object identification area in the binary stable image is 0, and the pixel value corresponding to the non-moving object identification area is 1;
and carrying out affine transformation on the binary stable image according to an inverse matrix of a second affine transformation matrix at the previous moment to obtain the first reference binary image.
4. The electronic image stabilization method according to claim 1, wherein the obtaining of the reference point corresponding to each feature point in the second reference image acquired at the current time comprises:
and determining the pixel coordinates of the characteristic points in the second reference image by adopting a pyramid iteration Lucas-Kanade algorithm, and obtaining corresponding reference points.
5. The electronic image stabilization method according to claim 1, wherein the determining a first affine transformation matrix between the first reference image and the second reference image according to the pixel coordinates of each feature point and the pixel coordinates of each corresponding reference point comprises:
processing the pixel coordinates of each feature point and the pixel coordinates of each corresponding reference point by using a RANSAC algorithm to calculate the first affine transformation matrix.
6. The electronic image stabilization method according to claim 5, wherein the processing pixel coordinates of each feature point and pixel coordinates of each corresponding reference point by using a RANSAC algorithm to calculate the first affine transformation matrix comprises:
randomly selecting a preset number of feature points as feature points to be processed, and taking the reference points corresponding to the feature points to be processed as reference points to be processed;
judging whether the feature points to be processed and the reference points to be processed satisfy a preset positional relationship;
if so, processing each feature point to be processed and each reference point to be processed by using the icvGetTMatrix function to obtain a constrained third affine transformation matrix; otherwise, returning to the step of randomly selecting a preset number of feature points as feature points to be processed;
performing affine transformation, using the third affine transformation matrix, on the remaining feature points other than the feature points to be processed, and calculating the Euclidean distance between each transformed remaining feature point and its corresponding reference point;
determining the number of remaining feature points whose Euclidean distance is smaller than a preset distance threshold, and judging whether this number is greater than or equal to a preset percentage of the total number of feature points;
if so, processing the remaining feature points by using the icvGetTMatrix function to obtain the first affine transformation matrix; otherwise, returning to the step of randomly selecting a preset number of feature points as feature points to be processed.
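The sample-fit-score loop of claim 6 can be sketched as follows. This is an illustrative pure-Python RANSAC: icvGetTMatrix is an OpenCV-internal routine, so an exact three-point affine fit via Cramer's rule stands in for it here, and thresholds and iteration counts are placeholders:

```python
import random

def fit_affine_3pt(src, dst):
    """Solve the 2x3 affine matrix mapping three src points onto three dst
    points; returns None if the src points are collinear."""
    (x1, y1), (x2, y2), (x3, y3) = src
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    if abs(det) < 1e-12:
        return None
    rows = []
    for k in (0, 1):  # k=0 solves the x-row, k=1 the y-row, by Cramer's rule
        u1, u2, u3 = dst[0][k], dst[1][k], dst[2][k]
        a = (u1 * (y2 - y3) - y1 * (u2 - u3) + (u2 * y3 - u3 * y2)) / det
        b = (x1 * (u2 - u3) - u1 * (x2 - x3) + (x2 * u3 - x3 * u2)) / det
        t = (x1 * (y2 * u3 - y3 * u2) - y1 * (x2 * u3 - x3 * u2)
             + u1 * (x2 * y3 - x3 * y2)) / det
        rows.append((a, b, t))
    return rows

def apply_affine(m, p):
    (a, b, tx), (c, d, ty) = m
    return (a * p[0] + b * p[1] + tx, c * p[0] + d * p[1] + ty)

def _dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def ransac_affine(src, dst, iters=200, thresh=1e-3, min_ratio=0.5):
    """Repeat: sample 3 correspondences, fit, count inliers by Euclidean
    distance; stop early once the inlier share reaches min_ratio."""
    best, best_inliers = None, []
    for _ in range(iters):
        idx = random.sample(range(len(src)), 3)
        m = fit_affine_3pt([src[i] for i in idx], [dst[i] for i in idx])
        if m is None:
            continue
        inl = [i for i in range(len(src))
               if _dist(apply_affine(m, src[i]), dst[i]) < thresh]
        if len(inl) > len(best_inliers):
            best, best_inliers = m, inl
        if len(best_inliers) >= min_ratio * len(src):
            break
    return best, best_inliers

# Synthetic correspondences under a known affine, plus one gross outlier.
random.seed(0)
src = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0),
       (5.0, 3.0), (2.0, 8.0), (7.0, 7.0), (3.0, 1.0)]
true_m = [(1.1, 0.05, 3.0), (-0.02, 0.95, -2.0)]
dst = [apply_affine(true_m, p) for p in src]
dst[3] = (99.0, 99.0)  # corrupted match, should be rejected
model, inliers = ransac_affine(src, dst)
```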
7. The electronic image stabilization method according to claim 1, wherein the filtering the observed quantity to obtain a state quantity and determining a compensation parameter according to the difference between the state quantity and the observed quantity comprises:
determining the optimal estimate of the state quantity at the current time by using the standard recursion of the Kalman filter, and determining the compensation parameter of the second reference image according to the difference between the optimal estimate of the state quantity at the current time and the observed quantity at the current time.
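Applied to one accumulated motion parameter, the Kalman recursion of claim 7 can be sketched as a 1-D constant-position filter; the noise covariances q and r below are illustrative tuning values, not from the patent:

```python
def kalman_smooth(observations, q=1e-3, r=0.25):
    """Standard 1-D Kalman recursion with a constant-position motion model:
    predict (P += Q), compute the gain K, and correct toward each new
    observation z. Returns the smoothed state sequence."""
    x, p = observations[0], 1.0        # initialise the state at the first observation
    states = [x]
    for z in observations[1:]:
        p += q                         # predict: state unchanged, uncertainty grows
        k = p / (p + r)                # Kalman gain
        x += k * (z - x)               # update with the new observation
        p *= (1.0 - k)
        states.append(x)
    return states

# Accumulated horizontal translation with frame-to-frame jitter; the
# compensation is the smoothed trajectory minus the raw (observed) one.
traj = [0.0, 1.2, 1.8, 3.4, 3.9, 5.1, 5.7, 7.2]
smooth = kalman_smooth(traj)
compensation = [s - z for s, z in zip(smooth, traj)]
```

Warping each frame by its compensation value moves the observed trajectory onto the smoothed one, which is what removes the jitter.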
8. An electronic image stabilization device of a vehicle-mounted camera, comprising:
an acquisition unit, configured to determine a moving object identification region of a first reference image acquired at the previous time, and to determine a plurality of feature points in a region outside the moving object identification region of the first reference image, wherein the moving object identification region is the region in which a moving object in the image is located; the acquisition unit is further configured to acquire a reference point corresponding to each feature point in a second reference image acquired at the current time;
an estimating unit configured to determine a first affine transformation matrix between the first reference image and the second reference image according to the pixel coordinates of each feature point and the corresponding pixel coordinates of each reference point;
a filtering unit, configured to accumulate the values of the elements of the first affine transformation matrix over all times to obtain observed quantities, to filter the observed quantities to obtain state quantities, and to determine compensation parameters according to the differences between the state quantities and the observed quantities;
a compensation unit, configured to determine a second affine transformation matrix according to the compensation parameters, and to transform the second reference image by using the second affine transformation matrix to obtain the stable image at the current time;
the estimation unit is specifically configured to:
adjusting the pixel coordinates of each reference point according to the traveling speed and the steering angular speed of the vehicle;
determining a first affine transformation matrix between the first reference image and the second reference image according to the pixel coordinates of the characteristic points and the pixel coordinates of the reference points after adjustment;
the estimating unit is specifically configured to adjust the pixel coordinates of each reference point by using formula (1);
Formula (1): [reproduced as an image in the original; identifier FDA0002680474550000041]
wherein x and y respectively represent the pixel abscissa and pixel ordinate of the reference point; x' and y' respectively represent the pixel abscissa and pixel ordinate of the reference point after adjustment; v and ω respectively represent the traveling speed and the steering angular speed of the vehicle at the current time; x_v and y_v respectively represent the differences between the pixel abscissas and between the pixel ordinates of the same corresponding pixel point on two images at adjacent times when the vehicle moves steadily at unit traveling speed; and x_ω represents the difference between the pixel abscissas of the same corresponding pixel point on two images at adjacent times when the vehicle moves steadily at unit steering angular speed.
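Because formula (1) appears only as an image in the source, the sketch below assumes the linear form x' = x − v·x_v − ω·x_ω, y' = y − v·y_v suggested by the variable definitions; the exact form and signs in the patent may differ:

```python
def adjust_reference_point(x, y, v, omega, x_v, y_v, x_omega):
    """Assumed form of formula (1): subtract the pixel shift expected from
    the vehicle's ego-motion (v = traveling speed, omega = steering angular
    speed; x_v, y_v, x_omega are the calibrated per-unit-speed shifts)."""
    x_adj = x - v * x_v - omega * x_omega
    y_adj = y - v * y_v               # steering is assumed to shift pixels only horizontally
    return x_adj, y_adj

# A reference point at (100, 50) with v = 2, omega = 0.1 and calibrated
# shifts x_v = 3, y_v = 1, x_omega = 10.
x_adj, y_adj = adjust_reference_point(100.0, 50.0, 2.0, 0.1, 3.0, 1.0, 10.0)
```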
9. An electronic image stabilization device of a vehicle-mounted camera, comprising: a memory, a processor, and a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any one of claims 1-7.
10. A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
CN201910147723.7A 2019-02-27 2019-02-27 Electronic image stabilizing method and device for vehicle-mounted camera and readable storage medium Active CN109922258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910147723.7A CN109922258B (en) 2019-02-27 2019-02-27 Electronic image stabilizing method and device for vehicle-mounted camera and readable storage medium

Publications (2)

Publication Number Publication Date
CN109922258A (en) 2019-06-21
CN109922258B (en) 2020-11-03

Family

ID=66962505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910147723.7A Active CN109922258B (en) 2019-02-27 2019-02-27 Electronic image stabilizing method and device for vehicle-mounted camera and readable storage medium

Country Status (1)

Country Link
CN (1) CN109922258B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110401796B (en) * 2019-07-05 2020-09-29 浙江大华技术股份有限公司 Jitter compensation method and device of image acquisition device
WO2021026705A1 (en) * 2019-08-09 2021-02-18 华为技术有限公司 Matching relationship determination method, re-projection error calculation method and related apparatus
CN110602393B (en) * 2019-09-04 2020-06-05 南京博润智能科技有限公司 Video anti-shake method based on image content understanding
CN110611767B (en) * 2019-09-25 2021-08-10 北京迈格威科技有限公司 Image processing method and device and electronic equipment
CN112884634A (en) * 2021-01-20 2021-06-01 四川中科友成科技有限公司 Image and video processing system based on OPENCV
CN113418455B (en) * 2021-05-24 2023-05-09 深圳亦芯智能视觉技术有限公司 Roadbed displacement monitoring method and device based on image vision

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101383899A (en) * 2008-09-28 2009-03-11 北京航空航天大学 Video image stabilizing method for space based platform hovering
CN103402045A (en) * 2013-08-20 2013-11-20 长沙超创电子科技有限公司 Image de-spin and stabilization method based on subarea matching and affine model
CN103426182A (en) * 2013-07-09 2013-12-04 西安电子科技大学 Electronic image stabilization method based on visual attention mechanism
JP2014229971A (en) * 2013-05-20 2014-12-08 株式会社朋栄 Rolling shutter distortion correction and video image stabilization processing method
CN107222662A (en) * 2017-07-12 2017-09-29 中国科学院上海技术物理研究所 A kind of electronic image stabilization method based on improved KLT and Kalman filtering


Similar Documents

Publication Publication Date Title
CN109922258B (en) Electronic image stabilizing method and device for vehicle-mounted camera and readable storage medium
EP2757527B1 (en) System and method for distorted camera image correction
CN104144282B (en) A kind of fast digital digital image stabilization method suitable for robot for space vision system
KR100985805B1 (en) Apparatus and method for image stabilization using adaptive Kalman filter
WO2019084804A1 (en) Visual odometry and implementation method therefor
JP7212486B2 (en) position estimator
CN113409200B (en) System and method for image deblurring in a vehicle
CN115867940A (en) Monocular depth surveillance from 3D bounding boxes
CN106550187A (en) For the apparatus and method of image stabilization
WO2019156072A1 (en) Attitude estimating device
JP2015088092A (en) Movement amount estimation device and movement amount estimation method
CN114972427A (en) Target tracking method based on monocular vision, terminal equipment and storage medium
CN113639782A (en) External parameter calibration method and device for vehicle-mounted sensor, equipment and medium
KR20140029794A (en) Image stabilization method and system using curve lane model
KR101806453B1 (en) Moving object detecting apparatus for unmanned aerial vehicle collision avoidance and method thereof
JP6488226B2 (en) Runway parameter estimation apparatus and program
JP7019431B2 (en) Camera calibration device, camera calibration method, and program
CN114037977B (en) Road vanishing point detection method, device, equipment and storage medium
CN116703979A (en) Target tracking method, device, terminal and storage medium
Noda et al. Road image update using in-vehicle camera images and aerial image
CN112612289B (en) Trajectory tracking control method, mobile robot, control device, and storage medium
CN114245102A (en) Vehicle-mounted camera shake identification method and device and computer readable storage medium
Cai et al. Robust motion estimation for camcorders mounted in mobile platforms
CN110599542A (en) Method and device for local mapping of adaptive VSLAM (virtual local area model) facing to geometric area
Zhang et al. A fast video stabilization algorithm with unexpected motion prediction strategy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant