CN103400380A - Single-camera underwater target three-dimensional trajectory simulation method fusing image matrix offset - Google Patents
- Publication number: CN103400380A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a single-camera underwater target three-dimensional trajectory simulation method that fuses image matrix offset. Under single-camera imaging, within a Bayesian tracking framework, the three-dimensional motion trajectory of an underwater target is simulated by combining underwater scene depth information with camera motion offset vector information. Target tracking is performed on an underwater video within the Bayesian framework and the center position of the target is output; the underwater scene depth is estimated with the dark channel prior algorithm; at the same time, SURF (Speeded-Up Robust Features) descriptors of background points in adjacent frames are computed and matched to obtain the camera motion offset vector between those frames; finally, the target position information, the underwater scene depth information and the camera motion offset vectors are combined to simulate the three-dimensional motion trajectory of the underwater target. The method simulates the three-dimensional motion trajectory of an underwater target truly and reliably from a single-camera video, with high computational efficiency.
Description
Technical field
The present invention relates to a single-camera underwater target three-dimensional motion trajectory simulation method that fuses image matrix offset, and belongs to the field of machine vision technology.
Background technology
In recent years, with the rapid development of science and technology, digital video cameras and digital still cameras have been successfully applied to the design and manufacture of underwater cameras, which are widely used in deep-sea scientific investigation and coastal ocean exploitation. In machine vision, higher-level applications need to locate the position of an underwater target in every frame; target tracking is the key technique, and its final result can simulate the target's motion trajectory.
The few existing algorithms that can simulate the three-dimensional trajectory of an underwater target all adopt multi-camera stereo vision and require very demanding camera calibration, which makes both the hardware complexity and the computational complexity high and difficult to meet the needs of ordinary applications. In addition, the wide use of mobile cameras brings a new challenge: as the camera moves, not only the moving object but the position matrix of the whole video image is offset. In this case common underwater target tracking methods are no longer applicable; the target trajectory deviation caused by the camera motion offset must be compensated.
In view of the above problems, it is necessary to develop a method that obtains the camera motion offset vector between adjacent video frames and uses it as a key parameter to simulate the three-dimensional motion trajectory of an underwater target.
Summary of the invention
Goal of the invention: the technical problem to be solved by the present invention is to provide an underwater target three-dimensional motion trajectory simulation method that realizes the simulation by combining the target position information, the underwater scene depth information and the camera motion offset vector information.
Technical solution: to solve the above technical problem, the technical solution adopted by the present invention is as follows.
A single-camera underwater target three-dimensional trajectory simulation method fusing image matrix offset comprises the following steps:
Under single-camera imaging, tracking of the underwater target is realized within a Bayesian filtering framework and the target position coordinates are output; the underwater scene depth is computed with the dark channel prior algorithm; at the same time the SURF features of background points in adjacent frames are computed and matched to obtain the camera motion offset vector; finally the target position coordinates are corrected by the camera motion offset vector to output the true target center position coordinates, which are combined with the underwater scene depth to simulate the three-dimensional motion trajectory of the underwater target.
The camera motion offset vector based on SURF features is computed as follows: detect the SURF feature points of the image background in adjacent frames and construct a feature vector for each point; measure the similarity between the feature vectors with the Euclidean distance to obtain a distance set; set a threshold and perform feature matching; finally subtract each pair of matched feature points in the adjacent frames to obtain a set of coordinate differences, whose mean value is the camera motion offset vector.
Beneficial effects: the present invention is the first method to simulate the three-dimensional motion trajectory of an underwater target with a single camera. With an ordinary monocular video, the method simulates the three-dimensional motion trajectory of an underwater target truly and reliably. It significantly reduces the hardware complexity of the tracking system, needs no tedious camera calibration, and greatly reduces the computational complexity of the algorithm, so it can be deployed widely in underwater video systems and is easy to popularize.
Description of drawings
Fig. 1 is the flow chart of the underwater target three-dimensional motion trajectory simulation method of the present invention;
Fig. 2 is the flow chart of computing the camera motion offset vector in the method of the present invention;
Fig. 3 is the 9×9 box filter template.
Embodiment
The present invention is further illustrated below with specific embodiments. It should be understood that these embodiments are only for explaining the present invention and not for limiting its scope; after reading the present invention, modifications of its various equivalent forms by those skilled in the art all fall within the scope defined by the appended claims.
As shown in Fig. 1, the single-camera underwater target three-dimensional trajectory simulation method fusing image matrix offset comprises the following steps:
Under single-camera imaging, tracking of the underwater target is realized within a Bayesian filtering framework and the target position coordinates are output. The underwater scene depth, i.e. the distance from the target and the background to the camera, is computed with the dark channel prior algorithm. At the same time the SURF features of background points in adjacent frames are computed and matched to obtain the camera motion offset vector. The target position coordinates are then corrected by the camera motion offset vector to output the true target center position, which is combined with the underwater scene depth to simulate the three-dimensional motion trajectory of the underwater target.
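As an illustration of the depth step, the dark channel prior can be sketched as follows. This is a minimal numpy sketch, not the implementation of the invention; the patch size, the weight ω = 0.95, and the use of the per-channel image maximum as the background light A are illustrative assumptions:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a min-filter
    over a patch x patch window (the dark channel of the image)."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, omega=0.95, patch=15):
    """t(x) = 1 - omega * dark_channel(I / A); the background light A is
    roughly estimated here as the per-channel maximum of the image."""
    A = img.reshape(-1, 3).max(axis=0)
    return 1.0 - omega * dark_channel(img / np.maximum(A, 1e-6), patch)

def relative_depth(img, beta=1.0):
    """Relative scene depth d ~ -ln(t) / beta from the scattering model
    t = exp(-beta * d); only relative depth is needed for the trajectory."""
    t = np.clip(transmission(img), 1e-3, 1.0)
    return -np.log(t) / beta
```

Pixels with a large dark channel value have low transmission and are assigned a larger relative depth, which is what the trajectory simulation uses as the z coordinate.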
The computation of the camera motion offset vector based on SURF features comprises three steps: feature point detection; feature descriptor generation; and matching of feature points between adjacent video frames together with the calculation of the camera motion offset vector.
Feature point detection uses the SURF algorithm, which extracts feature points with the Hessian matrix. The image is first convolved with a Gaussian at scale σ:

L(x, σ) = G(σ) * I(x),

where σ is the scale, G(σ) is the two-dimensional Gaussian function, and L(x, σ) is the convolution of G(σ) with the image I at point x.
In the SURF algorithm, second-order Gaussian filtering is approximated by box filters; the 9×9 box filter shown in Fig. 3 corresponds to second-order Gaussian filtering with scale factor σ = 1.2. An image pyramid of different scales is formed by enlarging the box filter on the original image, and the integral image is used to accelerate the convolutions. The determinant of the approximated Hessian matrix is then:

det H = Dxx · Dyy − (0.9 · Dxy)²

where Dxx, Dyy and Dxy are the box-filter responses in the xx, yy and xy directions, and 0.9 is the weight that compensates the box-filter approximation.
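The integral-image trick and the weighted determinant above can be sketched in a few lines (the helper names are assumptions; the 0.9 weight is the one given in the formula):

```python
import numpy as np

def integral_image(img):
    """ii[r, c] = sum of img[0:r+1, 0:c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] in O(1) using the integral image,
    which is what makes box filtering cheap at every pyramid scale."""
    s = ii[r1, c1]
    if r0 > 0:
        s -= ii[r0 - 1, c1]
    if c0 > 0:
        s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s

def det_hessian(dxx, dyy, dxy, w=0.9):
    """Approximated Hessian determinant: det H = Dxx*Dyy - (w*Dxy)**2."""
    return dxx * dyy - (w * dxy) ** 2
```

Because each box-filter response is a handful of `box_sum` calls, the cost per pixel is constant regardless of the filter size.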
Each extreme point detected with the Hessian matrix, together with its 8 neighbors at the same scale and the 9 points at each of the two adjacent scales, forms a 3×3×3 three-dimensional neighborhood. The extreme point is compared with the remaining 26 points of this neighborhood, and only when its value is greater than all 26 neighbors is it retained as a candidate feature point. The candidate feature points are then interpolated in scale space to obtain stable feature point positions and their scale values.
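The 26-neighbor comparison can be written directly (a sketch; `cube` is an assumed name for the 3×3×3 block of responses across scale, row and column):

```python
import numpy as np

def is_extremum(cube):
    """cube: 3x3x3 array of responses (scale, row, col); the center is kept
    as a candidate feature point only if it exceeds all 26 neighbors."""
    center = cube[1, 1, 1]
    rest = cube.copy()
    rest[1, 1, 1] = -np.inf  # mask the center itself out of the comparison
    return center > rest.max()
```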
To guarantee rotational invariance, the feature point orientation is obtained first. In a circular neighborhood of radius 6s around the feature point (s is the scale of the feature point), the Haar wavelet responses in the x and y directions are computed and weighted with Gaussian coefficients, so that responses closer to the feature point contribute more. The x and y responses inside a 60° sector are then summed to form a local orientation vector; the sector is swept over the whole circular region, and the direction of the longest vector is selected as the principal orientation of the feature point.
After the orientation is selected, a square region of side 20s is constructed around the feature point and divided into 4×4 subregions. For each subregion the Haar wavelet responses in the horizontal and vertical directions, denoted dx and dy, are computed at 5×5 sample points and weighted with a Gaussian window function, giving a four-dimensional vector:

V = (Σdx, Σdy, Σ|dx|, Σ|dy|)

The 16 subregions of each feature point thus form a 64-dimensional description vector, which after normalization forms the descriptor of the feature point.
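Assembling the 64-dimensional descriptor from precomputed Haar responses can be sketched as follows (a sketch assuming the 20s window has already been resampled to a 20×20 grid of responses; the Gaussian weighting is omitted for brevity):

```python
import numpy as np

def subregion_vector(dx, dy):
    """Four-dimensional vector of one subregion: (sum dx, sum dy, sum|dx|, sum|dy|)."""
    return np.array([dx.sum(), dy.sum(), np.abs(dx).sum(), np.abs(dy).sum()])

def descriptor(dx_grid, dy_grid):
    """Concatenate the 4x4 subregions (5x5 samples each) into a 64-D vector
    and L2-normalize it, as described above."""
    v = np.concatenate([
        subregion_vector(dx_grid[i * 5:(i + 1) * 5, j * 5:(j + 1) * 5],
                         dy_grid[i * 5:(i + 1) * 5, j * 5:(j + 1) * 5])
        for i in range(4) for j in range(4)])
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

The normalization makes the descriptor invariant to global contrast changes, which matters in low-visibility underwater footage.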
The similarity of two feature vectors is measured with the Euclidean distance:

d(Xi, Xj) = sqrt( Σk=1..N (Xik − Xjk)² )

where Xik is the k-th element of the feature vector of the i-th feature point in the previous frame, Xjk is the k-th element of the feature vector of the j-th feature point in the following frame, and N is the dimension of the feature vectors.
For each feature vector of the previous frame, the Euclidean distance to every feature vector of the following frame is computed, and the resulting distance set is sorted in ascending order. A threshold is set: when the ratio of the smallest distance to the second smallest distance is less than the threshold, the two feature points are considered matched. The smaller the threshold, the fewer but the more stable the matched pairs. Let the matched feature points obtained from frame t and frame t−1 be Pt(i) and Pt−1(i), both three-dimensional coordinate vectors comprising x, y and z. The matched feature points are subtracted pairwise to obtain a set of differences, whose mean value is the camera offset vector:

δt = (1/n) Σi=1..n (Pt(i) − Pt−1(i))

Taking the image coordinates of the first frame as the reference, i.e. δ1 = (0, 0, 0), the camera motion offset vectors obtained by the above formula are accumulated to correct the coordinate position of frame t:

Pt′ = Pt − Σk=2..t δk
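The ratio-test matching and the averaging of matched-point differences can be sketched as follows (the threshold 0.7 is an illustrative value; the method only requires that some threshold be set):

```python
import numpy as np

def match_ratio(desc_prev, desc_next, ratio=0.7):
    """For each descriptor of the previous frame, accept its nearest neighbor
    in the next frame only if d_min < ratio * d_second_min."""
    matches = []
    for i, d in enumerate(desc_prev):
        dists = np.linalg.norm(desc_next - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, order[0]))
    return matches

def camera_offset(pts_prev, pts_next, matches):
    """delta_t: mean coordinate difference of the matched background points."""
    diffs = [pts_next[j] - pts_prev[i] for i, j in matches]
    return np.mean(diffs, axis=0)
```

Averaging over many matched background points suppresses individual mismatches, so the offset vector estimate is robust to a few bad correspondences.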
Finally, by combining the underwater scene depth with the corrected target center coordinates, a three-dimensional trajectory that simulates the target's motion trend can be output in the image coordinate system.
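The final fusion step can be sketched as follows. This is a simplified two-dimensional reading in which only the image-plane offset is compensated and the dark-channel depth supplies the z coordinate; `centers`, `offsets` and `depths` are hypothetical per-frame outputs of the tracking, matching and depth steps:

```python
import numpy as np

def three_d_trace(centers, offsets, depths):
    """Subtract the accumulated camera offset from each tracked center
    (offsets[0] is the reference frame, delta_1 = (0, 0)) and append the
    scene depth at that center as the z coordinate."""
    cum = np.cumsum(np.asarray(offsets, dtype=float), axis=0)
    return [(x - ox, y - oy, z)
            for (x, y), (ox, oy), z in zip(centers, cum, depths)]
```

With a static camera all offsets are zero and the trace reduces to the tracked centers plus depth.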
Claims (2)
1. A single-camera underwater target three-dimensional trajectory simulation method fusing image matrix offset, characterized by comprising the following steps:
under single-camera imaging, realizing tracking of the underwater target within a Bayesian filtering framework and outputting the target position coordinates; computing the underwater scene depth with the dark channel prior algorithm; at the same time computing the SURF features of background points in adjacent frames and matching them to obtain the camera motion offset vector; finally correcting the target position coordinates with the camera motion offset vector to output the true target center position coordinates, and combining them with the underwater scene depth to simulate the three-dimensional motion trajectory of the underwater target.
2. The single-camera underwater target three-dimensional trajectory simulation method fusing image matrix offset according to claim 1, characterized in that the camera motion offset vector based on SURF features is computed as follows: detecting the SURF feature points of the image background in adjacent frames and constructing a feature vector for each point; measuring the similarity between the feature vectors with the Euclidean distance to obtain a distance set; setting a threshold and performing feature matching; finally subtracting each pair of matched feature points in the adjacent frames to obtain a set of coordinate differences and computing its mean value, which is the camera motion offset vector.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201310317119.7A | 2013-07-25 | 2013-07-25 | Single-camera underwater target three-dimensional trajectory simulation method fusing image matrix offset |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN103400380A | 2013-11-20 |
| CN103400380B | 2016-11-23 |
Family
ID=49563992
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201310317119.7A (expired, fee related) | Single-camera underwater target three-dimensional trajectory simulation method fusing image matrix offset | 2013-07-25 | 2013-07-25 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN103400380B (en) |
Citations (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5448936A | 1994-08-23 | 1995-09-12 | Hughes Aircraft Company | Destruction of underwater objects |
| CN102592290A | 2012-02-16 | 2012-07-18 | Zhejiang University | Method for detecting moving target regions in underwater microscopic video |
| CN102622764A | 2012-02-23 | 2012-08-01 | Dalian Nationalities University | Target tracking method based on a movable camera platform |
Non-Patent Citations (1)

| Title |
|---|
| Cai Rongtai et al., "A Survey of Video Object Tracking Algorithms", Video Application and Engineering |
Cited By (8)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108280386A / CN108280386B | 2017-01-05 | 2018-07-13 / 2020-08-28 | Zhejiang Uniview Technologies Co., Ltd. | Monitoring scene detection method and device |
| CN108184096A | 2018-01-08 | 2018-06-19 | Beijing Aiensi Network Technology Co., Ltd. | Panoramic monitoring device, system and method for airport runway and taxiway areas |
| CN110659547A / CN110659547B | 2018-06-29 | 2020-01-07 / 2023-07-14 | BYD Co., Ltd. | Object recognition method, device, vehicle and computer-readable storage medium |
| CN114245096A / CN114245096B | 2021-12-08 | 2022-03-25 / 2023-09-15 | Anhui Xinhua Media Co., Ltd. | Intelligent photography 3D simulation imaging system |
| CN114092523A | 2021-12-20 | 2022-02-25 | Changzhou Xingyu Automotive Lamp Co., Ltd. | Matrix reading lamp with light-based hand tracking function and control method thereof |
Also Published As

| Publication number | Publication date |
|---|---|
| CN103400380B | 2016-11-23 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | C06 / PB01 | Publication | |
| | C10 / SE01 | Entry into substantive examination (entry into force of request) | |
| | C14 / GR01 | Grant of patent or utility model | |
| 2020-06-22 | TR01 | Transfer of patent right | Patentee before: HOHAI University, No. 1 Xikang Road, Gulou District, Nanjing, Jiangsu Province, 210098. Patentee after: Chen Erkui, No. 579 Bay Road, Huangdao District, Qingdao, Shandong, 266590. |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2016-11-23 |