CN104021548A - Method for acquiring 4D scene information - Google Patents
- Publication number: CN104021548A
- Application number: CN201410209953.9A
- Authority: CN (China)
- Legal status: Pending (the status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
- Classification: Image Processing
Abstract
The invention provides a method for acquiring 4D scene information. The method combines a binocular stereo-vision three-dimensional imaging system with an infrared imaging system and, through image registration and image fusion, simultaneously acquires the three-dimensional coordinate information and the temperature information of a given position in the real world, realizing a four-dimensional representation of the real world in a computer. The method mainly comprises the following steps: the left-view and right-view visible-light images and the infrared image of the photographed scene are received and pre-processed separately; feature points are then extracted and matched, establishing all registration points between the visible-light images and the infrared image; based on the known distance between the two visible-light imaging systems and their focal lengths, a depth map of the photographed scene is computed from the two visible-light images to obtain the three-dimensional coordinate information of the scene; finally, according to the established registration points, an interpolation algorithm superimposes the temperature information onto the three-dimensional coordinate information, completing the multi-resolution information fusion and yielding the 4D information of the photographed scene.
Description
Technical field
The invention belongs to the field of photoelectric imaging based on visible-light and infrared image sensors, and specifically relates to a method for acquiring real-world scene information.
Background technology
The objective world is three-dimensional, but when a traditional 2D photoelectric imaging device images a scene it obtains only a two-dimensional representation, losing the third dimension of the real world: distance. Without any prior knowledge, humans cannot estimate the size of an object or its distance from a two-dimensional photograph. Humans can perceive the three-dimensional world because the two human eyes act as an unmatched camera system: they possess a very high dynamic range and a flexible varifocal lens, can instantly convert light signals into signals the brain can recognize, and complete all adjustments unconsciously. For the reconstruction of a three-dimensional scene, the key feature is that the two eyes see the same scene from slightly different angles. Although the distance between the two eyes is only about 6.5 centimetres, this small parallax is enough to distinguish subtle differences between what the left and right eyes see. Once the brain receives these two highly correlated signals, it automatically fuses them into a single object that carries not only up-down and left-right information but also rich front-back (depth) information, forming stereoscopic vision with a unique sense of depth. A binocular stereo-vision imaging system imitates the human eyes: it combines two photoelectric imaging systems to obtain a three-dimensional image of the real scene and displays it on a stereoscopic monitor. A binocular stereo-vision imaging system can therefore be called a 3D imaging system.
Both binocular stereo-vision 3D imaging systems and traditional 2D imaging systems generally image the real scene in the visible-light band. In that band, neither the human eye nor any visible-light imaging system can perceive the temperature information of the real world. In fact, any object above absolute zero emits radiation in the infrared band; once the energy carried by this radiation is captured by an infrared image sensor, infrared imaging of the real world is realized. Because objects at different temperatures emit infrared radiation of different energies, the resulting infrared image contains the temperature information of the objects.
At present, binocular stereo-vision imaging based on visible-light image sensors and infrared imaging based on infrared image sensors are each relatively mature, but no technology has yet appeared that combines their respective advantages to form a 4D image.
In realizing the synthesis of a 4D image, the present invention must address problems such as image registration and image fusion.
1. Image registration
Binocular stereo-vision imaging imitates the function of the human eyes: given two images of the same scene captured from different viewpoints, it uses the principle of stereoscopic vision to recover the third dimension. Its algorithms include camera calibration, distortion correction, stereo matching, depth-map computation and three-dimensional reconstruction. Stereo matching pairs feature points between the two views seen by the two imaging systems; only when the matching points are found correctly can the depth map be computed accurately and three-dimensional reconstruction carried out.
The image matching scheme adopted in the present invention differs from the stereo matching used in a traditional binocular stereo-vision system in the following respects. (1) The number of images processed differs: a traditional binocular system considers only two visible-light images, whereas the matching method in the present invention must consider two visible-light images and one infrared image, finding the optimal matching among three images. (2) The resolutions of the processed images differ: a traditional binocular system generally uses two image sensors of equal resolution for the left and right views, whereas in the present invention the presence of the infrared imaging system means the resolutions of the visible-light and infrared images may be inconsistent. Owing to limitations of the manufacturing process, the image resolution of a typical infrared imaging system is lower than that of a visible-light imaging system. The registration process of the present invention must therefore handle the matching of three images of different resolutions.
2. Information fusion
The edges of an image formed by an infrared sensor are blurred to some extent, and its low spatial resolution can show only the rough outline of an object, whereas visible-light imaging shows clear object contours with rich detail and colour. Digital image data obtained from the same target by different modalities is therefore both complementary and highly redundant. How to extract more useful, more refined and higher-quality information from such data, to support human decision-making or artificial-intelligence decision systems, has become an urgent problem, and digital image fusion arose to solve it. Image fusion synthesizes multi-sensor, multi-spectral images (or image sequences) of a scene acquired at the same time (or at different times) into a new description of the scene, one that cannot be obtained from any single image. It combines the advantages of image data from different sources with different characteristics and compensates for the information missing from any single image, thereby expanding the range of application of the information and greatly improving the precision of image analysis. Image fusion is now widely used in fields such as medical image processing, anti-terrorism security inspection, national-defence testing, environmental monitoring and disaster alarm.
Traditional image fusion considers merging only two 2D images. In the present invention, the binocular stereo-vision imaging system produces a three-dimensional model of the real scene, while the infrared imaging system provides only a two-dimensional temperature representation of it. The fusion must therefore combine a high-resolution 3D geometric model of the scene with a low-resolution 2D infrared image to obtain the final high-resolution 4D representation of the scene.
Summary of the invention
In the present invention, the 3D stereoscopic information obtained by a binocular stereo-vision imaging system based on visible-light image sensors is fused with the 1D temperature information obtained by an infrared imaging system based on an infrared image sensor, yielding a method for acquiring the 4D information of a scene.
The solution of the present invention is as follows:
A method for acquiring 4D scene information comprises the following steps:
(1) A left-view visible-light imaging system, a right-view visible-light imaging system and an infrared imaging system are arranged to receive, respectively, the left-view visible-light image, the right-view visible-light image and the infrared image of the photographed scene;
(2) the left-view visible-light image, the right-view visible-light image and the infrared image are each pre-processed;
(3) feature points are extracted from the two visible-light images and the infrared image, feature-point matching is then performed, and all registration points between the visible-light images and the infrared image are established, completing the multi-resolution image registration;
(4) based on the known distance between the two visible-light imaging systems and their focal lengths, the depth map of the photographed scene is computed from the two visible-light images, giving the three-dimensional coordinate information of the scene; then, according to the registration points determined in step (3), an interpolation algorithm superimposes the temperature information of the infrared image onto the three-dimensional coordinate information of the scene, completing the multi-resolution information fusion and finally yielding the 4D information of the photographed scene.
In step (3), the SIFT algorithm may be used to extract feature points from the two visible-light images and the infrared image, and a global stereo matching algorithm based on image segmentation may be used for the feature-point matching.
In step (4), a bilinear interpolation algorithm may be used to superimpose the temperature information of the infrared image onto the three-dimensional coordinate information of the scene.
In step (2), the pre-processing of the infrared image mainly comprises non-uniformity correction and blind-element (dead-pixel) detection and compensation, and the pre-processing of the visible-light images is mainly based on a bilateral filtering algorithm for noise removal.
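As a toy illustration of how steps (3) and (4) combine, the sketch below attaches a temperature to every pixel that has a depth value. The function name `fuse_4d`, the nearest-neighbour sampling and the synthetic inputs are assumptions made for illustration only; the patent itself uses registration points and bilinear interpolation.

```python
import numpy as np

def fuse_4d(depth_map, ir_image, scale):
    """Toy fusion step: for each visible-resolution pixel with a finite
    depth, sample the lower-resolution infrared image (nearest neighbour
    here for brevity) to attach a temperature, yielding (x, y, z, T)."""
    H, W = depth_map.shape
    points = []
    for y in range(H):
        for x in range(W):
            z = depth_map[y, x]
            if not np.isfinite(z):
                continue  # no depth recovered for this pixel
            # map the visible-image pixel to the nearest infrared pixel
            iy = min(int(round(y * scale)), ir_image.shape[0] - 1)
            ix = min(int(round(x * scale)), ir_image.shape[1] - 1)
            points.append((x, y, float(z), float(ir_image[iy, ix])))
    return points
```

For example, a 2 × 2 depth map fused with a 1 × 1 infrared image at `scale=0.5` yields four (x, y, z, T) tuples that all share the single infrared temperature.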
The scheme of the present invention gives further consideration to image registration, information fusion and 4D display:
1. Global multi-resolution image registration
According to the optimization theory employed, traditional stereo matching algorithms can be divided into algorithms based on local constraints and algorithms based on global constraints. Locally constrained algorithms can be further divided into region-based, feature-based and phase-based stereo matching; globally constrained algorithms include stereo matching based on dynamic programming, simulated annealing, graph cuts and belief propagation.
From the standpoint of optimization theory, a locally constrained stereo matching algorithm chooses features independently and, following a matching strategy, makes matched points satisfy a similarity criterion or minimizes a cost function without a smoothness term, achieving only a local optimum. A globally constrained algorithm, by contrast, is based on Markov random field theory: taking high-level semantics such as the structural features of the image as matching primitives, it constructs an energy function as the cost function and, using the constraints that stereo matching must obey, solves the energy-minimization equation for the optimal solution, achieving the global optimum of the stereo matching. How to build a reasonable energy function, and how to solve the ill-posed energy-minimization problem, are the core of such algorithms. Because locally constrained algorithms reach only local optima, they cannot handle inclined planes, illumination changes, regions lacking texture, and occluded pixels well, and cannot produce a good disparity map; their advantages are low computational cost and easy implementation. Globally constrained stereo matching can reach a global optimum and effectively solve the above problems; optimization methods such as graph cuts and hierarchical algorithms have become the main research direction of stereo matching, their drawback being a larger computational cost.
In the present invention there are three images (two high-resolution visible-light images and one low-resolution infrared image), so locally constrained algorithms are of limited use and in many cases cannot produce a satisfactory match. In principle, any effective global matching method can be used in the present invention. The present invention therefore uses a globally constrained image matching algorithm based on graph cuts to match the three images and obtain a satisfactory matching result.
2. Information fusion based on multi-scale geometric analysis
After image registration, because a given point in the real scene carries different kinds of information, the information at every registration point must be fused to generate the final 4D information. Since the two visible-light images are registered, the depth of the point currently being processed can be computed from the camera-calibration results. Having obtained the depth of the point, the corresponding registration point in the infrared image is then looked up to obtain its temperature, completing the information fusion.
3. Display of the final fusion result
At present a computer can at most display real objects in three dimensions. In the present invention the 4D model is displayed with a numerical read-out, as shown in Fig. 4. Taking a teddy bear as an example, the method of the present invention builds a 4D model of the bear; the computer shows only its three-dimensional geometry, but when the mouse is placed on a point of the bear, the value of its temperature dimension is displayed numerically, where R denotes the red component, G the green component, B the blue component and T the temperature component.
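The read-out of Fig. 4 can be mimicked with a minimal sketch; the dictionary model and the function `probe_4d` below are assumptions for illustration, not the patent's display implementation.

```python
def probe_4d(model, x, y):
    """Return the (R, G, B, T) read-out stored at a picked pixel of a
    fused 4D model, here represented as a plain dict keyed by pixel."""
    r, g, b, t = model[(x, y)]
    return f"R={r} G={g} B={b} T={t:.1f}"
```

A call such as `probe_4d(model, 3, 4)` would print the colour components and the temperature of the picked point.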
Brief description of the drawings
Fig. 1 is the overall flowchart of the 4D imaging system of the present invention.
Fig. 2 is an example of multi-resolution image registration points.
Fig. 3 is an example of information fusion.
Fig. 4 is an example of the final 4D model.
Embodiment
The overall flowchart of the 4D imaging system of the present invention is shown in Fig. 1; the specific implementation of each step is as follows:
1. Infrared image pre-processing. With the steady progress of infrared focal plane array (IRFPA) sensor manufacturing, infrared imaging has developed rapidly and is widely used in military, industrial and commercial fields. Owing to the manufacturing technology, process and materials, the detector elements of an IRFPA sensor usually respond non-uniformly. The extreme manifestation of this non-uniformity is that, as the incident radiation changes, some elements always respond too strongly or too weakly, producing bright or dark spots that degrade the image; these are blind elements (also called invalid elements or dead pixels). Blind elements lower image quality and impair subsequent processing such as non-uniformity correction, image enhancement, and object detection and recognition. Correcting the non-uniformity of the infrared image and detecting and compensating its blind elements with advanced image processing techniques therefore effectively improves infrared image quality and lays the foundation for subsequent operations such as image registration [3].
The specific implementation steps are:
Step 1: Acquire 20 frames of a fixed infrared scene, compute the temporal noise of each detector element, and determine the adaptive threshold of a (2n+1) × (2n+1) moving window. Suppose the IRFPA output is X_f(i, j), the image size is M × N, and the temporal noise of a detector element is defined as σ_F(i, j). The adaptive threshold δ(i, j) of the local window is computed from the average temporal noise of all pixels in the window.
Step 2: Perform blind-element detection on two frames of different scenes as follows. Centred on each pixel, define a moving window P of size (2n+1) × (2n+1); find the maximum grey value MAX and the minimum grey value MIN in the window, and compute the sum S of all pixel responses in the window. Since the window may contain several blind elements whose responses are not necessarily equal, use the adaptive threshold δ(i, j) defined above: compare the grey value of each pixel in P with MAX and with MIN to within a range of δ(i, j); whenever they agree, subtract that pixel value from S and decrement the pixel count of the window by 1, finally obtaining the sum S′ of the remaining pixel responses and the remaining pixel count C.
If the remaining pixel count C = 0, the mean of all pixels in the moving window is
Save = S / (2n+1)²
If C ≠ 0, the mean of the remaining pixels in the moving window is
Save = S′ / C
Compute the percentage difference between the window centre pixel P(i, j) and the mean Save:
ΔP(i, j) = |P(i, j) − Save| / Save
Δ P (i, j) is compared with the threshold value T setting, if Δ P (i, j) is more than or equal to T, represents that this pixel is blind element, otherwise be normal pixel, and set corresponding zone bit.Two blind element matrixes are mated, determine final blind element position.
Step 3: Perform blind-element compensation on the image output by the infrared focal plane array. The compensation algorithm replaces the output of each blind element with the mean of the normal detector elements in its (2n+1) × (2n+1) moving window.
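The detection and compensation of steps 2 and 3 can be sketched as follows. This is a simplified, single-frame sketch under assumed parameters: a fixed relative threshold `rel_thresh` replaces the adaptive threshold δ(i, j), and border pixels are skipped; the names `detect_blind_pixels` and `compensate` are illustrative.

```python
import numpy as np

def detect_blind_pixels(img, win=1, rel_thresh=0.2):
    """Window-based blind-element test (simplified): flag a pixel whose
    relative deviation from the mean of the remaining (non-extreme)
    pixels in its (2*win+1)^2 window exceeds rel_thresh."""
    H, W = img.shape
    blind = np.zeros((H, W), dtype=bool)
    for y in range(win, H - win):
        for x in range(win, W - win):
            vals = img[y-win:y+win+1, x-win:x+win+1].astype(float).ravel()
            lo, hi = vals.min(), vals.max()
            rest = vals[(vals != lo) & (vals != hi)]   # drop MAX/MIN pixels
            save = rest.mean() if rest.size else vals.mean()
            p = float(img[y, x])
            if save != 0 and abs(p - save) / save >= rel_thresh:
                blind[y, x] = True
    return blind

def compensate(img, blind, win=1):
    """Replace each flagged pixel with the mean of the normal pixels
    in its moving window (the patent's step 3)."""
    out = img.astype(float).copy()
    H, W = img.shape
    for y, x in zip(*np.nonzero(blind)):
        y0, y1 = max(0, y - win), min(H, y + win + 1)
        x0, x1 = max(0, x - win), min(W, x + win + 1)
        patch, mask = out[y0:y1, x0:x1], ~blind[y0:y1, x0:x1]
        if mask.any():
            out[y, x] = patch[mask].mean()
    return out
```

On a uniform 100-valued frame with one stuck pixel at 200, the stuck pixel is flagged and replaced by the neighbourhood mean of 100.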
2. Visible-light image pre-processing. A digital visible-light imaging system is affected by noise during imaging, which degrades the captured image; under extreme conditions the noise can be severe enough to mask the target and render the image useless. Studies show that the noise in a real imaging system can be modelled as a mixture of salt-and-pepper noise, additive Gaussian white noise and multiplicative coloured noise, whereas typical denoising algorithms consider only Gaussian white noise, which is far from what is encountered in practice.
Image denoising algorithms fall into two classes: spatial-domain filtering and transform-domain filtering. Transform-domain filtering has been studied intensively in recent years, e.g. denoising based on the wavelet and Contourlet transforms; it achieves better denoising than spatial filtering, but at the cost of higher computational complexity, so it cannot be used where real-time processing is required. This work therefore focuses on spatial-domain filters that can run in real time. Early denoising algorithms filtered the image in the spatial domain with linear filters such as the mean filter and the spatial-domain Wiener filter, but linear spatial filters usually blur edge information noticeably. More recently, Tomasi et al. proposed non-linear spatial filters such as the bilateral filter and the median filter, which preserve edge information better than linear filters while removing noise, with simple algorithms and low computational complexity. In the classical bilateral filter, however, the two parameters controlling the filtering performance are constant throughout the filtering. A typical visible-light image consists of smooth regions, edge regions and texture regions; if the two bilateral-filter parameters could vary with the filtered region, adapting to the local characteristics of the image, better filtering should be obtained. Here it is intended to use the applicant's granted invention patent (title: a method for quickly removing mixed noise from images, patent No. ZL201010164555.1) to remove the noise present in the image.
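The classical (fixed-parameter) bilateral filter discussed above can be sketched as a brute-force implementation; the parameter values and the function name `bilateral_filter` are assumptions, and the adaptive variant of the patent ZL201010164555.1 is not reproduced here.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Brute-force bilateral filter: each output pixel is a weighted mean
    of its neighbours, with weights decaying with spatial distance
    (sigma_s) and intensity difference (sigma_r), preserving edges."""
    img = img.astype(float)
    H, W = img.shape
    pad = np.pad(img, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # fixed kernel
    out = np.empty_like(img)
    for y in range(H):
        for x in range(W):
            patch = pad[y:y + 2*radius + 1, x:x + 2*radius + 1]
            # range weight: penalize neighbours with different intensity
            rangew = np.exp(-((patch - img[y, x])**2) / (2 * sigma_r**2))
            w = spatial * rangew
            out[y, x] = (w * patch).sum() / w.sum()
    return out
```

On a step image the filter smooths each side while leaving the 0/100 edge essentially intact, which is the edge-preserving behaviour the text describes.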
3. Multi-resolution image registration
After the visible-light and infrared images have been pre-processed, image registration is performed to find their common regions. Feature points are first extracted from the visible-light and infrared images, and then matched. The main feature extraction methods are the classical Harris corner, proposed by C. Harris and M. J. Stephens and inspired by the autocorrelation function in signal processing, and the scale-invariant SIFT feature extraction method proposed by Lowe, based on scale space and modelled on the imaging law of the human retina. SIFT is stable under a variety of difficult conditions, and the present invention extracts feature points with a SIFT-based algorithm; once feature extraction is complete, stereo matching is carried out.
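For reference, the Harris corner response mentioned above can be sketched with plain numpy; the simple box window and the constant k = 0.04 are conventional assumptions, not choices made by the patent (which uses SIFT).

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor of image gradients smoothed over a 3x3 box window."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)            # vertical and horizontal gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):                          # 3x3 box smoothing
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy**2
    trace = Sxx + Syy
    return det - k * trace**2
```

The response is zero in flat regions, negative along edges, and clearly positive at corners, which is how corner candidates are selected by thresholding.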
Global stereo matching uses global constraints to resolve the mismatches caused by occlusion and repeated texture; its core is a correct definition of the scene model. The global matching problem is usually cast as an energy-minimization problem: an energy function is constructed, generally of the form E = E_data + E_smooth, where the data term E_data describes the matching quality and the smoothness term E_smooth embodies the constraints of the defined scene. Many algorithms can then find its extremum, such as dynamic programming (DP), belief propagation (BP), graph cuts (GC), simulated annealing (SA), scanline optimization, cooperative algorithms (CA) and reliability-based orthogonal dynamic programming (ORDP). Dynamic programming, belief propagation and graph cuts are the most common.
Dynamic programming algorithm:
Dynamic programming is a mathematical method for solving multi-stage decision problems: it decomposes a global optimization problem into a sequence of decisions, reducing its complexity. Multi-stage decision-making means decomposing a problem into several interrelated stages and making a decision at each stage so that some performance index of the whole process reaches its best value.
The basic idea of dynamic programming is to divide a problem into sub-problems arranged in a certain order; for a given state, each sub-problem is solved only once, and its answer is simply looked up when it is encountered again. Dynamic programming therefore applies only when the problem has an inherent order. For stereo matching, the ordering constraint along each scanline lets us regard the matching energy function as a minimum-cost path problem from the start to the end of the scanline. The cost of the optimal path is the sum of the costs of its sub-paths, and the matching cost of each point along a sub-path can be determined by a region correlation operator.
In matching, the correspondence between scanlines can be constructed in two ways: the first directly establishes the similarity between the left scanline and the right scanline (scanline-scanline mode); the second establishes the similarity of the left and right scanlines under different disparities (scanline-disparity mode).
Because dynamic programming finds the optimal match of each epipolar line independently, without considering the constraints between epipolar lines, inter-line constraints have been added to obtain the minimum of the energy function across lines. Ohta and Kanade incorporated inter-line constraints into stereo matching by minimizing a cost function defined over a two-dimensional region. Belhumeur first computes the disparity of each epipolar line by dynamic programming, then smooths the disparity between lines by holding the disparities of two outer epipolar lines fixed and obtaining the optimal solution for the lines in between by dynamic programming. Cox et al. added a two-dimensional constraint to dynamic programming by minimizing the number of disparity discontinuities in the horizontal and vertical directions. Birchfield and Tomasi take places with large gradient change as disparity-discontinuity boundaries, extend the disparity of reliable regions into unreliable regions, and add a vertical constraint. Kim et al. obtain the final disparity using disparity control points and two-pass dynamic programming.
Compared with other optimization methods, the advantage of dynamic programming is that it provides a global constraint for texture-poor regions prone to mismatching, solving the problem that all local energy values in such regions are too low under different disparities to allow matching. For occlusion, dynamic programming usually replaces the energy of the occluded part with a fixed value and then detects occlusion with a consistency constraint. Its drawback is that a mismatch can propagate along the epipolar line and corrupt other correct matches, so disparity maps obtained with dynamic programming often show streak artefacts.
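A minimal sketch of scanline dynamic programming in scanline-disparity mode follows; the absolute-difference data term, the linear smoothness penalty and the parameter values are illustrative assumptions, not the patent's chosen method (the patent prefers a segmentation-based global matcher).

```python
import numpy as np

def scanline_dp_disparity(left_row, right_row, max_disp=3, smooth=1.0):
    """Per-scanline DP: data cost |L[x] - R[x-d]| plus `smooth` times the
    disparity change between neighbouring pixels, minimized left to right
    over the (pixel, disparity) table, then recovered by backtracking."""
    L = np.asarray(left_row, dtype=float)
    R = np.asarray(right_row, dtype=float)
    W, D = len(L), max_disp + 1
    cost = np.full((W, D), np.inf)
    back = np.zeros((W, D), dtype=int)
    cost[0, 0] = abs(L[0] - R[0])        # only d = 0 is feasible at x = 0
    for x in range(1, W):
        for d in range(D):
            if x - d < 0:
                continue                  # R[x-d] would fall off the image
            data = abs(L[x] - R[x - d])
            trans = cost[x - 1] + smooth * np.abs(np.arange(D) - d)
            j = int(np.argmin(trans))
            cost[x, d] = data + trans[j]
            back[x, d] = j
    disp = np.zeros(W, dtype=int)
    disp[W - 1] = int(np.argmin(cost[W - 1]))
    for x in range(W - 1, 0, -1):         # backtrack the optimal path
        disp[x - 1] = back[x, disp[x]]
    return disp
```

On a synthetic scanline pair where the right row is the left row shifted by two pixels, the recovered disparity settles at 2 over the textured part, illustrating the minimum-cost-path view described above.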
Belief propagation algorithm:
Belief propagation was proposed by Pearl as early as 1988; since 1999 it has been widely used throughout computer vision to solve optimization problems on graph structures with loops, with good results. The algorithm converges to the optimal solution on loop-free graphs, but convergence to the optimum is not guaranteed on graphs with loops. Current research on this algorithm focuses on improving its efficiency.
Sun et al. applied belief propagation to stereo matching in 2003 with good results, and in 2005 added a visibility constraint to the algorithm to detect occlusion. Felzenszwalb et al. proposed hierarchical belief propagation, improving its speed in several ways. Yang et al. used hierarchical belief propagation to realize occlusion detection. Tappen and Freeman optimized Potts-model Markov random fields with the same parameters using graph cuts and belief propagation respectively, concluding that belief propagation gives smoother results and runs faster than graph cuts, but with higher energy, the two being comparable in effect.
Graph cut algorithm:
Both graph cuts and belief propagation are based on Markov random fields; they merely adopt different inference processes and different forms of the random field. The graph cut algorithm builds a directed or undirected graph, assigns a weight to each edge, and reasons on the graph in min-cut/max-flow form: the graph framework is constructed first, the matching costs computed by the energy function are assigned to the edges, and a minimum-energy cut yields the best disparity assignment. Belief propagation adopts a probabilistic formulation: on a standard Markov network, it seeks the minimum of the energy equation via maximum a posteriori probability. Its energy equation can be expressed in sum or product form; by iteratively passing the disparity information of a neighbourhood to adjacent pixels, the disparity is estimated by minimizing the energy function.
In summary, the present invention preferably uses the image-segmentation-based stereo matching method proposed by Tao et al. to register the infrared image with the two visible-light images. This method (see document [1]) rests on the assumption of smooth surfaces: within a single colour region, disparity does not change abruptly. With this assumption, the problem of finding the optimal disparity of each point in a traditional global algorithm becomes that of finding the optimal disparity template for each segmented region. Since an image contains a vast number of points while the numbers of segmented regions and templates are quite limited, the computational cost of the algorithm is greatly reduced, and the introduction of the segmentation constraint markedly improves matching precision in occluded and discontinuous regions.
4. Multi-resolution information fusion
After matching is complete, all registration points between the visible-light images and the infrared image are established. First, from the two visible-light grey-level images and information such as the known distance between the two imaging systems and their focal lengths [2], the depth map of the photographed scene can be computed, and the three-dimensional information of the scene obtained through three-dimensional reconstruction. How to superimpose the temperature information obtained by the infrared image sensor onto this three-dimensional scene is the problem this section solves.
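The depth computation above follows standard pinhole-stereo triangulation, Z = f·B/d; the sketch below assumes rectified images, a focal length in pixels and a baseline in metres, and is a generic illustration rather than the patent's calibrated pipeline.

```python
import numpy as np

def depth_from_disparity(disparity, focal_length_px, baseline_m):
    """Pinhole-stereo triangulation Z = f * B / d: f is the focal length
    in pixels, B the baseline between the two cameras in metres, d the
    disparity in pixels; zero disparity maps to infinite depth."""
    d = np.asarray(disparity, dtype=float)
    depth = np.full(d.shape, np.inf)
    valid = d > 0
    depth[valid] = focal_length_px * baseline_m / d[valid]
    return depth
```

For instance, with a 700-pixel focal length and the 6.5 cm human-eye baseline cited earlier, a disparity of 10 pixels corresponds to a depth of 4.55 m.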
After the image matching work is complete, each matched point can take its temperature directly from the corresponding location on the infrared image. However, because the resolution of the infrared image is generally lower than that of the visible images, the point on the infrared image corresponding to a given visible-image pixel may not fall on an integer pixel; interpolation is then required to obtain the corresponding temperature. Many interpolation methods could accomplish this in theory; the present invention illustrates one realization of the information fusion using bilinear interpolation, implemented as follows:
Let the high-resolution visible image be V and the corresponding low-resolution infrared image be I. A point V_{m,n} in the visible image corresponds to the point I_{m,n} in the infrared image, but because the resolution of the infrared image is limited, no temperature value is recorded at this location, so the temperature at I_{m,n} must be obtained by interpolation, where (i, j) denotes the integer pixel coordinates adjacent to (m, n). The interpolation formulas are:

First, linear interpolation along the x axis:

I'_{m,n} = (n - j) × I_{i+1,j} + (j - n + 1) × I_{i+1,j+1}
I''_{m,n} = (n - j) × I_{i,j} + (j - n + 1) × I_{i,j+1}

Then, linear interpolation along the y axis:

I_{m,n} = (m - i) × I'_{m,n} + (i - m + 1) × I''_{m,n}.
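The fusion step above can be sketched in a few lines of Python. This is a minimal illustration of bilinear interpolation written in its standard form, with the fractional offsets expressed as dx and 1 - dx; the function name and the border clamping are assumptions for the sake of a self-contained example, not details taken from the patent:

```python
import numpy as np

def interp_temperature(ir, x, y):
    """Bilinearly interpolate the low-resolution infrared image `ir` at the
    non-integer location (x, y) that corresponds to a visible-image pixel."""
    i, j = int(np.floor(y)), int(np.floor(x))   # integer pixel below (y, x)
    dy, dx = y - i, x - j                       # fractional offsets
    i1 = min(i + 1, ir.shape[0] - 1)            # clamp neighbours at the border
    j1 = min(j + 1, ir.shape[1] - 1)
    top = (1 - dx) * ir[i, j]  + dx * ir[i, j1]    # interpolate along x, row i
    bot = (1 - dx) * ir[i1, j] + dx * ir[i1, j1]   # interpolate along x, row i+1
    return (1 - dy) * top + dy * bot               # then interpolate along y
```

Evaluating this at every registered visible-image pixel attaches a temperature value to each reconstructed 3D point, completing the 4D (x, y, z, T) description of the scene.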
Claims (4)
1. A method for acquiring scene 4D information, comprising the following steps:
(1) providing a left-view visible-light imaging system, a right-view visible-light imaging system and an infrared imaging system, which correspondingly receive a left-view visible image, a right-view visible image and an infrared image of the captured scene;
(2) pre-processing the left-view visible image, the right-view visible image and the infrared image respectively;
(3) extracting feature points from the two visible images and the infrared image, then matching the feature points so as to establish all registration points between the visible images and the infrared image, thereby completing multi-resolution image registration;
(4) first computing, from the two visible images and the known distance and focal-length information of the two visible-light imaging systems, the depth map of the captured scene, thereby obtaining the three-dimensional coordinate information of the scene; then, according to the registration points determined in step (3), superimposing the temperature information of the infrared image onto the three-dimensional coordinate information of the scene by an interpolation algorithm, thereby completing multi-resolution information fusion and finally obtaining the 4D information of the captured scene.
2. The method for acquiring scene 4D information according to claim 1, characterized in that: in step (3), the SIFT feature-point extraction algorithm is used to extract feature points from the two visible images and the infrared image respectively, and a global stereo matching algorithm based on image segmentation is used to match the feature points of the two visible images and the infrared image.
3. The method for acquiring scene 4D information according to claim 2, characterized in that: in step (4), a bilinear interpolation algorithm is used to superimpose the temperature information of the infrared image onto the three-dimensional coordinate information of the scene.
4. The method for acquiring scene 4D information according to claim 3, characterized in that: in step (2), the pre-processing of the infrared image mainly comprises non-uniformity correction and blind-pixel detection and compensation, and the pre-processing of the visible images is mainly based on a bilateral filtering algorithm to remove noise.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410209953.9A CN104021548A (en) | 2014-05-16 | 2014-05-16 | Method for acquiring 4D scene information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104021548A true CN104021548A (en) | 2014-09-03 |
Family
ID=51438286
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410209953.9A Pending CN104021548A (en) | 2014-05-16 | 2014-05-16 | Method for acquiring 4D scene information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104021548A (en) |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104377836A (en) * | 2014-12-09 | 2015-02-25 | 国家电网公司 | Online monitoring and identification method and system for substation disconnecting link closed state |
CN104392448A (en) * | 2014-12-01 | 2015-03-04 | 四川大学 | Stereo matching method based on Gauss median segmentation guided filtering (GMSGF) |
CN104473717A (en) * | 2014-12-04 | 2015-04-01 | 上海交通大学 | Wearable guide apparatus for totally blind people |
CN104618709A (en) * | 2015-01-27 | 2015-05-13 | 天津大学 | Dual-binocular infrared and visible light fused stereo imaging system |
CN104835165A (en) * | 2015-05-12 | 2015-08-12 | 努比亚技术有限公司 | Image processing method and image processing device |
CN106570852A (en) * | 2016-11-07 | 2017-04-19 | 中国航空无线电电子研究所 | Real-time 3D image situation perception method |
CN108230397A (en) * | 2017-12-08 | 2018-06-29 | 深圳市商汤科技有限公司 | Multi-lens camera is demarcated and bearing calibration and device, equipment, program and medium |
CN108470327A (en) * | 2018-03-27 | 2018-08-31 | 成都西纬科技有限公司 | Image enchancing method, device, electronic equipment and storage medium |
CN108629731A (en) * | 2017-03-15 | 2018-10-09 | 长沙博为软件技术股份有限公司 | Image stitching method suitable for scrolling screenshots |
CN108759673A (en) * | 2018-06-20 | 2018-11-06 | 北京惠风联合防务科技有限公司 | Optical and infrared dual-mode composite video monitoring system for accurate 3D measurement |
CN109747566A (en) * | 2017-11-02 | 2019-05-14 | 郭宇铮 | Automatic night vision system |
CN110298872A (en) * | 2019-07-03 | 2019-10-01 | 云南电网有限责任公司电力科学研究院 | Registration method for an ultraviolet camera and a visible-light camera array |
CN110363806A (en) * | 2019-05-29 | 2019-10-22 | 中德(珠海)人工智能研究院有限公司 | Method for three-dimensional space modeling using invisible-light projection features |
CN110493587A (en) * | 2019-08-02 | 2019-11-22 | 深圳市灵明光子科技有限公司 | Image acquiring device and method, electronic equipment, computer readable storage medium |
CN110633682A (en) * | 2019-09-19 | 2019-12-31 | 合肥英睿***技术有限公司 | Infrared image anomaly monitoring method, device and equipment based on double-light fusion |
CN111426393A (en) * | 2020-04-07 | 2020-07-17 | 北京迈格威科技有限公司 | Temperature correction method, device and system |
CN111563559A (en) * | 2020-05-18 | 2020-08-21 | 国网浙江省电力有限公司检修分公司 | Imaging method, device, equipment and storage medium |
CN111798382A (en) * | 2020-05-27 | 2020-10-20 | 中汽数据有限公司 | Visual sensor denoising method based on Markov random field |
CN111915792A (en) * | 2020-05-19 | 2020-11-10 | 武汉卓目科技有限公司 | Method and device for identifying zebra crossing image-text |
CN112016478A (en) * | 2020-08-31 | 2020-12-01 | 中国电子科技集团公司第三研究所 | Complex scene identification method and system based on multispectral image fusion |
CN112577439A (en) * | 2020-12-03 | 2021-03-30 | 华中科技大学 | Microelectronic substrate warpage measurement method and system based on infrared and optical images |
CN113436129A (en) * | 2021-08-24 | 2021-09-24 | 南京微纳科技研究院有限公司 | Image fusion system, method, device, equipment and storage medium |
CN113902666A (en) * | 2021-12-13 | 2022-01-07 | 湖南警察学院 | Vehicle-mounted multiband stereoscopic vision sensing method, device, equipment and medium |
CN114143419A (en) * | 2020-09-04 | 2022-03-04 | 聚晶半导体股份有限公司 | Dual-sensor camera system and depth map calculation method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104021548A (en) | Method for acquiring 4D scene information | |
CN106355570B (en) | Binocular stereo vision matching method combining depth features | |
CN112634341B (en) | Method for constructing depth estimation model of multi-vision task cooperation | |
EP3673461B1 (en) | Systems and methods for hybrid depth regularization | |
WO2018000752A1 (en) | Monocular image depth estimation method based on multi-scale cnn and continuous crf | |
CN110364253B (en) | System and method for assisted patient positioning | |
Correal et al. | Automatic expert system for 3D terrain reconstruction based on stereo vision and histogram matching | |
CN103903246A (en) | Object detection method and device | |
Hervieu et al. | Stereoscopic image inpainting: distinct depth maps and images inpainting | |
CN111354077B (en) | Binocular vision-based three-dimensional face reconstruction method | |
CN104424640A (en) | Method and device for carrying out blurring processing on images | |
CN107560592A (en) | Precision ranging method for an optoelectronic tracker linked to a target | |
CN109949354B (en) | Light field depth information estimation method based on full convolution neural network | |
Yadav et al. | A review on image fusion methodologies and applications | |
CN111508013A (en) | Stereo matching method | |
Malik et al. | Application of passive techniques for three dimensional cameras | |
CN113313740A (en) | Disparity map and surface normal vector joint learning method based on plane continuity | |
CN111582437B (en) | Construction method of parallax regression depth neural network | |
CN110889868B (en) | Monocular image depth estimation method combining gradient and texture features | |
Kim et al. | Depth image filter for mixed and noisy pixel removal in RGB-D camera systems | |
CN108090920B (en) | Light field image depth stream estimation method | |
Jeong et al. | High‐quality stereo depth map generation using infrared pattern projection | |
CN113160210A (en) | Drainage pipeline defect detection method and device based on depth camera | |
Marto et al. | Structure from plenoptic imaging | |
Wang et al. | Self-supervised learning for RGB-guided depth enhancement by exploiting the dependency between RGB and depth |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20140903 |