CN201927079U - Rapid real-time integration processing system for visible image and infrared image - Google Patents

Rapid real-time integration processing system for visible image and infrared image

Info

Publication number
CN201927079U
CN201927079U (application CN2011200566285U / CN201120056628U)
Authority
CN
China
Prior art keywords
image
infrared
visible
sensor
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CN2011200566285U
Other languages
Chinese (zh)
Inventor
王滨海
王骞
陈西广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Intelligent Technology Co Ltd
Original Assignee
Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd filed Critical Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd
Priority to CN2011200566285U priority Critical patent/CN201927079U/en
Application granted granted Critical
Publication of CN201927079U publication Critical patent/CN201927079U/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The utility model relates to a rapid real-time fusion processing system for visible and infrared images. The system performs pixel-level real-time fusion of an infrared image obtained by an infrared thermal imager and a visible image obtained by a charge-coupled device (CCD) camera, provides the user with a fused image in which redundant information has been removed and complementary information has been integrated, and finally produces a consistent description and expression of the observed target with high clarity, high resolution and large information content. The system comprises at least one visible-light sensor and at least one infrared sensor; the sensors are connected with a data acquisition device, and the data acquisition device is connected with a computer.

Description

Rapid real-time fusion processing system for visible and infrared images
Technical field
The utility model relates to a scientific-experiment system for rapid image fusion processing, and specifically to a rapid real-time fusion processing system for visible and infrared images.
Background technology
When a typical multi-sensor device observes a target, the operator inevitably has to switch between the image channels, and during switching, problems such as optical-axis consistency and field-of-view consistency reduce the effectiveness of the observation. Designing a rapid real-time fusion processing system for visible and infrared images can therefore greatly improve observation efficiency and provide more effective support for decision-making.
In recent years, image fusion technology has received wide attention and research across many industries, particularly in the field of computer vision based on optical sensors, and industrial image-processing applications are increasingly driving research on fast fusion algorithms. Image fusion is a form of data fusion: the multi-source information obtained by multi-source sensors is matched under a given criterion and automatically synthesized into a more compact representation with larger information content. The main purpose is to facilitate later tasks such as decision-making and pattern recognition, so that both computers and human operators can make quick judgments.
Image fusion removes the redundant information in the multi-source image sequences obtained by multi-source sensors and combines the complementary information, finally forming a consistent description and expression of the observed target with high clarity, high resolution and large information content. In other words, image fusion can also be regarded as a form of information reduction. The reduced data, that is, the fused image, has high reliability, high image quality and large information content; it both removes unnecessary duplicate information and increases the information capacity of a single image, making the image information clearer and easier to recognize.
Image fusion has now been studied for more than twenty years, and in recent years, with the development of various sensors, image fusion technology has advanced rapidly. The development of information/image fusion was initially driven by military applications. The informatization of the US military, for example, is built on fusing various kinds of information for wartime judgment and decision-making, and most of this multi-source information comes from image data such as visible-light, infrared, low-light, laser and radar imagery. Multi-target detection and recognition, reconnaissance and surveillance, and navigation and guidance based on fused information are correspondingly more accurate. Non-military applications have also flourished over the past decade in high-level commercial production: in intelligent robotics, image fusion is an indispensable prerequisite for robot vision, and in civil aerial image processing, remote-sensing images likewise need to be fused. At present, image fusion has not yet established a complete theoretical framework, so fusion algorithms remain attempts within various data-processing frameworks. These data-processing methods include filtering methods such as Wiener filtering and Kalman filtering; probabilistic methods such as Bayesian maximum-likelihood methods; data-clustering methods based on various decision criteria; and intelligent computation methods such as approximate algorithms based on heuristics like ant-colony algorithms and genetic algorithms.
Visible and infrared images are the two most widely used classes of images in current applications. They express rich information about the observed target in two different spectral ranges. In imaging systems the two classes of sensors are therefore often installed side by side with their boresights aligned, so as to observe the target better. However, the two classes of images cannot be observed simultaneously and can only be switched between, which is inconvenient for the observer. How to fuse visible and infrared images quickly and in real time is therefore a difficult problem faced by multi-sensor imaging systems.
The utility model content
The purpose of the utility model is to address the above problems by providing a rapid real-time fusion processing system for visible and infrared images. The system performs pixel-level real-time fusion of the infrared image obtained from a thermal infrared imager and the visible image obtained from a CCD camera, provides the user with a fused image in which redundant information has been removed and complementary information has been integrated, and finally forms a consistent description and expression of the observed target with high clarity, high resolution and large information content.
To achieve the above purpose, the utility model adopts the following technical scheme:
A rapid real-time fusion processing system for visible and infrared images comprises at least one visible-light sensor and at least one infrared sensor; the sensors are connected with a data acquisition device, and the data acquisition device is connected with a computer.
The visible-light sensor and the infrared sensor are connected with the data acquisition device through a data bus.
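The description does not prescribe particular acquisition software; as a purely illustrative sketch, the following Python loop shows how such a sensor pair might be polled when both devices are exposed to the operating system as ordinary video sources. The device indices and the downstream fusion routine are hypothetical assumptions, not taken from the patent; real deployments typically go through the data acquisition device's vendor SDK instead.

```python
# Minimal acquisition sketch (hypothetical device indices 0 and 1).
import cv2

VISIBLE_DEV, INFRARED_DEV = 0, 1  # assumed device indices, not from the patent

def capture_pairs():
    """Yield synchronized (visible, infrared) frame pairs for the fusion pipeline."""
    vis_cap = cv2.VideoCapture(VISIBLE_DEV)
    ir_cap = cv2.VideoCapture(INFRARED_DEV)
    try:
        while True:
            ok_v, vis = vis_cap.read()
            ok_i, ir = ir_cap.read()
            if not (ok_v and ok_i):
                break
            yield vis, ir  # hand the pair to the fusion pipeline
    finally:
        vis_cap.release()
        ir_cap.release()
```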
In use, the utility model first reads in the visible-light and infrared video images, which are transferred into the computer through the transmission link and the data acquisition device. The processing software then performs fast matching using a sequence-image registration method based on image-shift parameters, yielding coarsely matched images to be fused at the corresponding positions. Pixel-level (bottom-layer) image fusion is then performed; its basic steps are as follows:
Step 1: pre-processing of the images to be fused. Pre-processing such as pre-filtering, noise reduction and enhancement is applied to obtain a high-quality sequence of images to be fused.
Step 2: correction of the images. The images to be fused first require accurate correction such as feature-based fine registration; according to the imaging principles of the multi-source sensors, corresponding transformation models are adopted for accurate adjustment.
Step 3: consistency sampling of the images. To make pixel-level fusion fast and effective, images from sensors with different resolutions are interpolated consistently to form images of the same size (see the sketch after this list).
Step 4: pixel-by-pixel fusion at the pixel level.
Step 5: post-processing of the fused image. Because the fused image often introduces some noise into the overall grey-level appearance, post-processing generally includes image denoising, image enhancement and image cropping.
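A minimal sketch of steps 1 and 3, assuming OpenCV and NumPy; the Gaussian kernel size and bicubic interpolation are illustrative choices, not values specified in the description.

```python
# Sketch of pre-filtering (step 1) and consistency sampling (step 3).
import cv2
import numpy as np

def preprocess_pair(vis_bgr, ir_gray):
    """Denoise both frames and resample the infrared frame to the visible frame's size."""
    vis = cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2GRAY)
    vis = cv2.GaussianBlur(vis, (3, 3), 0)          # step 1: pre-filter / noise reduction
    ir = cv2.GaussianBlur(ir_gray, (3, 3), 0)
    h, w = vis.shape
    ir = cv2.resize(ir, (w, h), interpolation=cv2.INTER_CUBIC)  # step 3: consistency sampling
    return vis.astype(np.float32), ir.astype(np.float32)
```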
Steps 3 to 5 are described in detail below. First, each image is decomposed by a multi-level wavelet transform to form pyramidal wavelet coefficients. The pyramidal wavelet coefficients obtained from the decomposition are then fused according to a chosen fusion algorithm, that is, the wavelet coefficients of the different frequency bands are fused separately, producing fused pyramidal wavelet coefficients with the corresponding block marks. Finally, the fused pyramidal wavelet coefficients are reconstructed by the inverse wavelet transform to form the fused image. If the fusion effect is insufficient, corresponding post-processing can be applied.
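The following is a hedged sketch of this decompose-fuse-reconstruct loop using PyWavelets with an ordinary separable wavelet in place of the Surfacelet/NDFB transform named below; the averaging and maximum-absolute-value rules are generic placeholders for the block-marking criteria discussed in the next paragraph. It assumes the two inputs have already been resampled to the same size.

```python
# Decompose both images, fuse the coefficients band by band, then reconstruct.
import numpy as np
import pywt

def wavelet_fuse(vis, ir, wavelet="db2", levels=3):
    cv_ = pywt.wavedec2(vis, wavelet, level=levels)   # [cA_n, (cH, cV, cD), ...]
    ci_ = pywt.wavedec2(ir, wavelet, level=levels)
    fused = [(cv_[0] + ci_[0]) / 2.0]                 # low-frequency band: average
    for dv, di in zip(cv_[1:], ci_[1:]):              # detail bands at each level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dv, di)))  # keep the stronger coefficient
    return pywt.waverec2(fused, wavelet)              # inverse transform -> fused image
```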
In the whole fusion workflow, the factors with the greatest influence on the final result are the choice of wavelet type, the choice of the number of pyramid decomposition levels and the choice of the fusion-criterion algorithm. The selection of the fusion criterion in particular has the largest effect on fusion quality; it is the core and key of the fusion process and directly determines the fusion speed and the quality of the result. The utility model does not adopt a traditional single-wavelet decomposition module; instead it adopts the Surfacelet transform based on N-dimensional directional filter banks (NDFB) as a second-generation wavelet transform to realize non-adaptive multi-scale geometric analysis and to perform directional decomposition of signals of any dimension. The fusion algorithm chosen for the low-frequency band largely determines the amount of information obtained in the whole image, so a good fusion scheme must make an optimal choice for the low-frequency band; experiments show that a low-frequency fusion algorithm using the gradient norm as the block-fusion mark fully accounts for the low-frequency characteristics of the image and is a preferable choice. For the high-frequency bands, the fusion algorithm must first consider the directional factors, and experiments confirm that using the energy value as the block-fusion mark for the high-frequency bands greatly improves the fineness of the image.
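An illustrative implementation of the two block-marking criteria, assuming NumPy: each block of a sub-band is taken from whichever source scores higher, with the gradient norm used for the low-frequency band and local energy for the high-frequency bands. The 8 x 8 block size is an assumption, not a value given in the text.

```python
import numpy as np

def _block_score(band, score_fn, block=8):
    """Fill each block of the output with the score of the corresponding input block."""
    h, w = band.shape
    out = np.zeros((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = band[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = score_fn(patch)
    return out

def grad_norm(patch):            # low-frequency criterion: mean gradient norm
    if min(patch.shape) < 2:     # guard for degenerate border blocks
        return 0.0
    gy, gx = np.gradient(patch.astype(np.float64))
    return np.sqrt(gx ** 2 + gy ** 2).mean()

def energy(patch):               # high-frequency criterion: mean local energy
    return np.mean(patch.astype(np.float64) ** 2)

def fuse_band(band_a, band_b, score_fn, block=8):
    """Block-wise selection; also returns the binary identification mask (1 = band_a)."""
    mask = _block_score(band_a, score_fn, block) >= _block_score(band_b, score_fn, block)
    return np.where(mask, band_a, band_b), mask
```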
In wavelet-based image fusion, the image is decomposed into wavelet coefficients divided into low-frequency and high-frequency components, and the blocking-effect regions, that is, the ghosting regions, must be identified from the low-frequency components. First, a fusion identification image is obtained from the low-frequency components according to the fusion rule. This identification image is binary: a pixel value of 1 means the low-frequency component of the left source image is selected, and 0 means the low-frequency component of the right source image is selected. After the identification image is obtained it must be filtered to delete cluttered regions while keeping the main signal regions; median filtering with a 3 x 3 template is used. The identification image is then filtered again with a larger template to obtain a background image; a template of 16 x 16 is used. The filtered identification image and the background image are then subtracted to obtain the image of the regions that still need filtering; these regions must be filtered to eliminate ghosting while fully preserving the regions that do not need filtering. The filtering should diffuse along the image edge directions so that the image edges are not degraded or blurred.
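A sketch of this decision-map clean-up, assuming SciPy: the binary identification image is median-filtered with a 3 x 3 window to remove clutter, filtered again with a 16 x 16 window to estimate the background map, and the difference between the two marks the regions that still need edge-preserving filtering to suppress ghosting. The edge-directed diffusion itself is not shown here.

```python
import numpy as np
from scipy.ndimage import median_filter

def clean_decision_map(decision):
    """decision: binary map, 1 = take left source, 0 = take right source."""
    decision = decision.astype(np.uint8)
    denoised = median_filter(decision, size=3)        # remove isolated clutter
    background = median_filter(denoised, size=16)     # coarse background map
    to_filter = denoised ^ background                 # regions needing ghost suppression
    return denoised, to_filter
```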
The beneficial effects of the utility model are as follows. It provides the observer with a richer and more intuitive target image, with the infrared features of the target superimposed on the visible-light features. It can register the visible and infrared videos, guaranteeing that the boresights of the video images are consistent and that the two are highly consistent in space. It can crop the visible and infrared video images consistently, guaranteeing that their fields of view are the same size. It can output the visible video image, the infrared video image and the fused video image digitally at the same time, and overlay and non-overlay display modes can be operated on the fused image. The utility model has a reasonable design, is easy to operate and gives accurate and objective results, and therefore has good application value.
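A minimal sketch of the overlay display mentioned above, assuming OpenCV: the infrared frame is colour-mapped and alpha-blended onto the visible frame, and setting the weight to zero reproduces the non-overlay view. The 0.4 weight and the JET colour map are illustrative defaults only, and both frames are assumed to be already registered, resized and 8-bit.

```python
import cv2

def overlay(vis_bgr, ir_gray, alpha=0.4):
    """Superimpose the colour-mapped infrared frame on the visible frame."""
    ir_color = cv2.applyColorMap(ir_gray, cv2.COLORMAP_JET)   # 8-bit IR -> pseudo-colour
    return cv2.addWeighted(vis_bgr, 1.0 - alpha, ir_color, alpha, 0)
```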
Description of drawings
Fig. 1 is a structural schematic diagram of the utility model.
Fig. 2 is a processing flow chart of the utility model.
In the figures: 1. visible-light sensor; 2. infrared sensor; 3. data bus; 4. data acquisition device; 5. computer.
Embodiment
The utility model is described further below in conjunction with the accompanying drawings and an embodiment.
As shown in Fig. 1, the system comprises at least one visible-light sensor 1 and at least one infrared sensor 2; they are connected with a data acquisition device 4 through a data bus 3, and the data acquisition device 4 is connected with a computer 5.
As shown in Fig. 2, the working steps of the utility model are as follows.
First, the corresponding visible-light image sequence and infrared image sequence to be fused are read into the fusion processing system. At the same time, the image pair is coarsely aligned according to the optical-axis adjustment input parameters to compensate for their offset. The visible and infrared images are then each denoised as pre-processing. The pre-processed images are then finely registered using a method based on feature corner points. The images are sent to the image fusion module for the Surfacelet transform and fusion, and the information obtained in the transform is used for filtering post-processing to eliminate fusion blocking effects. Finally, the fused image is output for post-processing or for observation uses such as pattern recognition.
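A hedged sketch of the corner-feature fine registration step, using ORB keypoints and a RANSAC homography from OpenCV as a stand-in for the corner-based method referred to above; cross-spectral visible/infrared matching in practice often needs gradient- or edge-based descriptors, so this is illustrative only and assumes both frames contain enough matchable features.

```python
import cv2
import numpy as np

def fine_register(ir_gray, vis_gray):
    """Warp the infrared frame into the visible frame's coordinates via matched corners."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(ir_gray, None)
    kp2, des2 = orb.detectAndCompute(vis_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = vis_gray.shape
    return cv2.warpPerspective(ir_gray, H, (w, h))    # IR resampled into the visible frame
```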
Apart from the technical features described in the specification, the remainder is technology known to those skilled in the art.

Claims (2)

1. A rapid real-time fusion processing system for visible and infrared images, characterized in that it comprises at least one visible-light sensor and at least one infrared sensor; the sensors are connected with a data acquisition device, and the data acquisition device is connected with a computer.
2. The rapid real-time fusion processing system for visible and infrared images as claimed in claim 1, characterized in that the visible-light sensor and the infrared sensor are connected with the data acquisition device through a data bus.
CN2011200566285U 2011-03-07 2011-03-07 Rapid real-time integration processing system for visible image and infrared image Expired - Lifetime CN201927079U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011200566285U CN201927079U (en) 2011-03-07 2011-03-07 Rapid real-time integration processing system for visible image and infrared image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011200566285U CN201927079U (en) 2011-03-07 2011-03-07 Rapid real-time integration processing system for visible image and infrared image

Publications (1)

Publication Number Publication Date
CN201927079U true CN201927079U (en) 2011-08-10

Family

ID=44430918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011200566285U Expired - Lifetime CN201927079U (en) 2011-03-07 2011-03-07 Rapid real-time integration processing system for visible image and infrared image

Country Status (1)

Country Link
CN (1) CN201927079U (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102436576A (en) * 2011-10-21 2012-05-02 洪涛 Multi-scale self-adaptive high-efficiency target image identification method based on multi-level structure
CN102855621A (en) * 2012-07-18 2013-01-02 中国科学院自动化研究所 Infrared and visible remote sensing image registration method based on salient region analysis
CN102800097A (en) * 2012-07-19 2012-11-28 中国科学院自动化研究所 Multi-feature multi-level visible light and infrared image high-precision registering method
CN102800097B (en) * 2012-07-19 2015-08-19 中国科学院自动化研究所 The visible ray of multi-feature multi-level and infrared image high registration accuracy method
CN104427245A (en) * 2013-08-20 2015-03-18 三星泰科威株式会社 Image fusion system and method
CN104427245B (en) * 2013-08-20 2019-04-19 韩华泰科株式会社 Image fusion system and method
CN105894009A (en) * 2014-05-08 2016-08-24 韩华泰科株式会社 IMAGE FUSING METHOD and apparatus
CN105894009B (en) * 2014-05-08 2020-05-19 韩华泰科株式会社 Image fusion method and image fusion device
CN105096285A (en) * 2014-05-23 2015-11-25 南京理工大学 Image fusion and target tracking system based on multi-core DSP
CN104796625A (en) * 2015-04-21 2015-07-22 努比亚技术有限公司 Picture synthesizing method and device
CN104795017A (en) * 2015-04-24 2015-07-22 深圳市虚拟现实科技有限公司 Display control method and head-mounted display equipment
CN105260990B (en) * 2015-09-18 2018-10-09 新疆医科大学第一附属医院 Contaminate the denoising method for infrared spectroscopy signals of making an uproar
CN105260990A (en) * 2015-09-18 2016-01-20 新疆医科大学第一附属医院 Denoising method of noisy infrared spectral signal
CN105510787A (en) * 2016-01-26 2016-04-20 国网上海市电力公司 Portable ultrasonic, infrared and ultraviolet detector based on image synthesis technology
CN106952245A (en) * 2017-03-07 2017-07-14 深圳职业技术学院 A kind of processing method and system for visible images of taking photo by plane
CN106952245B (en) * 2017-03-07 2018-04-10 深圳职业技术学院 A kind of processing method and system for visible images of taking photo by plane
CN106971385B (en) * 2017-03-30 2019-10-01 西安微电子技术研究所 A kind of aircraft Situation Awareness multi-source image real time integrating method and its device
CN106971385A (en) * 2017-03-30 2017-07-21 西安微电子技术研究所 A kind of aircraft Situation Awareness multi-source image real time integrating method and its device
CN109429018A (en) * 2017-08-23 2019-03-05 纬创资通股份有限公司 Image processing device and method
CN107782454A (en) * 2017-10-11 2018-03-09 广东电网有限责任公司佛山供电局 A kind of electric power thermal image analysis method of mobile device
CN108510455A (en) * 2018-03-27 2018-09-07 长春理工大学 A kind of laser irradiation device image interfusion method and system
CN108510455B (en) * 2018-03-27 2020-07-17 长春理工大学 Laser irradiator image fusion method and system
CN110392230A (en) * 2018-04-19 2019-10-29 广东电网有限责任公司 A kind of substation's unattended system

Similar Documents

Publication Publication Date Title
CN201927079U (en) Rapid real-time integration processing system for visible image and infrared image
CN105866790B (en) A kind of laser radar obstacle recognition method and system considering lasing intensity
CN102982518A (en) Fusion method of infrared image and visible light dynamic image and fusion device of infrared image and visible light dynamic image
CN101894364B (en) Image fusion method and device based on optical non-down sampling contourlet transform
CN108229440A (en) One kind is based on Multi-sensor Fusion indoor human body gesture recognition method
CN102005054A (en) Real-time infrared image target tracking method
Tan et al. Integrating Advanced Computer Vision and AI Algorithms for Autonomous Driving Systems
Liu et al. A novel multi-sensor fusion based object detection and recognition algorithm for intelligent assisted driving
CN101588480B (en) Multi-agent visual servo-coordination control method
CN103295221A (en) Water surface target motion detecting method simulating compound eye visual mechanism and polarization imaging
CN111681283A (en) Monocular stereoscopic vision-based relative pose calculation method applied to wireless charging alignment
Li et al. Judgment and optimization of video image recognition in obstacle detection in intelligent vehicle
Wang et al. [Retracted] Sensor‐Based Environmental Perception Technology for Intelligent Vehicles
CN114820733A (en) Interpretable thermal infrared visible light image registration method and system
Zhang et al. Visual Object Tracking Algorithm Based on Biological Visual Information Features and Few‐Shot Learning
Liu et al. Visual driving assistance system based on few-shot learning
Marnissi et al. Thermal image enhancement using generative adversarial network for pedestrian detection
CN114677531A (en) Water surface unmanned ship target detection and positioning method fusing multi-mode information
CN107147877A (en) FX night fog day condition all-weather colorful video imaging system and its construction method
Yue et al. Low-illumination traffic object detection using the saliency region of infrared image masking on infrared-visible fusion image
Tran et al. Adaptive active fusion of camera and single-point lidar for depth estimation
Schöller et al. Object detection at sea using ensemble methods across spectral ranges
CN107798854A (en) A kind of ammeter long-distance monitoring method based on image recognition
Shahista et al. Detection of the traffic light in challenging environmental conditions
Wei et al. Fast detection of moving objects based on sequential images processing

Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: STATE ELECTRIC NET CORP.

Effective date: 20130320

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20130320

Address after: 250002 Ji'nan City Central District, Shandong, No. 2 South Road, No. 500

Patentee after: Shandong Research Inst. of Electric Power

Patentee after: State Grid Corporation of China

Address before: 250002 Ji'nan City Central District, Shandong, No. 2 South Road, No. 500

Patentee before: Shandong Research Inst. of Electric Power

CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 250002 Ji'nan City Central District, Shandong, No. 2 South Road, No. 500

Co-patentee after: State Grid Corporation of China

Patentee after: Shandong Research Inst. of Electric Power

Address before: 250002 Ji'nan City Central District, Shandong, No. 2 South Road, No. 500

Co-patentee before: State Grid Corporation of China

Patentee before: Shandong Research Inst. of Electric Power

EE01 Entry into force of recordation of patent licensing contract

Assignee: National Network Intelligent Technology Co., Ltd.

Assignor: Shandong Research Inst. of Electric Power

Contract record no.: X2019370000007

Denomination of utility model: Rapid real-time integration processing system for visible image and infrared image

Granted publication date: 20110810

License type: Exclusive License

Record date: 20191014

EE01 Entry into force of recordation of patent licensing contract
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20201102

Address after: 250101 Electric Power Intelligent Robot Production Project 101 in Jinan City, Shandong Province, South of Feiyue Avenue and East of No. 26 Road (ICT Industrial Park)

Patentee after: National Network Intelligent Technology Co.,Ltd.

Address before: 250002 Ji'nan City Central District, Shandong, No. 2 South Road, No. 500

Patentee before: Shandong Electric Power Research Institute

Patentee before: STATE GRID CORPORATION OF CHINA

CX01 Expiry of patent term
CX01 Expiry of patent term

Granted publication date: 20110810

EC01 Cancellation of recordation of patent licensing contract
EC01 Cancellation of recordation of patent licensing contract

Assignee: National Network Intelligent Technology Co.,Ltd.

Assignor: Shandong Electric Power Research Institute

Contract record no.: X2019370000007

Date of cancellation: 20210324