WO2019042034A1 - Three-light fusion intelligent imager and method therefor - Google Patents

Three-light fusion intelligent imager and method therefor

Info

Publication number
WO2019042034A1
WO2019042034A1 (PCT/CN2018/096022)
Authority
WO
WIPO (PCT)
Prior art keywords
fusion
image data
imaging device
light
intelligent
Prior art date
Application number
PCT/CN2018/096022
Other languages
English (en)
French (fr)
Inventor
赵毅
谢小波
钱晨
刘宁
杨超
马新华
Original Assignee
江苏宇特光电科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 江苏宇特光电科技股份有限公司 filed Critical 江苏宇特光电科技股份有限公司
Publication of WO2019042034A1 publication Critical patent/WO2019042034A1/zh

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28Investigating the spectrum
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J5/00Radiation pyrometry, e.g. infrared or optical thermometry

Definitions

  • the present invention relates to the field of spectral imaging technology, and in particular, to a three-light fusion intelligent imager and a method thereof.
  • Intelligent detection imaging and multi-spectral imaging have great significance in the fields of community security, industrial production, fire safety, forest fire prevention, security inspection and explosion prevention.
  • a single-spectrum imaging scheme cannot accurately discover all of the hidden information in a scene, so potential threats cannot be responded to promptly and accurately.
  • taking community security as an example, most current community-security systems use a visible-light camera with near-infrared fill light, which provides basic day-and-night imaging; however, because nighttime ambient light is very weak, near-infrared fill light alone reveals only blurred outlines of objects in the scene and cannot resolve their detailed features.
  • likewise, traditional infrared thermal imaging can only identify abnormal heat sources, allowing potential disasters caused by overheating components to be prevented in advance, but it is powerless against the high-voltage arc discharge that occurs during power transmission.
  • the object of the present invention is to provide a three-light fusion intelligent imager capable of synchronously observing imaging features of three bands and combining various features for display, thereby improving observation efficiency.
  • the present invention provides a three-light fusion intelligent imager, including an intelligent imaging system and an imaging device; the imaging device communicates with an intelligent imaging system;
  • the imaging device is configured to acquire image data, and the imaging device includes a visible light imaging device, an infrared light imaging device, an ultraviolet light imaging device, and an ultrasonic distance measuring component;
  • the visible light imaging device is configured to acquire image data obtained by using visible light;
  • the infrared light imaging device is configured to collect image data obtained by using infrared light;
  • the ultraviolet light imaging device is configured to collect image data obtained by using ultraviolet light;
  • the ultrasonic ranging component is used to measure the distance of the observed target to the imaging device;
  • the intelligent imaging system is configured to fuse three-channel image data collected by the imaging device in various manners, and switch display between various fusion modes;
  • the intelligent imaging system includes a back-end processing system, a touch-screen control unit, and a WiFi unit;
  • the back-end processing system is configured to aggregate and schedule all image and video data collected by the imaging device, together with the control signals, communication data, and touch screen of the intelligent imaging system;
  • the back-end processing system includes an FPGA chip, an ARM chip, and a storage unit; the FPGA chip is configured to preprocess each channel of acquired band image data, first performing scene registration and geometric distortion correction on each band image data stream collected in real time, so that the three channels of band image data can be aligned pixel by pixel and jointly output the same scene information;
  • the ARM chip is configured to asynchronously isolate each corrected band image video stream using a data-stream bus scheduling architecture, unify the separate clock domains of the band image video streams into the same clock domain inside the ARM chip, and then use the fusion algorithm to place the target images to be fused on a pipeline architecture, extracting and sampling detail layers from the three band video streams frame by frame for fusion processing;
  • the storage unit is configured to buffer video data written into it frame by frame at high speed for access by other modules; on output, data is likewise read from the storage unit in parallel and, as required by the fusion algorithm, each frame of each spectrum is aligned pixel by pixel and then written to the bus concurrently at the same clock frequency;
  • the touch screen control unit is configured to display the image data of the fusion processing through the touch screen, and all the human-machine feedback information;
  • the WiFi unit is used to implement a remote transmission control function.
  • the storage unit includes a DDR memory chip, an EPCS serial storage chip, a FLASH chip, and a TF card;
  • the memory granules of the DDR memory chip are used to implement memory management and virtual display memory for the entire system; and
  • the EPCS serial storage chip is used to store the operating program of the whole system;
  • the FLASH chip is used to store logs and parameters generated during system operation, which facilitates later maintenance of the device by the operator;
  • the TF card is used to store in real time the scene photos and videos that the device needs to record during operation, for the operator to archive or replay.
  • the imaging device integrates a visible light imaging device, an infrared light imaging device, an ultraviolet light imaging device, and an ultrasonic distance measuring component.
  • the imager further includes an interface module comprising at least a USB interface, a LAN interface, a VGA interface, a TF card interface, and a Camera Link Base interface.
  • the intelligent imaging system is further connected to a plurality of auxiliary detecting devices, and the auxiliary detecting device comprises at least an infrared temperature measuring device and a two-dimensional code scanning device.
  • the imaging device is connected to the intelligent imaging system through an interface or a cable.
  • the invention also provides a three-light fusion intelligent imaging method, comprising the following steps:
  • Step S1 collecting three channels of band image data: the visible-light imaging device, infrared imaging device, and ultraviolet imaging device acquire image data of the same measured target, the ultrasonic ranging component measures the distance from the observed target to the imaging device, and the collected three channels of band image data together with the measured distance are transmitted to the FPGA chip;
  • Step S2 preprocessing: the FPGA chip sequentially performs scene registration and geometric distortion correction on each band image data stream collected in real time, so that the three channels of band image data can be aligned pixel by pixel and jointly output the same scene information;
  • Step S3 fusing the three channels of band image data: the corrected band image data of each channel is sent to the ARM chip, which adopts the data-stream bus scheduling architecture; FIFOs inside the ARM chip asynchronously isolate the input band image data and unify the clock domain of each channel into the same clock domain inside the ARM chip, and the memory granules of the DDR memory chip in the storage unit then buffer the video data, written in frame by frame at high speed, for access by other modules;
  • on output, the data is read from the DDR memory chip in parallel and, as required by the fusion algorithm, each frame of each spectrum is aligned pixel by pixel and written concurrently to the bus, after which the fusion algorithm fuses the image data;
  • the fusion algorithm uses an improved Laplacian pyramid as the layering rule, places the target images to be fused on a pipeline architecture, and extracts and samples detail layers from the three channels of band video image data frame by frame; the layered structure of the improved Laplacian pyramid has three levels;
  • the decision strategy at each level is to compare the three channels of band video image data pixel by pixel by absolute value and select the gray value with the strongest detail as the fusion result, then perform interpolated reconstruction in inverse-pyramid order until the base level is recovered from the apex of the pyramid, completing the fusion process of the whole algorithm;
  • Step S4 the display process: the fused data is displayed on the touch screen, helping the operator observe the target features of the various bands in the measured object.
  • the fusion strategies of the fusion algorithm include at least: a visible-light fusion strategy, an infrared fusion strategy, an ultraviolet fusion strategy, a visible-and-infrared fusion strategy, a visible-and-ultraviolet fusion strategy, an infrared-and-ultraviolet fusion strategy, and a three-light fusion strategy of visible, infrared, and ultraviolet light.
  • according to the fusion algorithm, the touch screen displays at least: visible-light image data, infrared image data, ultraviolet image data, fused visible-and-infrared image data, fused visible-and-ultraviolet image data, fused infrared-and-ultraviolet image data, and three-light fused image data of visible, infrared, and ultraviolet light.
  • the method further includes step S5: the ARM chip communicates with a host computer through the interface, and the host computer performs remote data exchange and firmware updates for the intelligent imaging system and the imaging device.
  • the invention integrates imaging devices for visible, infrared, and ultraviolet light into a single set of equipment, extracts and registers the same scene to be observed through a field-of-view matching correction algorithm, and then uses a high-performance fusion algorithm to fuse the image features of the three bands comprehensively and display them on the same display device (i.e. the touch screen); the operator can observe the target features of the various bands in the scene at a glance and can easily switch among the various fusion modes to suit the observation needs of the target features of different scenes, enabling timely responses and greatly improving observation efficiency.
  • the invention is also an innovative three-light fusion intelligent imager that realizes a conceptual innovation in multispectral fusion imaging and technically overcomes the challenge of high-speed concurrent real-time processing of big data, completing data processing, algorithm implementation, and internal and external control with only the FPGA chip; the whole system is light, compact, and low-power, and is currently the first of its kind on the market.
  • FIG. 1 is a structural block diagram of a three-light fusion intelligent imager of the present invention
  • FIG. 2 is a schematic structural view of a three-light fusion intelligent imager of the present invention.
  • FIG. 3 is a general flow chart of a three-light fusion intelligent imaging method of the present invention.
  • the Camera Link interface has three configurations, Base, Medium, and Full, which mainly address the amount of data to be transmitted and provide suitable configurations and connection methods for cameras of different speeds.
  • the present invention provides a three-light fusion intelligent imager, shown in FIGS. 1-2, comprising an intelligent imaging system 2 and an imaging device 1; the imaging device 1 communicates with the intelligent imaging system 2 and can be connected through an interface or a cable: preferably the two parts are plugged together into one body via a standard Camera Link Base interface connection, or they can be connected over a distance by cables of different lengths to facilitate deployment of the whole system.
  • the imaging device 1 is for acquiring image data, and the imaging device 1 includes a visible light imaging device 11, an infrared light imaging device 12, an ultraviolet light imaging device 13, and an ultrasonic distance measuring component 14.
  • the visible light imaging device 11 is used for acquiring image data obtained by using visible light
  • the infrared light imager 12 is for collecting image data obtained by using infrared light
  • the ultraviolet light imaging device 13 is for collecting image data obtained by using ultraviolet light
  • the ranging component 14 is for measuring the distance of the observed target to the imaging device 1.
  • the imaging device 1 may integrate the visible light imaging device 11, the infrared light imaging device 12, the ultraviolet light imaging device 13, and the ultrasonic distance measuring assembly 14.
  • the intelligent imaging system 2 is configured to fuse the three-way band image data collected by the imaging device 1 in various ways, and switch display between various fusion modes; the intelligent imaging system 2 includes a back-end processing system, a touch screen control unit 24, WiFi unit 23.
  • the back-end processing system is configured to aggregate and schedule all image and video data collected by the imaging device, together with the control signals, communication data, and touch screen of the intelligent imaging system.
  • the back-end processing system includes an FPGA chip 21, an ARM chip 22, and a storage unit; the FPGA chip 21 is configured to preprocess each channel of acquired band image data, first performing scene registration and geometric distortion correction on each band image data stream collected in real time, so that the three channels of band image data can be aligned pixel by pixel and jointly output the same scene information.
  • the ARM chip 22 is used to adopt a data-stream bus scheduling architecture (preferably a 4-level cache bus architecture; it should be noted that this preferred architecture is not intended to limit the scope of the present invention) to asynchronously isolate each corrected band image video stream.
  • the separate clock domains of the band image video streams are unified into the same clock domain inside the ARM chip, and the fusion algorithm then places the target images to be fused on a pipeline architecture, extracting and sampling detail layers from the three channels of band video images frame by frame for fusion processing.
  • the storage unit is used to buffer video data written into it frame by frame at high speed for access by other modules.
  • on output, the data is likewise read from the storage unit in parallel and, as required by the fusion algorithm, each frame of each spectrum is aligned pixel by pixel and then written to the bus concurrently at the same clock frequency.
  • the storage unit includes a DDR memory chip, an EPCS serial storage chip, a FLASH chip, and a TF card; the memory granules of the DDR memory chip implement memory management and virtual display memory for the whole system, and under the system's architecture up to 4 GB of memory and 512 MB of virtual display memory can be realized; the EPCS serial storage chip stores the operating program of the whole system; the FLASH chip stores logs and parameters generated during system operation, facilitating later maintenance of the device by the operator; and the TF card stores in real time the scene photos and videos that the device needs to record during operation, for the operator to archive or replay.
  • the touch screen control unit 24 is configured to display the image data of the fusion processing and all the human-machine feedback information through the touch screen.
  • the intelligent imaging system 2 uses a dual-core ARM chip working in cooperation with the FPGA chip, so a small operating system for human-machine interaction was specially designed and implemented inside the system, replacing the multi-physical-button control scheme of similar products.
  • under the traditional physical-button scheme, feedback from the system during human-machine interaction can only be given by superimposing menus on an external display, which largely blocks the on-screen video content.
  • this system instead adopts touch-screen control: all human-machine feedback is displayed directly on the touch screen and does not cover or block the video information on the display, making scene recognition more convenient for the operator.
  • the WiFi unit 23 is used to implement the remote transmission control function, i.e. remote control of the system from a host-computer client or a mobile-phone app.
  • a PWR power module is used to supply power to the entire system.
  • the power consumption of the whole system is below 6 W.
  • a 4000 mAh lithium battery guarantees 3.3 hours of continuous operation, which is convenient for long field sessions.
  • the interface module further includes a USB interface 251, a LAN interface 252, a VGA interface 253, a TF card interface 254, a Camera Link Base interface, and the like; the system connects to a host computer through the USB and LAN interfaces to enable intercommunication with the host computer.
  • the intelligent imaging system 2 is also connected to a plurality of auxiliary detecting devices.
  • the auxiliary detecting device includes at least an infrared temperature measuring device and a two-dimensional code scanning device, so that the integration degree, functionality and intelligence of the entire system are greatly improved.
  • the three-light fusion intelligent imager of the invention intelligently realizes data interconnection, system control scheduling and remote communication functions between components by means of an embedded operating system.
  • the touch screen control method replaces the control mode of the traditional physical buttons of such products, so that the industrial design of the whole system is more simple and clear, and the operation mode is extremely easy to use.
  • through an original multispectral image fusion algorithm and an optimized data-stream bus scheduling architecture (preferably a 4-level cache bus architecture), seven different image fusion modes across the three bands can be realized very conveniently (i.e. visible, infrared, ultraviolet, visible+infrared, visible+ultraviolet, infrared+ultraviolet, and the three-light fusion of visible+ultraviolet+infrared); users can easily switch among the various fusion modes to meet the observation needs of various scene target features.
  • the invention also provides a three-light fusion intelligent imaging method, as shown in FIG. 3, comprising the following steps:
  • Step S1 collecting three channels of band image data: the visible-light imaging device, infrared imaging device, and ultraviolet imaging device acquire image data of the same measured target, the ultrasonic ranging component measures the distance from the observed target to the imaging device, and the collected three channels of band image data together with the measured distance are transmitted to the FPGA chip.
  • Step S2 preprocessing: the FPGA chip sequentially performs scene registration and geometric distortion correction on each band image data stream collected in real time, so that the three channels of band image data can be aligned pixel by pixel and jointly output the same scene information.
  • Step S3 fusing the three channels of band image data: the corrected band image data of each channel is sent to the ARM chip; because the clock frequencies of the three channels of band image data differ, the fusion algorithm cannot operate on them directly, so the ARM chip adopts the data-stream bus scheduling architecture.
  • as the three channels of band image data arrive, FIFOs inside the ARM chip asynchronously isolate the input band image data and unify the clock domain of each channel into the same clock domain inside the ARM chip; the memory granules of the DDR memory chip in the storage unit then buffer the video data, written in frame by frame at high speed, for access by other modules; on output, the data is likewise read from the DDR memory chip in parallel and, as required by the fusion algorithm, each frame of each spectrum is aligned pixel by pixel and then written to the bus concurrently at the same clock frequency, after which the fusion algorithm fuses the image data.
  • the fusion algorithm uses an improved Laplacian pyramid as the layering rule, places the target images to be fused on a pipeline architecture, and extracts and samples detail layers from the three channels of band video image data frame by frame; the layered structure of the improved Laplacian pyramid has three levels.
  • the decision strategy at each level is to compare the three channels of band video image data pixel by pixel by absolute value and select the gray value with the strongest detail as the fusion result.
  • interpolated reconstruction then proceeds in inverse-pyramid order until the base level is recovered from the apex of the pyramid, completing the fusion process of the whole algorithm.
  • the fusion strategies of the fusion algorithm include at least: a visible-light fusion strategy, an infrared fusion strategy, an ultraviolet fusion strategy, a visible-and-infrared fusion strategy, a visible-and-ultraviolet fusion strategy, an infrared-and-ultraviolet fusion strategy, and a three-light fusion strategy of visible, infrared, and ultraviolet light.
  • Step S4 the display process: the fused data is displayed on the touch screen, helping the operator observe the target features of the various bands in the measured object.
  • according to the fusion algorithm, the touch screen displays at least: visible-light image data, infrared image data, ultraviolet image data, fused visible-and-infrared image data, fused visible-and-ultraviolet image data, fused infrared-and-ultraviolet image data, and three-light fused image data of visible, infrared, and ultraviolet light.
  • step S5: the ARM chip communicates with a host computer through the interface, and the host computer performs remote data exchange and firmware updates for the intelligent imaging system and the imaging device.
  • because the imaging devices of the three different bands are discrete devices, even when they are arranged according to a horizontal-optical-axis design, rotation and scaling of targets in the field of view inevitably arise structurally when the same scene is observed, and these problems cause mismatch in the fused image; the invention therefore uses the FPGA chip to preprocess the input images, i.e. to perform scene registration and geometric distortion correction on the image data streams collected in real time, so that the three channels of video images can be aligned pixel by pixel and jointly output the same scene information.
  • the same scene to be observed is extracted and registered by the field-of-view matching correction algorithm, and a high-performance fusion algorithm then fuses the image features of the three bands comprehensively and displays them on the same display device (i.e. the touch screen); the operator can observe the target features of the various bands in the scene at a glance and can easily switch among the various fusion modes to suit the observation needs of the target features of different scenes, enabling timely responses.
  • the invention realizes multi-spectrum joint imaging on a single set of equipment: a combined imaging device of visible, infrared, and ultraviolet light performs multispectral real-time synchronous detection and observation of targets in the scene, fundamentally solving the problems of cost, deployment difficulty, ease of use, and observation efficiency.

Landscapes

  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

A three-light fusion intelligent imager and a method therefor, comprising an intelligent imaging system (2) and an imaging device (1); the imaging device (1) communicates with the intelligent imaging system (2); the imaging device (1) is used to acquire image data, and the intelligent imaging system (2) is used to fuse the three channels of band image data acquired by the imaging device (1) in multiple ways and to switch the display among the various fusion modes; a back-end processing system is used to aggregate and schedule all image and video data acquired by the imaging device (1), together with the control signals, communication data, and touch screen of the intelligent imaging system (2). An innovation in multispectral fusion imaging is thereby realized, the technical challenge of high-speed concurrent real-time processing of big data is overcome, and data processing, algorithm implementation, and internal and external control are completed with only an FPGA chip; the whole system is light, compact, and low-power.

Description

Three-light fusion intelligent imager and method therefor

Technical field

The present invention relates to the field of spectral imaging technology, and in particular to a three-light fusion intelligent imager and a method therefor.

Background

Intelligent detection imaging and multispectral imaging are of great significance in fields such as community security, industrial production, fire safety, forest fire prevention, and security inspection and explosion prevention. In each of these domains, different targets to be detected usually exhibit different spectral signatures, so a single-spectrum imaging scheme cannot accurately discover all of the hidden information in a scene; potential threats therefore cannot be responded to promptly and accurately, leading to disasters. Taking community security as an example, most current community-security systems use a visible-light camera with near-infrared fill light, which provides basic day-and-night imaging; however, because nighttime ambient light is very weak, near-infrared fill light alone reveals only blurred outlines of objects in the scene and cannot resolve their detailed features. Likewise, in high-voltage power transmission, traditional infrared thermal imaging can only identify abnormal heat sources, allowing potential disasters caused by overheating components to be prevented in advance, but it is powerless against the high-voltage arc discharge that occurs during power transmission.

Traditional single-spectrum imaging mainly covers three bands: visible, infrared, and ultraviolet. The conventional equipment solution is to build and deploy a separate system for each single-spectrum imaging device. Under this product approach, multispectral observation of a target area requires multiple imaging devices, which greatly increases deployment cost, and the whole setup is bulky; a scheme in which multiple imaging devices are installed together is therefore difficult to carry out in practice, even though such a scheme is extremely important in real production and daily life.

In traditional multi-device installations, different spectral images are displayed in separate display windows, which inevitably inconveniences operators during actual observation. Because different spectral imaging devices are made from different materials with different processes, the focal-plane pixel size and pitch of each device differ considerably, so when each imaging device is fitted with a lens, the same scene is imaged with different fields of view on different devices. This makes it very inconvenient for operators to observe and locate the different spectral features and positions of the same target in the scene; operators can only identify observed targets by comparison, which greatly reduces observation efficiency.
Summary of the invention

The object of the present invention is to address at least one of the technical deficiencies described above.

To this end, an object of the present invention is to provide a three-light fusion intelligent imager capable of synchronously observing the imaging features of three bands and fusing the various features together for display, thereby improving observation efficiency.

To achieve the above object, the present invention provides a three-light fusion intelligent imager comprising an intelligent imaging system and an imaging device, the imaging device communicating with the intelligent imaging system.

The imaging device is used to acquire image data and comprises a visible-light imaging device, an infrared imaging device, an ultraviolet imaging device, and an ultrasonic ranging assembly.

The visible-light imaging device acquires image data obtained with visible light; the infrared imaging device acquires image data obtained with infrared light; the ultraviolet imaging device acquires image data obtained with ultraviolet light; and the ultrasonic ranging assembly measures the distance from the observed target to the imaging device.

The intelligent imaging system fuses the three channels of band image data acquired by the imaging device in multiple ways and switches the display among the various fusion modes; it comprises a back-end processing system, a touch-screen control unit, and a WiFi unit.

The back-end processing system aggregates and schedules all image and video data acquired by the imaging device, together with the control signals, communication data, and touch screen of the intelligent imaging system.

The back-end processing system comprises an FPGA chip, an ARM chip, and a storage unit. The FPGA chip preprocesses each channel of acquired band image data: it first performs scene registration and geometric distortion correction on each band image data stream acquired in real time, so that the three channels of band image data can be aligned pixel by pixel and jointly output the same scene information.

The ARM chip uses a data-stream bus scheduling architecture to asynchronously isolate each corrected band image video stream, unifies the separate clock domains of the band image video streams into the same clock domain inside the ARM chip, and then uses the fusion algorithm to place the target images to be fused on a pipeline architecture, extracting and sampling detail layers from the three band video streams frame by frame for fusion processing.

The storage unit buffers video data written into it frame by frame at high speed for access by other modules; on output, data is likewise first read out of the storage unit in parallel and, as required by the fusion algorithm, each frame of each spectrum is aligned pixel by pixel and then written to the bus concurrently at the same clock frequency.

The touch-screen control unit displays the fused image and video data, together with all human-machine feedback information, on the touch screen.

The WiFi unit implements the remote transmission control function.

Further, the storage unit comprises a DDR memory chip, an EPCS serial memory chip, a FLASH chip, and a TF card. The memory granules of the DDR memory chip implement memory management and virtual display memory for the whole system; the EPCS serial memory chip stores the operating program of the whole system; the FLASH chip stores logs and parameters generated during system operation, facilitating later maintenance of the device by operators; and the TF card stores in real time the scene photos and videos that the device needs to record during operation, for archiving or playback by operators.

Further, the imaging device integrates the visible-light imaging device, the infrared imaging device, the ultraviolet imaging device, and the ultrasonic ranging assembly into one body.

Further, the imager also comprises an interface module including at least a USB interface, a LAN interface, a VGA interface, a TF card interface, and a Camera Link Base interface.

Further, the intelligent imaging system is also connected to a plurality of auxiliary detection devices, which include at least an infrared temperature measurement device and a QR code scanning device.

Further, the imaging device is connected to the intelligent imaging system through an interface or a cable.
The present invention also provides a three-light fusion intelligent imaging method, comprising the following steps:

Step S1, acquiring three channels of band image data: the visible-light imaging device, infrared imaging device, and ultraviolet imaging device acquire image data of the same measured target, the ultrasonic ranging assembly measures the distance from the observed target to the imaging device, and the acquired three channels of band image data together with the measured distance are transmitted to the FPGA chip;

Step S2, preprocessing: the FPGA chip sequentially performs scene registration and geometric distortion correction on each band image data stream acquired in real time, so that the three channels of band image data can be aligned pixel by pixel and jointly output the same scene information;

Step S3, fusing the three channels of band image data: the corrected band image data of each channel is sent to the ARM chip, which adopts a data-stream bus scheduling architecture; as the three channels of band image data arrive, FIFOs inside the ARM chip asynchronously isolate the input band image data and unify the separate clock domains of each channel into the same clock domain inside the ARM chip; the memory granules of the DDR memory chip in the storage unit then buffer the video data, written in frame by frame at high speed, for access by other modules; on output, the data is likewise first read out of the DDR memory chip in parallel and, as required by the fusion algorithm, each frame of each spectrum is aligned pixel by pixel and then written to the bus concurrently at the same clock frequency, after which the fusion algorithm fuses the image data;

The fusion algorithm uses an improved Laplacian pyramid as its layering rule: the target images to be fused are placed on a pipeline architecture, and detail layers are extracted and sampled from the three channels of band video image data frame by frame. The layered structure of the improved Laplacian pyramid has three levels; the decision strategy at each level is to compare the three channels of band video image data pixel by pixel by absolute value and select the gray value with the strongest detail as the fusion result, after which interpolated reconstruction proceeds in inverse-pyramid order until the base level is recovered from the apex of the pyramid, completing the fusion process of the whole algorithm;

Step S4, display: the fused data is displayed on the touch screen, helping the operator observe the target features of the various bands in the measured target.

Further, in step S3, the fusion strategies of the fusion algorithm include at least: a visible-light fusion strategy, an infrared fusion strategy, an ultraviolet fusion strategy, a visible-and-infrared fusion strategy, a visible-and-ultraviolet fusion strategy, an infrared-and-ultraviolet fusion strategy, and a three-light fusion strategy of visible, infrared, and ultraviolet light.

Further, in step S4, according to the fusion algorithm the touch screen displays at least: visible-light image data, infrared image data, ultraviolet image data, fused visible-and-infrared image data, fused visible-and-ultraviolet image data, fused infrared-and-ultraviolet image data, and three-light fused image data of visible, infrared, and ultraviolet light.

Further, the method also comprises step S5: the ARM chip communicates with a host computer through the interface, and the host computer performs remote data exchange and firmware updates for the intelligent imaging system and the imaging device.
By integrating imaging devices for the three bands of visible, infrared, and ultraviolet light into a single piece of equipment, the present invention extracts and registers the same scene to be observed through a field-of-view matching correction algorithm, and then uses a high-performance fusion algorithm to fuse the image features of the three bands comprehensively and display them on the same display device (i.e. the touch screen). Operators can observe the target features of the various bands in the scene at a glance and can easily switch among the various fusion modes to suit the observation needs of the target features of different scenes, enabling timely responses and greatly improving observation efficiency.

The present invention is also an innovative three-light fusion intelligent imager: it realizes a conceptual innovation in multispectral fusion imaging and technically overcomes the challenge of high-speed concurrent real-time processing of big data, completing data processing, algorithm implementation, and internal and external control with only the FPGA chip; the whole system is light, compact, and low-power, and is currently the first of its kind on the market.

Additional aspects and advantages of the present invention will be given in part in the following description, will in part become apparent from it, or will be learned through practice of the invention.
Brief description of the drawings

The above and/or additional aspects and advantages of the present invention will become apparent and easy to understand from the description of the embodiments in conjunction with the following drawings, in which:

FIG. 1 is a structural block diagram of the three-light fusion intelligent imager of the present invention;

FIG. 2 is a structural schematic view of the three-light fusion intelligent imager of the present invention;

FIG. 3 is an overall flow chart of the three-light fusion intelligent imaging method of the present invention.

Detailed description of the embodiments

Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present invention, and are not to be construed as limiting it.
Camera Link的接口有三种配置Base、Medium、Full,主要是解决数据传输量的问题,这为不同速度的相机提供了适合的配置和连接方式。
本发明提供一种三光融合智能成像仪,参考附图1-2所示,包括智能成像***2、成像设备1;成像设备1与智能成像***2进行通信,可以通过接口或线缆连接,即优选用标准的Cameralink Base接口连接将两部分插接为一体,也可以通过不同长度的线缆将两部分远距离连接,方便整个***的铺设。
成像设备1用于采集图像数据,成像设备1包括可见光成像器件11、 红外光成像器件12、紫外光成像器件13、超声波测距组件14。
其中,可见光成像器件11用于采集利用可见光获得的图像数据;红外光成像器12件用于采集利用红外光获得的图像数据;紫外光成像器件13用于采集利用紫外光获得的图像数据;超声波测距组件14用于测量被观测目标到成像设备1的距离。对于同一被测目标采集时,成像设备1可以集可见光成像器件11、红外光成像器件12、紫外光成像器件13、超声波测距组件14于一体。
智能成像***2用于将成像设备1采集到的三路波段图像数据进行多种方式融合,并在各种融合方式之间切换显示;智能成像***2包括后端处理***、触摸屏控制单元24、WiFi单元23。
其中,后端处理***用于对成像设备采集到的所有图像视频数据和智能成像***中的控制信号、通信数据及触摸屏进行集总以及数据调度。
后端处理***包括FPGA芯片21、ARM芯片22、存储单元;FPGA芯片21用于对采集到的每路波段图像数据进行预处理,首先依次对实时采集到的每路波段图像数据流进行场景配准和几何畸变校正处理,令三路波段图像数据可以逐像素点对齐,并且共同输出相同的场景信息。
ARM芯片22用于采用数据流总线调度架构(优选4级Cache缓存总线架构,需要说明的是,上述优选架构并不是为了限制本发明的范围)对校正后的每路波段图像视频流进行异步隔离,将每路波段图像视频流各自的时钟域统一成ARM芯片内部的相同时钟域,再利用融合算法将待融合的目 标图像放在流水线架构上,依次对三路波段视频图像逐帧进行细节层提取及采样,进行融合处理。
存储单元用于将视频数据逐帧高速的写入其中进行缓存以供其他模块访问,在输出时同样先将数据从存储单元中并行读出,并根据融合算法的要求将每一个光谱的每帧图像逐像素点对齐后以相同的时钟频率并发写入总线。
存储单元包括DDR存储芯片、EPCS串行存储芯片、FLASH芯片、TF卡;DDR存储芯片内存颗粒用以实现整套***的内存管理及虚拟显存,根据本***的架构设计最大可以实现4G的内存存储及512M的虚拟显存;EPCS串行存储芯片用以存储整套***的运行程序;FLASH芯片用以存储***工作中的日志及参数,可以方便操作人员用于后期对设备进行维护,TF卡用于实时存储设备在工作过程中需要记录的场景照片及视频,供操作人员留档或复现之用。
The touch-screen control unit 24 displays the fused image and video data, as well as all human-machine feedback information, through the touch screen.
In addition, the intelligent imaging system 2 adopts a dual-core ARM chip working in concert with the FPGA chip, so a small operating system is specially designed and implemented inside the system for human-machine interaction, replacing the traditional control scheme of multiple physical buttons found in similar products. Moreover, under the traditional physical-button scheme, feedback from the system during interaction can only be given by overlaying menus on the external display, which largely blocks the on-screen video content. This system uses touch-screen control: all human-machine feedback is shown directly on the touch screen without covering or blocking the video information on the display, which makes scene recognition easier for the operator.
The WiFi unit 23 implements remote transmission and control, i.e., it enables remote control of the system from a host-computer client on a PC or from a mobile-phone app.
A PWR power module is also included, which supplies power to the whole system. The whole system consumes less than 6 W, and a 4000 mAh lithium battery can sustain 3.3 hours of continuous operation, which is convenient for long field sessions.
An interface module is also included, comprising at least a USB interface 251, a LAN interface 252, a VGA interface 253, a TF card interface 254, and a Camera Link Base interface. Through the USB and LAN interfaces it connects to a host computer, enabling communication with it.
The intelligent imaging system 2 is also connected to multiple auxiliary detection devices, including at least an infrared temperature-measurement device and a QR-code scanning device, which greatly improves the integration, functionality, and intelligence of the whole system.
The three-light fusion intelligent imager of the present invention intelligently implements data interconnection among the components, system control scheduling, and remote communication by way of an embedded operating system. Touch-screen control replaces the physical-button control of traditional products of this kind, making the industrial design of the whole system more concise and clear, and the operation very easy to learn. Through the original multispectral image fusion algorithm and the optimized data-stream bus scheduling architecture (preferably the four-level cache bus architecture), seven different image fusion modes across the three bands can be realized very conveniently (namely visible, infrared, ultraviolet, visible + infrared, visible + ultraviolet, infrared + ultraviolet, and the visible + ultraviolet + infrared three-light fusion mode), and the user can switch easily among them to suit the observation needs of different scene target features.
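The seven modes enumerated above amount to a lookup from mode name to the subset of bands fused. A hypothetical mode table (the mode names are illustrative, not the product's UI labels):

```python
# The 7 display modes named in the text, each mapped to the band subset
# the fusion algorithm must combine for that mode.
FUSION_MODES = {
    "visible":     {"visible"},
    "infrared":    {"infrared"},
    "ultraviolet": {"ultraviolet"},
    "vis+ir":      {"visible", "infrared"},
    "vis+uv":      {"visible", "ultraviolet"},
    "ir+uv":       {"infrared", "ultraviolet"},
    "tri-fusion":  {"visible", "infrared", "ultraviolet"},
}

def bands_for_mode(mode: str) -> set:
    """Return the bands the selected display mode requires."""
    return FUSION_MODES[mode]
```

Keeping the modes in one table is what makes switching among them a matter of re-dispatching the same fusion pipeline over a different band subset.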
The present invention also provides a three-light fusion intelligent imaging method, as shown in FIG. 3, including the following steps:
Step S1, acquire three band image data streams: use the visible-light imaging device, infrared imaging device, and ultraviolet imaging device to acquire image data of the same measured target, use the ultrasonic ranging assembly to measure the distance from the observed target to the imaging device, and transmit the acquired three band image data streams and the measured distance to the FPGA chip.
Step S2, preprocessing: the FPGA chip performs scene registration and geometric distortion correction in turn on each band image data stream acquired in real time, so that the three band image data streams can be aligned pixel by pixel and jointly output the same scene information.
Step S3, fuse the three band image data streams: each corrected band image data stream is sent to the ARM chip. Because the clock frequencies of the three band image data streams differ, the fusion algorithm cannot operate on them directly. The ARM chip uses the data-stream bus scheduling architecture: when the three band image data streams arrive, FIFOs inside the ARM chip asynchronously isolate the incoming band image data, unifying the clock domain of each band image data stream into the same clock domain inside the ARM chip. The memory granules of the DDR memory chip in the storage unit then buffer the video data, written into it frame by frame at high speed, for other modules to access. On output, the data are likewise first read out of the DDR memory chip in parallel and, as required by the fusion algorithm, each frame of each spectrum is aligned pixel by pixel and written concurrently onto the bus at the same clock frequency, after which the fusion algorithm fuses the image data.
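Outside the ARM/FPGA context, the clock-domain unification step can be pictured in software as one FIFO per band from which a frame is popped only when all three FIFOs hold data, so downstream fusion always sees a triple of frames. A hypothetical sketch of that discipline (not the patent's hardware FIFOs):

```python
from collections import deque

class FrameSynchronizer:
    """Software analogue of the per-band FIFOs: buffers frames arriving
    at different rates and releases them only as aligned triples."""

    BANDS = ("visible", "infrared", "ultraviolet")

    def __init__(self, depth: int = 4):
        # Bounded queues, like hardware FIFOs: old frames are dropped
        # rather than letting a slow consumer back up a fast producer.
        self.fifos = {b: deque(maxlen=depth) for b in self.BANDS}

    def push(self, band: str, frame) -> None:
        self.fifos[band].append(frame)

    def pop_aligned(self):
        """Return one frame per band, or None until every FIFO has data."""
        if all(self.fifos[b] for b in self.BANDS):
            return tuple(self.fifos[b].popleft() for b in self.BANDS)
        return None
```

The design choice mirrors the text: no band is fused against a stale or missing partner, and each band keeps its own arrival clock until the pop point, where all three are re-timed together.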
The fusion algorithm uses a modified Laplacian pyramid as the layering rule: the target images to be fused are placed on a pipeline architecture, and detail layers are extracted and sampled frame by frame from the three band video image streams in turn. The layered structure of the modified Laplacian pyramid has three levels. The decision strategy at each level is to compare the absolute values of the three band video image data pixel by pixel, select the grayscale value with the strongest detail as the fusion result, and perform interpolation and reconstruction in inverse-pyramid order; when the base level is recovered from the pyramid's apex level, the fusion process of the whole algorithm is complete.
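The text fixes a three-level pyramid and per-pixel max-absolute-value detail selection, but not the sampling kernels or the base-layer rule. The NumPy sketch below therefore substitutes 2x2 average pooling for the low-pass/decimate step, nearest-neighbour repetition for the interpolation step, and a plain average for the base layer; those three choices are assumptions, while the selection and reconstruction logic follow the description:

```python
import numpy as np

def down(img):
    # 2x2 average pooling stands in for the pyramid's low-pass + decimate.
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):
    # Nearest-neighbour repetition stands in for the interpolation step.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fuse_three_band(bands, levels=3):
    """Laplacian-pyramid fusion of pre-registered, same-size band images.

    At each level, the pixel whose detail coefficient has the largest
    absolute value wins; reconstruction then walks back from apex to base.
    """
    details, bases = [], list(bands)
    for _ in range(levels):                 # decompose: detail = img - up(down(img))
        smalls = [down(b) for b in bases]
        details.append([b - up(s) for b, s in zip(bases, smalls)])
        bases = smalls
    fused = np.mean(bases, axis=0)          # base layer: simple average (an assumption)
    for layer in reversed(details):         # reconstruct in inverse-pyramid order
        stack = np.stack(layer)
        pick = np.abs(stack).argmax(axis=0)          # strongest detail per pixel
        detail = np.take_along_axis(stack, pick[None], axis=0)[0]
        fused = up(fused) + detail
    return fused
```

Image height and width must be divisible by 2**levels; a sanity property of any such pyramid is that fusing three identical images reconstructs the input.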
The fusion strategies of the fusion algorithm include at least: a visible-light fusion strategy, an infrared fusion strategy, an ultraviolet fusion strategy, a visible-infrared fusion strategy, a visible-ultraviolet fusion strategy, an infrared-ultraviolet fusion strategy, and a visible-infrared-ultraviolet three-light fusion strategy.
Step S4, display: the fused data are shown on the touch screen, helping the operator observe the target features of each band in the measured target.
According to the fusion algorithm, the touch screen displays at least: visible-light image data, infrared image data, ultraviolet image data, fused visible-infrared image data, fused visible-ultraviolet image data, fused infrared-ultraviolet image data, and tri-fused visible-infrared-ultraviolet image data.
Step S5: the ARM chip communicates with the host computer through the interface, and the host computer performs remote data exchange and firmware updates for the intelligent imaging system and the imaging device.
Because the imaging devices of the three different bands are discrete components, even when they are arranged with parallel optical axes, rotation and scaling of the target in the field of view inevitably arise structurally when the same scene is observed, and these problems cause misregistration in the fused image. Therefore, the present invention uses the FPGA chip to preprocess the input images, i.e., to perform scene registration and geometric distortion correction on the image data streams acquired in real time, so that the three video image streams can be aligned pixel by pixel and jointly output the same scene information.
By integrating imaging devices for the visible, infrared, and ultraviolet bands into a single piece of equipment, the invention extracts and registers the same observed scene through a field-of-view matching and correction algorithm, then fuses the image features of the three bands comprehensively with a high-performance fusion algorithm and presents the result on a single display device (the touch screen). The operator can observe the target features of each band in the scene at a glance, and can switch easily among the fusion modes to suit the observation needs of different scene target features and respond promptly.
The present invention achieves joint imaging of multiple spectra on a single piece of equipment, i.e., a device that performs combined imaging of the visible, infrared, and ultraviolet bands simultaneously for multispectral, real-time, synchronous detection and observation of targets in a scene, fundamentally solving the problems of cost, deployment difficulty, ease of operation, and observation efficiency.
Although embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; a person of ordinary skill in the art may change, modify, replace, and vary the above embodiments within the scope of the present invention without departing from its principle and purpose. The scope of the present invention is defined by the appended claims and their equivalents.

Claims (10)

  1. A three-light fusion intelligent imager, characterized by comprising: an intelligent imaging system and an imaging device, the imaging device communicating with the intelligent imaging system;
    the imaging device is used to acquire image data and comprises a visible-light imaging device, an infrared imaging device, an ultraviolet imaging device, and an ultrasonic ranging assembly;
    the visible-light imaging device is used to acquire image data obtained with visible light; the infrared imaging device is used to acquire image data obtained with infrared light; the ultraviolet imaging device is used to acquire image data obtained with ultraviolet light; the ultrasonic ranging assembly is used to measure the distance from the observed target to the imaging device;
    the intelligent imaging system is used to fuse the three band image data streams acquired by the imaging device in multiple ways and to switch the display among the fusion modes; the intelligent imaging system comprises a back-end processing system, a touch-screen control unit, and a WiFi unit;
    the back-end processing system is used to aggregate and schedule all image and video data acquired by the imaging device, together with the control signals, communication data, and touch screen of the intelligent imaging system;
    the back-end processing system comprises an FPGA chip, an ARM chip, and a storage unit; the FPGA chip is used to preprocess each acquired band image data stream, first performing scene registration and geometric distortion correction in turn on each band image data stream acquired in real time, so that the three band image data streams can be aligned pixel by pixel and jointly output the same scene information;
    the ARM chip is used to asynchronously isolate each corrected band image video stream using a data-stream bus scheduling architecture, unifying the clock domain of each band image video stream into the same clock domain inside the ARM chip, and then to place the target images to be fused on a pipeline architecture using the fusion algorithm, extracting and sampling detail layers frame by frame from the three band video images in turn for fusion processing;
    the storage unit is used to buffer video data written into it frame by frame at high speed for other modules to access; on output, the data are likewise first read out of the storage unit in parallel and, as required by the fusion algorithm, each frame of each spectrum is aligned pixel by pixel and then written concurrently onto the bus at the same clock frequency;
    the touch-screen control unit is used to display the fused image and video data, as well as all human-machine feedback information, through the touch screen;
    the WiFi unit is used to implement remote transmission and control.
  2. The three-light fusion intelligent imager of claim 1, wherein the storage unit comprises a DDR memory chip, an EPCS serial memory chip, a FLASH chip, and a TF card; the memory granules of the DDR memory chip implement memory management and virtual display memory for the whole system; the EPCS serial memory chip stores the running program of the whole system; the FLASH chip stores logs and parameters generated during system operation, which is convenient for operators maintaining the equipment later; and the TF card stores, in real time, the scene photos and videos that need to be recorded while the equipment is working, for the operator's archiving or playback.
  3. The three-light fusion intelligent imager of claim 1, wherein the imaging device integrates the visible-light imaging device, infrared imaging device, ultraviolet imaging device, and ultrasonic ranging assembly into one unit.
  4. The three-light fusion intelligent imager of claim 1, further comprising an interface module, the interface module comprising at least a USB interface, a LAN interface, a VGA interface, a TF card interface, and a Camera Link Base interface.
  5. The three-light fusion intelligent imager of claim 1, wherein the intelligent imaging system is further connected to multiple auxiliary detection devices, the auxiliary detection devices comprising at least an infrared temperature-measurement device and a QR-code scanning device.
  6. The three-light fusion intelligent imager of claim 1, wherein the imaging device and the intelligent imaging system are connected through an interface or a cable.
  7. A three-light fusion intelligent imaging method, characterized by comprising the following steps:
    step S1, acquiring three band image data streams: using the visible-light imaging device, infrared imaging device, and ultraviolet imaging device to acquire image data of the same measured target, using the ultrasonic ranging assembly to measure the distance from the observed target to the imaging device, and transmitting the acquired three band image data streams and the measured distance to the FPGA chip;
    step S2, preprocessing: the FPGA chip performs scene registration and geometric distortion correction in turn on each band image data stream acquired in real time, so that the three band image data streams can be aligned pixel by pixel and jointly output the same scene information;
    step S3, fusing the three band image data streams: each corrected band image data stream is sent to the ARM chip; the ARM chip uses a data-stream bus scheduling architecture, and when the three band image data streams arrive, FIFOs inside the ARM chip asynchronously isolate the incoming band image data, unifying the clock domain of each band image data stream into the same clock domain inside the ARM chip; the memory granules of the DDR memory chip in the storage unit then buffer the video data, written into it frame by frame at high speed, for other modules to access; on output, the data are likewise first read out of the DDR memory chip in parallel and, as required by the fusion algorithm, each frame of each spectrum is aligned pixel by pixel and written concurrently onto the bus at the same clock frequency, after which the fusion algorithm fuses the image data;
    the fusion algorithm uses a modified Laplacian pyramid as the layering rule: the target images to be fused are placed on a pipeline architecture, and detail layers are extracted and sampled frame by frame from the three band video image streams in turn; the layered structure of the modified Laplacian pyramid has three levels, and the decision strategy at each level is to compare the absolute values of the three band video image data pixel by pixel, select the grayscale value with the strongest detail as the fusion result, and perform interpolation and reconstruction in inverse-pyramid order; when the base level is recovered from the pyramid's apex level, the fusion process of the whole algorithm is complete;
    step S4, display: the fused data are shown on the touch screen, helping the operator observe the target features of each band in the measured target.
  8. The three-light fusion intelligent imaging method of claim 7, wherein in step S3 the fusion strategies of the fusion algorithm comprise at least: a visible-light fusion strategy, an infrared fusion strategy, an ultraviolet fusion strategy, a visible-infrared fusion strategy, a visible-ultraviolet fusion strategy, an infrared-ultraviolet fusion strategy, and a visible-infrared-ultraviolet three-light fusion strategy.
  9. The three-light fusion intelligent imaging method of claim 7, wherein in step S4 the touch screen displays, according to the fusion algorithm, at least: visible-light image data, infrared image data, ultraviolet image data, fused visible-infrared image data, fused visible-ultraviolet image data, fused infrared-ultraviolet image data, and tri-fused visible-infrared-ultraviolet image data.
  10. The three-light fusion intelligent imaging method of claim 7, further comprising step S5: the ARM chip communicates with a host computer through an interface, and the host computer performs remote data exchange and firmware updates for the intelligent imaging system and the imaging device.
PCT/CN2018/096022 2017-08-31 2018-07-17 Three-light fusion intelligent imager and method thereof WO2019042034A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710770931.3 2017-08-31
CN201710770931.3A CN107607202B (zh) 2017-08-31 2017-08-31 Three-light fusion intelligent imager




Also Published As

Publication number Publication date
CN107607202B (zh) 2021-05-11
CN107607202A (zh) 2018-01-19


Legal Events

121: EP — the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 18852094; Country of ref document: EP; Kind code of ref document: A1)
NENP: non-entry into the national phase (Ref country code: DE)
122: EP — PCT application non-entry in European phase (Ref document number: 18852094; Country of ref document: EP; Kind code of ref document: A1)