CN107607202B - Three-light fusion intelligent imager - Google Patents
- Publication number
- CN107607202B (application CN201710770931.3A)
- Authority
- CN
- China
- Prior art keywords
- image data
- fusion
- imaging device
- chip
- light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/28—Investigating the spectrum
- G01J5/00—Radiation pyrometry, e.g. infrared or optical thermometry
Abstract
The invention provides a three-light fusion intelligent imager comprising an intelligent imaging system and an imaging device that communicates with it. The imaging device acquires image data in three bands (visible, infrared, and ultraviolet light), and the intelligent imaging system fuses the three bands of image data in multiple modes and switches the display among the various fusion modes. A back-end processing system integrates and schedules all image and video data acquired by the imaging device, together with the control signals, communication data, and touch-screen data inside the intelligent imaging system. The invention realizes a conceptual innovation in multispectral fusion imaging, technically overcomes the difficulty of concurrent real-time processing of high-speed big data, and completes data processing, algorithm implementation, and internal and external control with the FPGA chip alone; the whole system is light, compact, and low in power consumption.
Description
Technical Field
The invention relates to the technical field of spectral imaging, in particular to a three-light fusion intelligent imager.
Background
Intelligent detection imaging and multispectral imaging are of great significance in fields such as community security, industrial production, fire safety, forest fire prevention, and security inspection and explosion prevention. In each application field, different targets to be detected usually have different spectral characteristics, so a single-spectrum imaging scheme cannot accurately reveal all hidden information in a scene; potential threats therefore cannot be responded to promptly and accurately, and disasters may result. Taking community security as an example, most current systems adopt a visible-light camera with near-infrared supplementary illumination. This achieves preliminary day-and-night imaging, but because ambient light at night is very weak, near-infrared illumination alone can only vaguely identify scene outlines and cannot distinguish fine scene features. Similarly, in the field of high-voltage power transmission, traditional infrared thermal imaging can only distinguish abnormal heat sources: it can give advance warning of potential disasters caused by abnormally heated components, but it cannot observe the high-voltage arc discharge produced during transmission.
Traditional single-spectrum imaging mainly covers three bands: visible light, infrared light, and ultraviolet light. The usual equipment solution is to deploy a separate system for each single-spectrum imaging device. Under this product approach, multispectral observation of a target area requires installing several imaging devices, which greatly increases deployment cost and occupies a large volume. A scheme of jointly erecting multiple imaging devices is therefore difficult to implement, even though such a capability is extremely important in actual production and daily life.
In the traditional multi-device joint-erection scheme, different display windows show the different spectral images separately, which inevitably inconveniences operators during actual observation. Because different spectral imaging devices use different materials and processes, the focal-plane pixel sizes and pitches of the devices differ greatly; even when each imaging device is fitted with its own lens, the same scene is imaged with different fields of view on different devices. This makes it very inconvenient for operators to observe and locate the different spectral characteristics and positions of the same target in the scene: the operator can only identify the observed target by comparison, which greatly reduces observation efficiency.
Disclosure of Invention
The object of the present invention is to solve at least one of the technical drawbacks mentioned.
Therefore, the invention aims to provide a three-light fusion intelligent imager which can synchronously observe imaging characteristics of three wave bands, fuse various characteristics together for display and improve the observation efficiency.
In order to achieve the above object, the present invention provides a three-light fusion intelligent imager, which comprises an intelligent imaging system and an imaging device, the imaging device being in communication with the intelligent imaging system;
the imaging device is used for acquiring image data and comprises a visible light imaging device, an infrared light imaging device, an ultraviolet light imaging device and an ultrasonic ranging assembly;
the visible light imaging device is used for acquiring image data obtained by using visible light; the infrared light imaging device is used for acquiring image data obtained by using infrared light; the ultraviolet light imaging device is used for acquiring image data obtained by using ultraviolet light; the ultrasonic ranging assembly is used for measuring the distance from an observed target to the imaging equipment;
the intelligent imaging system is used for fusing the three bands of image data acquired by the imaging equipment in multiple modes and switching the display among the various fusion modes; the intelligent imaging system comprises a back-end processing system, a touch screen control unit and a WiFi unit;
the back-end processing system is used for integrating and scheduling all image and video data acquired by the imaging equipment, together with the control signals, communication data, and touch-screen data in the intelligent imaging system;
the back-end processing system comprises an FPGA chip, an ARM chip and a storage unit; the FPGA chip is used for preprocessing the acquired image data of each wave band, firstly, scene registration and geometric distortion correction processing are sequentially carried out on the image data stream of each wave band acquired in real time, so that the image data of three wave bands can be aligned pixel by pixel, and the same scene information is jointly output;
the ARM chip is used for asynchronously isolating each corrected band's image video stream by means of a data-stream bus scheduling architecture, unifying the clock domains of the band video streams into one clock domain inside the ARM chip, placing the target images to be fused on a pipeline architecture with the fusion algorithm, and performing detail-layer extraction and sampling on the three bands of video images frame by frame in sequence for fusion processing;
the storage unit is used for writing video data frame by frame at high speed as a cache for other modules to access; when video data are output, the data are read out of the storage unit in parallel, each spectrum's frames are aligned pixel by pixel as required by the fusion algorithm, and then written to the bus in parallel at the same clock frequency;
the touch screen control unit is used for displaying the image video data subjected to fusion processing and all the man-machine feedback information through the touch screen;
the WiFi unit is used for realizing a remote transmission control function.
Further, the storage unit comprises a DDR storage chip, an EPCS serial storage chip, a FLASH chip and a TF card; the memory particles of the DDR storage chip implement memory management and virtual video memory for the whole system; the EPCS serial storage chip stores the running program of the whole system; the FLASH chip stores logs and parameters generated during system operation so that operators can conveniently maintain the equipment later; and the TF card stores, in real time, the scene photos and videos that need to be recorded while the equipment works, for operators to archive or replay.
Furthermore, the imaging device integrates a visible light imaging device, an infrared light imaging device, an ultraviolet light imaging device and an ultrasonic ranging assembly.
Further, the system also comprises an interface module, wherein the interface module at least comprises a USB interface, a LAN interface, a VGA interface, a TF card interface and a Cameralink Base interface.
Furthermore, the intelligent imaging system is further connected with a plurality of auxiliary detection devices, and the auxiliary detection devices at least comprise infrared temperature measurement devices and two-dimensional code scanning devices.
Further, the imaging device is connected with the intelligent imaging system through an interface or a cable.
The invention also provides a three-light fusion intelligent imaging method, which comprises the following steps:
step S1, collecting three bands of image data: a visible light imaging device, an infrared light imaging device and an ultraviolet light imaging device collect image data of the same target under test; an ultrasonic ranging assembly measures the distance from the observed target to the imaging equipment; and the collected three-band image data and the measured distance are transmitted to an FPGA chip;
step S2, preprocessing: the FPGA chip sequentially performs scene registration and geometric distortion correction on each band's image data stream acquired in real time, so that the three bands of image data can be aligned pixel by pixel and the same scene information is output together;
step S3, fusing the three bands of image data: the corrected image data of each band are sent to an ARM chip. The ARM chip adopts a data-stream bus scheduling architecture: as the three bands of image data arrive, FIFOs inside the ARM chip asynchronously isolate the incoming band image data and unify their separate clock domains into one clock domain inside the chip. The memory particles of the DDR memory chip in the storage unit then write the video data frame by frame at high speed for other modules to access; on output, the data are read from the DDR memory chip in parallel, each spectrum's frames are aligned pixel by pixel as required by the fusion algorithm and written to the bus in parallel at the same clock frequency, and the fusion algorithm fuses the image data;
the fusion algorithm adopts an improved Laplacian pyramid as the layering rule: the target images to be fused are placed on a pipeline architecture, and detail-layer extraction and sampling are performed on the three bands of video image data frame by frame in sequence; the layering structure of the improved Laplacian pyramid has three layers, and the fusion strategy at each layer is to compare the absolute values of the three bands of video image data pixel by pixel and select the gray value with the strongest detail as the fusion result; an interpolation reconstruction is then performed in inverse-pyramid fashion, and the fusion process of the whole algorithm is complete once the pyramid top layer has been restored back to the pyramid bottom layer;
step S4, display: the fused data are shown on a touch screen, so that operators can conveniently observe the target characteristics of all bands in the detected target.
Further, in step S3, the fusion strategies of the fusion algorithm at least include: a visible light strategy, an infrared light strategy, an ultraviolet light strategy, a visible and infrared light strategy, a visible and ultraviolet light strategy, an infrared and ultraviolet light strategy, and a visible, infrared and ultraviolet light strategy.
Further, in step S4, the touch screen display displays at least: visible light image data, infrared light image data, ultraviolet light image data, image data obtained by fusing visible light and infrared light, image data obtained by fusing visible light and ultraviolet light, image data obtained by fusing infrared light and ultraviolet light, and three-light fusion image data obtained by fusing visible light, infrared light and ultraviolet light.
Further, the method comprises a step S5, wherein the ARM chip communicates with an upper computer through an interface, and the upper computer performs remote data intercommunication and firmware update on the intelligent imaging system and the imaging equipment.
The invention integrates visible light, infrared light and ultraviolet light imaging devices into one set of equipment. The same observed scene is extracted and registered by a field-of-view matching correction algorithm, the image characteristics of the three bands are then comprehensively fused by a high-performance fusion algorithm, and the result is displayed on a single display device (namely the touch screen). An operator can thus observe the target characteristics of all bands in the scene at a glance and easily switch among the various fusion modes to suit the observation requirements of different scenes and target characteristics, responding promptly and greatly improving observation efficiency.
The invention thus discloses an innovative three-light fusion intelligent imager: it realizes a conceptual innovation in multispectral fusion imaging, technically solves the problem of concurrent real-time processing of high-speed big data, and completes data processing, algorithm implementation, and internal and external control with the FPGA chip alone; the whole system is light, compact, and low in power consumption, and is currently the first of its kind on the market.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a block diagram of a three-light fusion intelligent imager according to the present invention;
FIG. 2 is a schematic structural diagram of a three-light-integration intelligent imager of the present invention;
fig. 3 is a general flow chart of the three-light fusion intelligent imaging method of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The Camera Link interface has three configurations, namely Base, Medium and Full, which mainly differ in data transmission capacity, so that a suitable configuration and connection mode is available for cameras of different speeds.
The invention provides a three-light fusion intelligent imager, shown in figures 1-2, comprising an intelligent imaging system 2 and an imaging device 1. The imaging device 1 communicates with the intelligent imaging system 2 and can be connected through an interface or a cable: preferably the two parts are plugged together into a whole through a standard Cameralink Base interface, but they can also be connected remotely through cables of different lengths, making the whole system convenient to deploy.
The imaging device 1 is used for collecting image data, and the imaging device 1 comprises a visible light imaging device 11, an infrared light imaging device 12, an ultraviolet light imaging device 13 and an ultrasonic ranging assembly 14.
The visible light imaging device 11 acquires image data using visible light; the infrared light imaging device 12 acquires image data using infrared light; the ultraviolet light imaging device 13 acquires image data using ultraviolet light; and the ultrasonic ranging assembly 14 measures the distance from the observed target to the imaging device 1. To capture the same target under test, the imaging device 1 may integrate the visible light imaging device 11, the infrared light imaging device 12, the ultraviolet light imaging device 13, and the ultrasonic ranging assembly 14.
The intelligent imaging system 2 fuses the three bands of image data acquired by the imaging device 1 in multiple modes and switches the display among the various fusion modes. The intelligent imaging system 2 includes a back-end processing system, a touch screen control unit 24, and a WiFi unit 23.
The back-end processing system integrates and schedules all image and video data acquired by the imaging equipment, together with the control signals, communication data, and touch-screen data in the intelligent imaging system.
The back-end processing system comprises an FPGA chip 21, an ARM chip 22 and a storage unit. The FPGA chip 21 preprocesses each band of acquired image data: it first performs scene registration and then geometric distortion correction on each band's image data stream acquired in real time, so that the three bands of image data can be aligned pixel by pixel and the same scene information is output together.
The ARM chip 22 is configured to perform asynchronous isolation on each corrected band image video stream by using a data stream bus scheduling architecture (preferably, a 4-level Cache bus architecture, it should be noted that the above-mentioned preferred architecture is not intended to limit the scope of the present invention), unify respective clock domains of each band image video stream into the same clock domain inside the ARM chip, place a target image to be fused on a pipeline architecture by using a fusion algorithm, and sequentially perform detail layer extraction and sampling on three band video images frame by frame for fusion processing.
The storage unit writes video data frame by frame at high speed as a cache for other modules to access; on output, the data are read out of the storage unit in parallel, each spectrum's frames are aligned pixel by pixel as required by the fusion algorithm, and then written to the bus in parallel at the same clock frequency.
The storage unit comprises a DDR storage chip, an EPCS serial storage chip, a FLASH chip and a TF card. The memory particles of the DDR storage chip implement memory management and virtual video memory for the whole system; under the system's architecture design, up to 4 GB of memory and 512 MB of virtual video memory can be realized. The EPCS serial storage chip stores the running program of the whole system. The FLASH chip stores logs and parameters generated during system operation so that operators can conveniently maintain the equipment later, and the TF card stores, in real time, the scene photos and videos that need to be recorded while the equipment works, for operators to archive or replay.
The touch screen control unit 24 is used for displaying the image video data subjected to the fusion processing and all the man-machine feedback information through the touch screen.
In addition, the intelligent imaging system 2 adopts a dual-core ARM chip working in cooperation with the FPGA chip, so a small operating system was specially designed and implemented for human-machine interaction, replacing the traditional control scheme of multiple physical keys found on similar products. Under the traditional physical-key scheme, user feedback during human-machine interaction can only be given by overlaying a menu on an external display, which largely blocks the video content shown on screen. Because this system adopts touch-screen control instead, all human-machine feedback is shown directly on the touch screen, the video information on the display is never covered or blocked, and scene identification is more convenient for operators.
The WiFi unit 23 implements a remote transmission control function, that is, remote control of the system from a computer client or a mobile phone app.
The system also comprises a PWR power supply module for powering the whole system. The power consumption of the whole system is less than 6 W, and a 4000 mAh lithium battery can sustain 3.3 hours of continuous work, which is convenient for long sessions of field work.
The system further comprises an interface module, which includes at least a USB interface 251, a LAN interface 252, a VGA interface 253, a TF card interface 254, and a Cameralink Base interface. The USB interface and the LAN interface connect to an upper computer to enable intercommunication with it.
The intelligent imaging system 2 is further connected with a plurality of auxiliary detection devices, and the auxiliary detection devices at least comprise infrared temperature measurement devices and two-dimensional code scanning devices, so that the integration level, functionality and intelligence of the whole system are greatly improved.
The three-light fusion intelligent imager realizes data interconnection among all components, system control and scheduling, and remote communication by embedding an operating system. Touch-screen control replaces the physical keys of traditional products, so the industrial design of the whole system is simpler and the operating method is very easy to master. Through an original multispectral image fusion algorithm and an optimized data-stream bus scheduling architecture (preferably a 4-level Cache bus architecture), seven different image fusion modes across the three bands can be realized conveniently (namely visible light, infrared light, ultraviolet light, visible + infrared, visible + ultraviolet, infrared + ultraviolet, and visible + infrared + ultraviolet), and the user can easily switch among them to meet the observation requirements of target characteristics in various scenes.
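The seven-mode switching described above can be pictured as a simple lookup from mode name to the subset of bands handed to the fusion algorithm. This is an illustrative sketch only: the mode names, table, and `select_inputs` helper are invented for the example and are not part of the patent.

```python
# Hypothetical mode table: each of the 7 fusion modes names the subset of
# bands whose registered, distortion-corrected frames feed the fusion step.
FUSION_MODES = {
    "VIS":       ("vis",),
    "IR":        ("ir",),
    "UV":        ("uv",),
    "VIS+IR":    ("vis", "ir"),
    "VIS+UV":    ("vis", "uv"),
    "IR+UV":     ("ir", "uv"),
    "VIS+IR+UV": ("vis", "ir", "uv"),
}

def select_inputs(mode, frames):
    """Pick the band frames for the requested fusion mode.

    `frames` maps a band name ("vis", "ir", "uv") to its current frame.
    """
    if mode not in FUSION_MODES:
        raise ValueError(f"unknown fusion mode: {mode}")
    return [frames[band] for band in FUSION_MODES[mode]]
```

Switching modes is then just changing the key looked up per frame, which matches the text's claim that the user can switch freely among the seven modes.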
The invention also provides a three-light fusion intelligent imaging method, as shown in fig. 3, comprising the following steps:
step S1, collecting three paths of wave band image data; the method comprises the steps of collecting image data of the same target to be measured by utilizing a visible light imaging device, an infrared light imaging device and an ultraviolet light imaging device, measuring the distance from the target to be observed to an imaging device by utilizing an ultrasonic ranging assembly, and transmitting the collected three-way waveband image data and the measured distance to an FPGA chip.
Step S2, preprocessing: the FPGA chip sequentially performs scene registration and geometric distortion correction on each band's image data stream acquired in real time, so that the three bands of image data can be aligned pixel by pixel and the same scene information is output together.
Step S3, fusing the three bands of image data: the corrected image data of each band are sent to the ARM chip. Because the clock frequencies of the three bands of image data differ, the fusion algorithm cannot operate on them directly. The ARM chip therefore adopts a data-stream bus scheduling architecture: as the three bands of image data arrive, FIFOs inside the ARM chip asynchronously isolate the incoming data and unify the separate clock domains of the bands into one clock domain inside the chip. The memory particles of the DDR memory chip in the storage unit then write the video data frame by frame at high speed for other modules to access; on output, the data are read from the DDR memory chip in parallel, each spectrum's frames are aligned pixel by pixel as required by the fusion algorithm and written to the bus in parallel at the same clock frequency, and the fusion algorithm fuses the image data.
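The FIFO isolation and frame-by-frame synchronization above can be modeled abstractly in software. In this hedged sketch, a Python `Queue` stands in for the on-chip asynchronous FIFO and a generator for the fusion-clock read-out; the class and function names are hypothetical, and the model ignores real clock-domain timing entirely.

```python
from queue import Queue

class BandStream:
    """Models one band's video stream arriving in its own clock domain.

    The Queue stands in for the asynchronous FIFO that isolates the
    producer's clock from the fusion pipeline's clock.
    """
    def __init__(self):
        self.fifo = Queue()

    def push(self, frame):
        self.fifo.put(frame)       # producer side (band's own clock)

    def pop(self):
        return self.fifo.get()     # consumer side (fusion clock)

def synchronized_frames(streams, n_frames):
    """Read one frame from each band per fusion-clock tick, yielding
    frame triples that are ready for pixel-aligned fusion."""
    for _ in range(n_frames):
        yield tuple(stream.pop() for stream in streams)
```

Each yielded tuple corresponds to one fusion-clock cycle in which all three bands present a frame simultaneously, which is the precondition the text states for running the fusion algorithm.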
The fusion algorithm adopts an improved Laplacian pyramid as the layering rule. The target images to be fused are placed on a pipeline architecture, and detail-layer extraction and sampling are performed on the three bands of video image data frame by frame in sequence. The layering structure of the improved Laplacian pyramid has three layers; the fusion strategy at each layer is to compare the absolute values of the three bands of video image data pixel by pixel and select the gray value with the strongest detail as the fusion result. An interpolation reconstruction is then performed in inverse-pyramid fashion, and the fusion process of the whole algorithm is complete once the pyramid top layer has been restored back to the pyramid bottom layer.
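The layering and selection rules above can be sketched in NumPy. This is not the patented algorithm itself: the "improved" pyramid's details are not disclosed beyond its three-layer structure, so a plain box-filter Laplacian pyramid stands in, keeping only what the text specifies (three detail layers, per-pixel strongest-absolute-value selection, inverse-pyramid reconstruction). All function names are illustrative.

```python
import numpy as np

def downsample(img):
    """Halve resolution with a 2x2 box filter (stand-in for the pyramid's
    low-pass + decimation step)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour expansion back to `shape` (stand-in for the
    interpolation used in reconstruction)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    """Decompose into `levels` detail layers plus one base layer."""
    details, cur = [], img.astype(float)
    for _ in range(levels):
        low = downsample(cur)
        details.append(cur - upsample(low, cur.shape))
        cur = low
    return details, cur

def fuse(bands, levels=3):
    """At each level, keep per pixel the coefficient with the largest
    absolute value (strongest detail); average the base layers; then
    reconstruct in inverse-pyramid order."""
    pyramids = [laplacian_pyramid(b, levels) for b in bands]
    fused_details = []
    for lvl in range(levels):
        layers = np.stack([p[0][lvl] for p in pyramids])
        idx = np.abs(layers).argmax(axis=0)
        fused_details.append(np.take_along_axis(layers, idx[None], axis=0)[0])
    fused = np.mean([p[1] for p in pyramids], axis=0)
    for d in reversed(fused_details):   # pyramid top back to pyramid bottom
        fused = upsample(fused, d.shape) + d
    return fused
```

Because the decomposition and reconstruction are exact inverses in this sketch, fusing three identical frames returns the original frame, which is a useful sanity check on any pyramid fusion implementation.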
The fusion strategies of the fusion algorithm at least include: a visible light strategy, an infrared light strategy, an ultraviolet light strategy, a visible and infrared light strategy, a visible and ultraviolet light strategy, an infrared and ultraviolet light strategy, and a visible, infrared and ultraviolet light strategy.
Step S4, display: the fused data are shown on the touch screen, so that operators can conveniently observe the target characteristics of all bands in the detected target.
Depending on the fusion mode, the touch screen can display at least: visible light image data; infrared light image data; ultraviolet light image data; image data fusing visible and infrared light; image data fusing visible and ultraviolet light; image data fusing infrared and ultraviolet light; and three-light fusion image data fusing visible, infrared and ultraviolet light.
Step S5, the ARM chip communicates with an upper computer through an interface, and the upper computer performs remote data interchange and firmware updates on the intelligent imaging system and the imaging device.
Because the imaging devices of the three different bands are discrete devices, even when they are mounted according to a horizontal, parallel optical-axis design, target rotation and scaling in the field of view when observing the same scene cannot be avoided structurally, and these problems cause mismatch in the fused image. The invention therefore uses the FPGA chip to preprocess the input images: scene registration and geometric distortion correction are applied to the image data streams acquired in real time, so that the three paths of video images can be aligned pixel by pixel and the same scene information is output together.
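As a minimal sketch of the scene-registration idea (translation only; the patent's FPGA registration and geometric-distortion correction are not disclosed in detail, and real sensors also differ in rotation and scale), phase correlation can recover the pixel shift between two frames of the same scene:

```python
import numpy as np

def estimate_shift(ref, mov):
    """Phase-correlation estimate of the translation between two frames:
    a simplified stand-in for the patent's scene-registration step."""
    F = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    h, w = ref.shape
    if dy > h // 2:  # map wrap-around peaks to negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def register(ref, mov):
    # undo the estimated shift so mov lines up pixel by pixel with ref
    dy, dx = estimate_shift(ref, mov)
    return np.roll(mov, (-dy, -dx), axis=(0, 1))

# same synthetic scene, shifted by (3, -2) pixels
ref = np.zeros((32, 32)); ref[8:12, 8:12] = 1.0
mov = np.roll(ref, (3, -2), axis=(0, 1))
assert estimate_shift(ref, mov) == (3, -2)
```

A production pipeline would extend this with rotation/scale estimation and lens-distortion correction per band, but the principle is the same: estimate the geometric mismatch once, then warp each stream so that all three output the same scene information.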
The imaging devices of the three bands (visible light, infrared light and ultraviolet light) are integrated into one set of equipment. The same scene under observation is obtained through the extraction and registration of a field-of-view matching correction algorithm, and the image features of the three bands are then comprehensively fused by a high-performance fusion algorithm and shown on the same display device (namely the touch screen). The operator can observe the target features of the various bands in the scene at a glance, and can easily switch among the fusion modes to suit the observation requirements of different scene target features, so that the corresponding action can be taken in time.
The invention can realize common imaging of multiple spectra on a single set of equipment, and can simultaneously realize combined imaging of the three bands of visible light, infrared light and ultraviolet light for multispectral, real-time, synchronous detection and observation of targets in the scene, thereby fundamentally addressing the problems of cost, deployment difficulty, ease of operation and observation efficiency.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention. The scope of the invention is defined by the appended claims and their full range of equivalents.
Claims (6)
1. A three-light fusion intelligent imager, characterized by comprising: an intelligent imaging system and an imaging device, the imaging device being in communication with the intelligent imaging system;
the imaging device is used for acquiring image data and comprises a visible light imaging device, an infrared light imaging device, an ultraviolet light imaging device and an ultrasonic ranging assembly;
the visible light imaging device is used for acquiring image data obtained by using visible light; the infrared light imaging device is used for acquiring image data obtained by using infrared light; the ultraviolet light imaging device is used for acquiring image data obtained by using ultraviolet light; the ultrasonic ranging assembly is used for measuring the distance from an observed target to the imaging equipment;
the intelligent imaging system is used for fusing the three-path waveband image data acquired by the imaging equipment in multiple modes and switching and displaying among various fusion modes; the intelligent imaging system comprises a back-end processing system, a touch screen control unit and a WiFi unit;
the back-end processing system is used for integrating and scheduling all image and video data acquired by the imaging device together with the control signals, communication data and touch screen data in the intelligent imaging system;
the back-end processing system comprises an FPGA chip, an ARM chip and a storage unit; the FPGA chip is used for preprocessing the acquired image data of each band: scene registration and geometric distortion correction are performed in sequence on the image data stream of each band acquired in real time, so that the image data of the three bands can be aligned pixel by pixel and the same scene information is output together;
the ARM chip is used for asynchronously isolating each corrected wave band image video stream by adopting a data stream bus scheduling architecture, unifying the clock domain of each wave band image video stream into the same clock domain in the ARM chip, putting a target image to be fused on a pipeline architecture by utilizing a fusion algorithm, and sequentially performing fine layer extraction and sampling on three wave band video images frame by frame for fusion processing;
the storage unit is used for writing video data into the storage unit frame by frame at a high speed for caching so as to be accessed by other modules, reading the data out of the storage unit in parallel when outputting the video data, aligning each frame of image of each spectrum pixel by pixel according to the requirement of a fusion algorithm, and then writing the images into a bus in parallel at the same clock frequency;
specifically, the memory particles of the DDR memory chip in the storage unit are used to write the video data into the storage unit frame by frame at high speed for caching and access by other modules; on output, the data are read out of the DDR memory chip in parallel, each frame of each spectrum is aligned pixel by pixel as required by the fusion algorithm and then written onto the bus in parallel at the same clock frequency, and the fusion algorithm fuses the image data;
the fusion algorithm adopts an improved Laplacian pyramid as its layering rule; the target images to be fused are placed on a pipeline architecture, and fine-layer extraction and sampling are performed in sequence, frame by frame, on the three paths of band video image data; the layering structure of the improved Laplacian pyramid has three layers, and the fusion strategy at each layer is to compare the three paths of band video image data pixel by pixel on absolute value and select the gray value with the strongest detail as the fusion result; interpolation reconstruction is then performed in inverse-pyramid order, and the fusion process of the whole algorithm is complete once the pyramid top layer has been restored back to the pyramid bottom layer;
wherein, the fusion strategy of the fusion algorithm at least comprises: a visible light fusion strategy, an infrared light fusion strategy, an ultraviolet light fusion strategy, a visible light and infrared light fusion strategy, a visible light and ultraviolet light fusion strategy, an infrared light and ultraviolet light fusion strategy, and a visible light, infrared light and ultraviolet light fusion strategy;
the touch screen control unit is used for displaying the image video data subjected to fusion processing and all the man-machine feedback information through the touch screen;
wherein the touch screen display displays at least according to a fusion algorithm: visible light image data, infrared light image data, ultraviolet light image data, image data obtained by fusing visible light and infrared light, image data obtained by fusing visible light and ultraviolet light, image data obtained by fusing infrared light and ultraviolet light, and three-light fused image data obtained by fusing visible light, infrared light and ultraviolet light;
the WiFi unit is used for realizing a remote transmission control function;
the intelligent imaging method of the three-light fusion intelligent imager comprises the following steps:
step S1, collecting three paths of wave band image data; the method comprises the steps that a visible light imaging device, an infrared light imaging device and an ultraviolet light imaging device are used for collecting image data of the same target to be measured, an ultrasonic ranging component is used for measuring the distance from the target to be observed to imaging equipment, and the collected three-way waveband image data and the measured distance are transmitted to an FPGA chip;
step S2, preprocessing; the FPGA chip sequentially carries out scene registration and geometric distortion correction processing on each path of wave band image data flow acquired in real time, so that the three paths of wave band image data can be aligned pixel by pixel, and the same scene information is output together;
step S3, fusing the three paths of band image data; the corrected image data of each band are sent to the ARM chip, which adopts a data-flow bus scheduling architecture; when the three paths of band image data arrive, FIFOs inside the ARM chip asynchronously isolate the incoming band image data, unifying the separate clock domain of each path into a single clock domain within the ARM chip; the memory particles of the DDR memory chip in the storage unit are then used to write the video data into the storage unit frame by frame at high speed for other modules to access; on output, the data are read out of the DDR memory chip in parallel, each frame of each spectrum is aligned pixel by pixel as required by the fusion algorithm and then written onto the bus at the same clock frequency, and the fusion algorithm fuses the image data;
step S4, the display process; the fused data are displayed on the touch screen so that the operator can observe the target features of the various bands in the target under detection;
and step S5, the ARM chip communicates with an upper computer through an interface, and the upper computer performs remote data intercommunication and firmware update on the intelligent imaging system and the imaging equipment.
2. The tri-optic fusion smart imager of claim 1, wherein: the storage unit comprises a DDR memory chip, an EPCS serial memory chip, a FLASH chip and a TF card; the memory particles of the DDR memory chip are used for realizing the memory management and virtual video memory of the whole system; the EPCS serial memory chip is used for storing the running program of the whole system; the FLASH chip is used for storing logs and parameters generated during system operation, facilitating later maintenance of the equipment by the operator; and the TF card is used for storing, in real time, scene photos and videos that need to be recorded while the equipment is working, for the operator to archive or replay.
3. The tri-optic fusion smart imager of claim 1, wherein: the imaging device integrates a visible light imaging device, an infrared light imaging device, an ultraviolet light imaging device and an ultrasonic ranging assembly.
4. The tri-optic fusion smart imager of claim 1, wherein: the interface module at least comprises a USB interface, a LAN interface, a VGA interface, a TF card interface and a Cameralink Base interface.
5. The tri-optic fusion smart imager of claim 1, wherein: the intelligent imaging system is further connected with a plurality of auxiliary detection devices, and the auxiliary detection devices at least comprise infrared temperature measurement devices and two-dimensional code scanning devices.
6. The tri-optic fusion smart imager of claim 1, wherein: the imaging device is connected with the intelligent imaging system through an interface or a cable.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710770931.3A CN107607202B (en) | 2017-08-31 | 2017-08-31 | Three-light fusion intelligent imager |
PCT/CN2018/096022 WO2019042034A1 (en) | 2017-08-31 | 2018-07-17 | Intelligent three-light fusion imager and method therefor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107607202A CN107607202A (en) | 2018-01-19 |
CN107607202B true CN107607202B (en) | 2021-05-11 |
Family
ID=61057006
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107607202B (en) |
WO (1) | WO2019042034A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107607202B (en) * | 2017-08-31 | 2021-05-11 | 江苏宇特光电科技股份有限公司 | Three-light fusion intelligent imager |
CN108399612B (en) * | 2018-02-06 | 2022-04-05 | 江苏宇特光电科技股份有限公司 | Three-light image intelligent fusion method based on bilateral filtering pyramid |
CN108737728B (en) * | 2018-05-03 | 2021-06-11 | Oppo广东移动通信有限公司 | Image shooting method, terminal and computer storage medium |
JP7218106B2 (en) * | 2018-06-22 | 2023-02-06 | 株式会社Jvcケンウッド | Video display device |
CN110942475B (en) * | 2019-11-13 | 2023-02-17 | 北方夜视技术股份有限公司 | Ultraviolet and visible light image fusion system and rapid image registration method |
CN113628255B (en) * | 2021-07-28 | 2024-03-12 | 武汉三江中电科技有限责任公司 | Three-light fusion nondestructive detection image registration algorithm |
CN113807364A (en) * | 2021-09-08 | 2021-12-17 | 国网内蒙古东部电力有限公司兴安供电公司 | Power equipment defect detection method and system based on three-light fusion imaging |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN201854353U (en) * | 2010-10-13 | 2011-06-01 | 山东神戎电子股份有限公司 | Multi-spectral image fusion camera |
CN103376615A (en) * | 2012-04-24 | 2013-10-30 | 鸿富锦精密工业(深圳)有限公司 | Automatic focusing device and automatic focusing method |
CN204761607U (en) * | 2015-07-15 | 2015-11-11 | 淮阴师范学院 | Real -time multisource video image fusion system |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100573062C (en) * | 2008-08-29 | 2009-12-23 | 北京理工大学 | Restructural, distributed multi-optical spectrum imaging system |
CN101527826B (en) * | 2009-04-17 | 2011-12-28 | 北京数码视讯科技股份有限公司 | Video monitoring front-end system |
CN101547364B (en) * | 2009-05-05 | 2010-08-25 | 北京牡丹视源电子有限责任公司 | Transport stream generator |
CN102039738A (en) * | 2009-12-09 | 2011-05-04 | 辉县市文教印务有限公司 | Page online fuzzy identification system of high-speed binding machine |
CN102567979B (en) * | 2012-01-20 | 2014-02-05 | 南京航空航天大学 | Vehicle-mounted infrared night vision system and multi-source images fusing method thereof |
KR101441755B1 (en) * | 2012-11-22 | 2014-09-22 | 김성준 | A smart camera with interchangeable image senosr module and main processing module |
CN104376546A (en) * | 2014-10-27 | 2015-02-25 | 北京环境特性研究所 | Method for achieving three-path image pyramid fusion algorithm based on DM642 |
CN105678727A (en) * | 2016-01-12 | 2016-06-15 | 四川大学 | Infrared and visible light image real-time fusion system based on heterogeneous multi-core architecture |
CN107607202B (en) * | 2017-08-31 | 2021-05-11 | 江苏宇特光电科技股份有限公司 | Three-light fusion intelligent imager |
Also Published As
Publication number | Publication date |
---|---|
WO2019042034A1 (en) | 2019-03-07 |
CN107607202A (en) | 2018-01-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107607202B (en) | Three-light fusion intelligent imager | |
CN201689138U (en) | Solar-blinded ultraviolet imaging instrument based on narrow-band spectrum | |
CN101510007B (en) | Real time shooting and self-adapting fusing device for infrared light image and visible light image | |
CN103063314B (en) | Thermal imagery device and thermal imagery image pickup method | |
CN109410159A (en) | Binocular visible light and infrared thermal imaging complex imaging system, method and medium | |
JP2018087823A (en) | Thermal image diagnostic device and thermal image diagnostic method | |
CN208795816U (en) | A kind of multispectral electric power detection device | |
CN104270570A (en) | Binocular video camera and image processing method thereof | |
CN102634632A (en) | System and method for detecting temperature of converter flame | |
CN106934394A (en) | Double-wavelength images acquisition system and method | |
CN110620885B (en) | Infrared low-light-level image fusion system and method and electronic equipment | |
CN101834989A (en) | Real-time data acquisition and storage system of helicopter in electric inspection process | |
CN104251738B (en) | A kind of helmet-type infrared radiation thermometer and its method | |
CN104459457A (en) | Infrared and ultraviolet dual-path imaging power detector | |
CN102025979A (en) | Infrared video real-time enhancing display device based on dual DSPs (digital signal processors) | |
CN109342891A (en) | A kind of fault detection method and device based on infrared and ultraviolet visual image fusion | |
CN107203987A (en) | A kind of infrared image and low-light (level) image real time fusion system | |
CN204993689U (en) | Extra -high voltage becomes station service transformer running state detection device based on infrared panoramic picture of 3D | |
CN112326038A (en) | Transformer substation intelligent temperature measurement system based on 5G communication and temperature measurement method thereof | |
CN205015088U (en) | A infrared detection images processing system for transformer substation | |
CN205681547U (en) | A kind of multichannel polarization and infrared image capturing system | |
CN102567978A (en) | Image generation method and system thereof | |
CN203851239U (en) | Dual-camera video image tracking device | |
CN206235396U (en) | Thermal infrared imager | |
CN103390272A (en) | Method for achieving registration and fusion of multi-spectral pseudo color images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||