CN108460794A - Binocular stereo infrared salient target detection method and system - Google Patents

Binocular stereo infrared salient target detection method and system

Info

Publication number
CN108460794A
CN108460794A (application CN201611136900.4A)
Authority
CN
China
Prior art keywords
pixel
image
infrared
salient target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611136900.4A
Other languages
Chinese (zh)
Other versions
CN108460794B (en)
Inventor
柏连发
张超
韩静
张毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201611136900.4A priority Critical patent/CN108460794B/en
Publication of CN108460794A publication Critical patent/CN108460794A/en
Application granted granted Critical
Publication of CN108460794B publication Critical patent/CN108460794B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image

Landscapes

  • Image Processing (AREA)
  • Measurement Of Optical Distance (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes a binocular stereo infrared salient target detection method and system. On the basis of existing LARK-based saliency detection, local image features are added to construct a local-region covariance, and the luminance and spatial information of image regions are introduced, extending local saliency detection to global saliency detection and finally yielding a satisfactory saliency map. At the same time, to make the system portable, the present invention proposes a DSP+FPGA-based real-time hardware processing system that implements infrared salient target extraction, salient target ranging, colorized output of the final salient targets, and related functions, while also meeting the real-time requirements of the final processing.

Description

Binocular stereo infrared salient target detection method and system
Technical field
The invention belongs to the fields of salient target detection and target recognition, and in particular relates to a binocular stereo infrared salient target detection method and system.
Background technology
Vision, as one of the ways people perceive and understand the world, has gradually attracted widespread attention. With the development of science and technology, humans are no longer content to experience only the surface of society through the naked eye; they hope to mine deeper real-life information from what the eye observes. Later, with the development of computer technology, rapidly growing computing power has been steadily changing human life, and processing visual images with computers has gradually matured, giving rise to a new field: computer vision. The main function of computer vision is to let a computer perceive two-dimensional image space and extend it into three-dimensional space, obtaining two-dimensional and even three-dimensional spatial information from images and, in place of the human eye, understanding the world more deeply. The knowledge required for computer vision spans many disciplines, such as statistics, psychology, signals and systems, and deep data mining. Research on computer vision in the United States, as a scientific and technological power, originated in the 1950s. Around the 1960s, Professor Roberts of MIT defined the external world as a "three-dimensional block world": a two-dimensional visual image of the world captured by a camera was processed by later programs to recover its three-dimensional information, formally generalizing two-dimensional image information to three dimensions and marking the birth of stereo vision technology.
As for salient target detection, its purpose is to highlight the visually salient regions or objects in an image, and how to improve the performance of saliency detection algorithms has been a basic problem of wide concern in recent years. Saliency detection has a wide range of applications in computer vision and image processing tasks, such as image/video compression and segmentation, content-aware image resizing, and image stitching. The extraction of salient information is also used in high-level vision tasks such as object detection and face recognition, and a large number of saliency detection algorithms have been used to capture different saliency cues. Most traditional saliency models mainly use center-surround filters or image statistics to identify regions that are rare in complex scenes (local complexity/contrast) or in their appearance (rare/improbable). Shannon's self-information approach mainly uses the negative logarithm of image pixel probabilities to measure the improbability of local salient target information, and is therefore used as a top-down saliency model.
Summary of the invention
The present invention proposes a binocular stereo infrared salient target detection method and system. On the basis of existing LARK-based saliency detection, local image features are added to construct a local-region covariance, and the luminance and spatial information of image regions are introduced, extending local saliency detection to global saliency detection and finally obtaining a satisfactory saliency map. At the same time, to achieve portability, the present invention proposes a DSP+FPGA-based real-time hardware processing platform that implements infrared salient target extraction, salient target ranging, colorized output of the final salient targets, and related functions, while also meeting the real-time requirements of the final processing.
In order to solve the above technical problem, the present invention provides a binocular stereo infrared salient target detection method, the steps of which are as follows:
Step 1, epipolar rectification is performed on the infrared image sequences acquired by the binocular cameras using the same group of transformation matrices;
Step 2, saliency detection is performed using the local feature covariance matrix as the feature value, with the computation as shown in formula (1),
wherein S(r_k) is the saliency value of pixel k; D_S(r_k, r_i) is the spatial distance weight between different regions r_k and r_i of the infrared image, and σ_s is the spatial weight adjustment factor; b(r_k, r_i) is the luminance relationship between regions r_k and r_i, where Sum(r_k) is the sum of the pixel luminance values of region r_k and Sum(r_i) is the sum of the pixel luminance values of region r_i; the remaining term is the similarity between the feature covariance matrices C_l and C_i, and satisfies,
wherein λ_m is a generalized eigenvalue of the feature covariance matrix C_l at pixel l and the feature covariance matrix C_i at pixel i, computed as shown in formula (2),
λ_m C_l x_m - C_i x_m = 0, m = 1, 2, ..., d   (2)
wherein x_m is a generalized eigenvector, d is the dimension of the feature vector, and the feature covariance matrix C_i at pixel i is computed as shown in formula (3),
wherein h_k denotes the feature matrix at pixel i, u_i denotes the mean of the feature vectors, and n denotes the total number of pixels in the selected window; the feature matrix h_k at pixel i is as shown in formula (4),
h_k = [I(x, y), I_ve(x, y), I_le(x, y), K(x, y), x, y]   (4)
wherein I(x, y) is the pixel gray value of the image; I_ve(x, y) and I_le(x, y) are the vertical and horizontal gradient values of the image; K(x, y) is the LARK kernel value of the infrared image; x and y denote the abscissa and ordinate of the pixel in the infrared image;
Step 3, after the salient target of the infrared image has been extracted, the boundary of the salient target is marked with connected components and binarized; the center point of the salient target is chosen, and the final distance is measured by triangulation using the disparity of the salient-target center pixel between the left and right images.
The present invention also proposes a binocular stereo infrared salient target detection system, including two infrared cameras, a variable-voltage power supply, a DSP processor, an FPGA, and a VGA display. The infrared cameras, serving as binocular cameras, acquire two infrared images and output the infrared images to the FPGA. The FPGA receives the infrared images acquired by the infrared cameras and sends them to the DSP; it also receives the image results processed by the DSP and sends them to the VGA display for display. The DSP processes the binocular infrared images to obtain the position information and distance information of the salient targets in the infrared images, and sends the images to the FPGA.
Further, the method by which the DSP processes the binocular infrared images is:
Step 1, epipolar rectification is performed on the infrared image sequences acquired by the binocular cameras using the same group of transformation matrices;
Step 2, saliency detection is performed using the local feature covariance matrix as the feature value, with the computation as shown in formula (1),
wherein S(r_k) is the saliency value of pixel k; D_S(r_k, r_i) is the spatial distance weight between different regions r_k and r_i of the infrared image, and σ_s is the spatial weight adjustment factor; b(r_k, r_i) is the luminance relationship between regions r_k and r_i, where Sum(r_k) is the sum of the pixel luminance values of region r_k and Sum(r_i) is the sum of the pixel luminance values of region r_i; the remaining term is the similarity between the feature covariance matrices C_l and C_i, and satisfies,
wherein λ_m is a generalized eigenvalue of the feature covariance matrix C_l at pixel l and the feature covariance matrix C_i at pixel i, computed as shown in formula (2),
λ_m C_l x_m - C_i x_m = 0, m = 1, 2, ..., d   (2)
wherein x_m is a generalized eigenvector, d is the dimension of the feature vector, and the feature covariance matrix C_i at pixel i is computed as shown in formula (3),
wherein h_k denotes the feature matrix at pixel i, u_i denotes the mean of the feature vectors, and n denotes the total number of pixels in the selected window; the feature matrix h_k at pixel i is as shown in formula (4),
h_k = [I(x, y), I_ve(x, y), I_le(x, y), K(x, y), x, y]   (4)
wherein I(x, y) is the pixel gray value of the image; I_ve(x, y) and I_le(x, y) are the vertical and horizontal gradient values of the image; K(x, y) is the LARK kernel value of the infrared image; x and y denote the abscissa and ordinate of the pixel in the infrared image;
Step 3, after the salient target of the infrared image has been extracted, the boundary of the salient target is marked with connected components and binarized; the center point of the salient target is chosen, and the final distance is measured by triangulation using the disparity of the salient-target center pixel between the left and right images.
Compared with the prior art, the remarkable advantages of the present invention are: (1) For infrared salient targets with unclear structure and complex infrared backgrounds, the present invention can effectively determine the position of the infrared salient target while extracting the contour feature information of the salient target. (2) The present invention is based on a DSP6678+FPGA binocular hardware platform; through reasonable distribution of the computation across the eight cores of the DSP6678 and image pre-processing in the FPGA, the computation time of the system can be shortened and real-time processing finally achieved. (3) The system design is simple and its stability is high, so it can be carried portably. Functions such as salient target ranging are also introduced, making the final system more practical.
Description of the drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of the system composition of the present invention.
Fig. 3 is a schematic diagram of the binocular infrared epipolar rectification result of the present invention.
Fig. 4 is a schematic diagram of salient target detection in the present invention.
Fig. 5 is a schematic diagram of the DSP multi-core task distribution in the present invention.
Detailed description of the embodiments
It will be readily understood that, based on the technical solution of the present invention and without changing its essential content, those skilled in the art can conceive of various embodiments of the binocular stereo infrared salient target detection method and system of the present invention. Therefore, the following detailed description and the accompanying drawings are merely exemplary illustrations of the technical solution of the present invention and should not be regarded as the whole of the present invention or as limiting or restricting its technical solution.
With reference to the accompanying drawings, the detailed procedure of the infrared salient target extraction proposed by the present invention and its hardware implementation is as follows:
1. Binocular infrared salient target extraction method
Step 1: epipolar rectification of the acquired binocular infrared images
The two infrared images acquired by the left and right cameras of the binocular stereo night-vision system exhibit a certain disparity in the horizontal direction; using this disparity, a series of operations such as ranging can be performed on salient targets. For a binocular stereo system, there are also certain errors between the two lenses in the vertical direction, so epipolar rectification is chosen to eliminate these errors and improve detection accuracy. Traditional epipolar rectification mainly measures a transformation matrix for each pair of images acquired by the binocular cameras and then performs the rectification; for real-time processing, this computation increases the complexity of the hardware platform. The real-time binocular stereo detection used by the present invention places strict requirements on the computation time of the algorithm: a high-accuracy, low-cost algorithm is better suited to a real-time hardware platform, so the present invention adopts an epipolar rectification method based on a single transformation matrix.
Given the two left and right infrared images I_L and I_R acquired by the cameras, the epipolar transformation matrices L_1 and R_1 can be computed; the transformation is specifically:
wherein X_L1 is the result image of the left camera after epipolar rectification; X_R1 is the result image of the right camera after epipolar rectification; the operator * indicates that the image is rotated and translated under the action of the transformation matrix to obtain the final rectified image; L_1 and R_1 denote the transformation matrices of the left and right cameras, which are specifically solved as:
wherein f_L and f_R are the focal lengths of the left and right cameras respectively, and w_L, h_L and w_R, h_R are the width and height of the images of the left and right cameras. Since the camera resolution is fixed, the sizes of L_1 and R_1 are determined by the focal length of the current camera.
For different acquired image pairs, the epipolar rectification matrices also differ, so a rectification matrix would have to be computed for every image pair, increasing system complexity. The single-matrix epipolar rectification method instead rectifies different acquired image pairs with the same group of transformation matrices, provided the camera focal length is fixed.
After a newly acquired image pair has been transformed, the corresponding rectified image pair is finally obtained; the specific transformation formula is:
When the camera focal length remains constant, the same transformation matrices can be applied to all image pairs acquired by the binocular cameras. To reduce computation, of the two infrared images acquired by the binocular stereo system, the left infrared image is taken as the reference image and epipolar rectification is applied to the right image, reducing the vertical error between the two images; the specific transformation formula is:
wherein L and R denote the transformation matrices of the left and right cameras for the currently acquired left and right images I_L and I_R, computed as in formulas (2) and (3); X_L and X_R are the two infrared images after epipolar rectification.
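As an illustration of this single-matrix rectification step (and not of the patent's DSP/FPGA implementation), the following minimal Python/OpenCV sketch applies a pair of pre-computed rectification transforms, here called H_left and H_right, to each incoming frame; the matrix values and file names are assumptions made only for the example.

```python
import cv2
import numpy as np

# Pre-computed rectification transforms (assumed values for illustration).
# In the method described above they are derived once from the camera focal
# lengths and image sizes, then reused for every frame while the focal
# length stays fixed.
H_left = np.eye(3, dtype=np.float64)
H_right = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, -3.5],   # small vertical shift, for example
                    [0.0, 0.0, 1.0]])

def rectify_pair(img_left, img_right):
    """Apply the fixed transforms so both views share the same scanlines."""
    h, w = img_left.shape[:2]
    rect_left = cv2.warpPerspective(img_left, H_left, (w, h))
    rect_right = cv2.warpPerspective(img_right, H_right, (w, h))
    return rect_left, rect_right

# Example usage with hypothetical file names:
# left = cv2.imread("ir_left.png", cv2.IMREAD_GRAYSCALE)
# right = cv2.imread("ir_right.png", cv2.IMREAD_GRAYSCALE)
# rect_left, rect_right = rectify_pair(left, right)
```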
Step 2: detecting the salient target of the infrared image
The present invention uses a global-contrast saliency detection method based on region feature covariance information. This method performs saliency detection with the local feature covariance matrix as the feature value while introducing region luminance information, increasing the contrast between the salient target and the background information and introducing spatial weighting information, so that local saliency detection is extended to global saliency detection and the final saliency map is obtained.
2.1 Computing the local feature covariance
The variation of the image gradient can effectively reflect the degree of homogeneity of a local region. In general, the background region of an image is relatively flat and its local gradient changes are not obvious, so the degree of homogeneity of the background is high and its saliency is weak; the structural information of the salient target, by contrast, is obvious and its gradient changes are large, so the degree of homogeneity of the salient target region is low and the salient target is obvious. For infrared images, the structural information is not obvious, and luminance information is relied on more heavily to distinguish the salient region from the background region. When LARK is used as the feature value, the main feature information of LARK is the image gradient, and gradient information alone does not give obvious detection results for infrared images; other feature information therefore needs to be introduced to strengthen the saliency detection of infrared images.
Traditional image features mainly include the gradient information of the image, morphological erosion and dilation of the image, information entropy, and so on. Each pixel in an image can be described by a feature matrix that characterizes the feature information at that point; this feature matrix is composed of the feature vectors of several points. For an image region R of size M × N, each pixel can be described by a d-dimensional feature matrix. The present invention mainly uses five feature components of each pixel to construct a feature matrix representing the feature information of each image pixel:
h_k = [I(x, y), I_ve(x, y), I_le(x, y), K(x, y), x, y]   (9)
wherein I(x, y) is the pixel gray value of the image and is the basic feature element in image saliency detection; I_ve(x, y) and I_le(x, y) are the vertical and horizontal gradient values of the image, representing the structural feature information of the image; K(x, y) is the LARK kernel value of the image, representing changes and differences of local image structure; x and y denote the abscissa and ordinate of the pixel in the image, representing the location information of the pixel.
Based on the multi-feature matrix of each pixel, a covariance matrix is then introduced to realize the fusion of the above features.
The feature covariance of each pixel is obtained by choosing a pixel window of size m × m around it, from which the covariance matrix at the window center i, i.e. the covariance matrix of the central pixel, can be obtained. The feature covariance matrix at pixel i can be expressed as:
wherein C_i denotes the feature covariance matrix at the point; h_k denotes the feature matrix at the point; u_i denotes the mean of the feature vectors; n denotes the total number of pixels in the selected window.
The feature covariance matrix C_i is a symmetric matrix: the elements on the diagonal represent the variance of each feature, while the off-diagonal elements represent the correlation between the features. The nearest-neighbor method is used here to compute the feature distance between two pixels in the image, i.e. the similarity between two covariance matrices, which can be measured by the following formula:
wherein λ_m is a generalized eigenvalue of C_l and C_i, computed as follows:
λ_m C_l x_m - C_i x_m = 0, m = 1, 2, ..., d   (12)
wherein x_m is a generalized eigenvector and d is the dimension of the feature vector.
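As a sketch of the feature covariance of section 2.1 and the generalized-eigenvalue similarity above, the following Python/NumPy/SciPy code builds the per-pixel feature vector of formula (9), the windowed covariance of formula (10), and a log-generalized-eigenvalue distance. The exact form of the similarity formula (11) is not reproduced in this text, so the Förstner-style metric used here is an assumption, and the LARK kernel values are taken as a precomputed input.

```python
import numpy as np
from scipy.linalg import eigh

def feature_stack(img, lark):
    """Build the per-pixel feature vectors [I, I_ve, I_le, K, x, y].
    `lark` is assumed to hold the precomputed LARK kernel value per pixel."""
    gy, gx = np.gradient(img.astype(np.float64))   # vertical / horizontal gradients
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return np.stack([img, gy, gx, lark, xs, ys], axis=-1)  # H x W x 6

def local_covariance(feats, cy, cx, m=7):
    """Feature covariance in an m x m window centred at (cy, cx)."""
    r = m // 2
    win = feats[cy - r:cy + r + 1, cx - r:cx + r + 1].reshape(-1, feats.shape[-1])
    return np.cov(win, rowvar=False)                # d x d, with d = 6 here

def covariance_distance(C_l, C_i, eps=1e-8):
    """Log-generalized-eigenvalue distance between two covariance matrices
    (assumed form of formula (11))."""
    # Generalized eigenvalues lambda_m solving  lambda_m * C_l x_m = C_i x_m
    lam = eigh(C_i, C_l + eps * np.eye(C_l.shape[0]), eigvals_only=True)
    lam = np.clip(lam, eps, None)
    return np.sqrt(np.sum(np.log(lam) ** 2))
```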
2.2 Adding luminance region information
For an infrared image, human visual attention is most likely drawn to the regions of the image with the largest luminance contrast. Besides using the feature contrast in the infrared image to detect saliency, the luminance relationship also plays an important role in saliency detection and can serve as another discriminating factor. For human vision, a high luminance contrast between adjacent regions attracts visual attention more easily than a low one. While computing the pixel feature covariance, the computation of the luminance relationship is therefore introduced to enhance the final saliency result; the luminance relationship of two regions r_k and r_i is:
Sum(r_k) is the sum of the pixel luminance values of region r_k.
2.3 Adding spatially weighted region contrast
In order to better introduce the influence of spatial information on the saliency detection result and extend local saliency detection to global saliency detection, the present invention introduces spatial weights. The introduction of spatial weights enhances the spatial interaction between regions, so that the saliency contribution between adjacent regions is strengthened while that between more distant regions is weakened, taking the influence of global variables on the final saliency result into account.
The final image saliency defined by the present invention is:
wherein D_S(r_k, r_i) denotes the spatial distance weight between regions r_k and r_i, and σ_s controls the strength of the spatial weight: the larger σ_s is, the smaller the influence of the spatial weight, so that the contrast of more distant regions contributes more to the saliency value of the current region; b(r_i) is the luminance region information, and the more obvious the luminance region contrast, the larger the contribution of b(r_i); the covariance term is the similarity of the feature covariances of two pixels; S(r_k) is the saliency value of point k.
The present invention defines the distance between two regions as the Euclidean distance between their centroids; in the final saliency detection, σ_s² = 0.4 and the pixel coordinates are normalized to the interval [0, 1].
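Putting sections 2.1 to 2.3 together, the following sketch assembles a region-level saliency score in the spirit of the formula above. Since the exact expressions of formulas (13) and (14) are not reproduced in this text, the Gaussian spatial weight and the ratio-style luminance term used here are assumptions for illustration only, and covariance_distance refers to the helper sketched after section 2.1.

```python
import numpy as np

def region_saliency(regions, sigma_s2=0.4):
    """regions: list of dicts with keys 'centroid' (x, y, normalized to [0, 1]),
    'lum_sum' (sum of pixel luminances) and 'cov' (feature covariance matrix).
    Returns one saliency value per region."""
    sal = np.zeros(len(regions))
    for k, rk in enumerate(regions):
        for i, ri in enumerate(regions):
            if i == k:
                continue
            # Spatial weight: nearby regions count more (assumed Gaussian form).
            dist2 = np.sum((np.asarray(rk['centroid']) - np.asarray(ri['centroid'])) ** 2)
            D_s = np.exp(-dist2 / sigma_s2)
            # Luminance relation between the two regions (assumed ratio form).
            b = max(rk['lum_sum'], ri['lum_sum']) / (min(rk['lum_sum'], ri['lum_sum']) + 1e-8)
            # Covariance similarity from section 2.1 (see covariance_distance above).
            d_c = covariance_distance(rk['cov'], ri['cov'])
            sal[k] += D_s * b * d_c
    return sal
```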
Step 3: ranging the detected salient targets
After the salient target of the infrared image has been extracted, the present invention uses connected components to mark out the boundary of the salient target. The present invention takes the center point of the connected region of each salient target as the target pixel to be matched between the two salient-target maps. Since the two images have been epipolarly rectified, the salient targets lie at the same vertical position, so the distance between the salient targets consists only of the pixel disparity in the horizontal direction. The present invention measures the horizontal disparity d; knowing in advance that the focal length of the two cameras in the X-axis direction is f_x = 2390.42 and that the baseline is B = 43.4 cm, these are substituted into the triangulation formula:
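Formula (15) itself is not reproduced in this text; the following sketch assumes the standard pinhole triangulation relation Z = f_x · B / d with the focal length and baseline values quoted above.

```python
# Triangulation with the values quoted above (standard pinhole relation,
# assumed here since formula (15) is not reproduced in this text).
F_X = 2390.42        # focal length along the X axis, in pixels
BASELINE_CM = 43.4   # distance between the two cameras

def target_distance_cm(disparity_px):
    """Distance to the salient target from its horizontal pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return F_X * BASELINE_CM / disparity_px

# Example: a disparity of 20 pixels gives roughly 5187 cm, i.e. about 52 m.
```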
From the computed disparity, the corresponding distance to each salient target can be obtained. Different color bars are chosen according to different distance intervals to indicate the distance, and the location of the salient target is marked on the original image.
2. System platform construction
In order to achieve binocular stereo night-vision salient target extraction, the present invention has built a binocular stereo night-vision hardware system. The system is composed of two infrared cameras with a resolution of 640 × 512, a variable-voltage power supply, a DSP TMS320C6678 processor, an FPGA (Spartan-6 chip), a VGA display output (display resolution 768 × 576), and other components. The hardware-implemented functions of the binocular stereo night vision are: the two infrared cameras acquire the binocular infrared images; after the two infrared images are rectified, salient target extraction is performed on the images and the location information of the salient targets is marked on the original infrared images; the distance information of the salient targets is measured from the disparity of the two infrared images; and the result is finally output through the display screen.
Step 1: communication between the cameras and the FPGA
The infrared cameras and the FPGA are connected through a PAL interface. The PAL interface mainly uses the ADV7180 decoder chip produced by ADI for image input. This chip supports simultaneous input on 3 channels with dynamically switched single-channel output, can automatically identify and demodulate NTSC, PAL, and SECAM analog composite video signals, and, after A/D conversion and decoding, outputs the horizontal sync signal HS, the vertical sync signal VS, the field flag signal FIELD, and an 8-bit YCbCr 4:2:2 image signal conforming to the ITU-R BT.656 standard.
Step 2: communication between the FPGA and the DSP
The procedure for establishing the data transfer communication link between the DSP6678 and the FPGA through the SRIO port is as follows: first the SRIO port on the DSP side is initialized and its parameters are configured, and the device ID address of the SRIO port and the final data transfer rate are set. After successful initialization, the FPGA sends data to the DSP and then sends a Doorbell packet; on receiving the Doorbell packet, the DSP triggers an inter-core interrupt, reads the data sent by the FPGA, and the communication connection is established.
The specific image-transfer workflow is as follows:
(1) The FPGA reads the image out of its own DDR3 and sends it to the DSP through SWRITE packets. After the FPGA has sent a complete image, it sends a doorbell packet to the DSP and waits for the DSP to return a doorbell response packet for the corresponding transmission delay, then sends the next image after receiving the response packet.
(2) After receiving the SWRITE packets transmitted by the FPGA, the DSP stores the data through its internal DMA into the memory region of the destination core. After the DSP receives the doorbell packet from the FPGA, the doorbell value in the program changes from 1 to 0, the program jumps out of its idle loop and starts executing the corresponding algorithm processing modules. After the eight cores have finished processing, core 7 transfers the final DSP result to the FPGA in the form of SWRITE packets.
(3) After receiving the SWRITE packets from the DSP, the FPGA stores the data into the corresponding DDR3, and the image is later displayed through VGA.
Step 3: inter-core processing and communication on the DSP
In the present invention the binocular stereo input consists of two infrared images. In salient target extraction, parallel processing is used to speed up the computation; at the same time, the epipolar rectification of the images requires the global information of both images and cannot process the images separately, so a pipelined processing mode under parallelism is finally used to realize binocular stereo night-vision salient target extraction.
The task of core 0 is mainly to establish the communication link with the FPGA, receive the two infrared images from the FPGA, perform epipolar rectification on the two infrared images, and send them to the next cores. Cores 1 and 2, and cores 3 and 4, respectively execute the salient target extraction of one of the infrared images; core 5 performs the ranging between the two infrared images; core 6 performs the colorized display of the infrared images; core 7 is responsible for the DSP-FPGA communication and transmits the processed images to the FPGA. The eight cores of the DSP6678 communicate with each other using the Message communication mode. The advantage of the Message communication mode is that a single core, while executing Message communication, can both receive the queue message of the previous core and transmit a queue message to the next core. As long as the queue addresses between cores are known, communication between any of the eight cores of the DSP TMS320C6678 can be realized. In addition, MessageQ_alloc() creates dynamic memory space in core memory and the length of the message queue can be controlled arbitrarily, so the Message communication mode has a significant advantage in processing and transmitting very large amounts of data.
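Purely as an illustration, the task split across the eight DSP cores can be mirrored by the host-side Python sketch below. It is not the TI IPC/MessageQ implementation; the helper names extract_salient_targets, range_targets, and colorize are hypothetical stand-ins for the stages described above, and rectify_pair refers to the rectification sketch given earlier.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the per-core stages; each would hold the real
# algorithm of steps 2 and 3 in an actual implementation.
def extract_salient_targets(img):
    return img          # placeholder for the saliency detection of step 2

def range_targets(sal_left, sal_right):
    return []           # placeholder for the disparity ranging of step 3

def colorize(img, sal, distances):
    return img          # placeholder for the color-coded distance overlay

def process_frame(left, right):
    """Mirror of the eight-core pipeline: core 0 rectifies, cores 1-4 extract
    saliency on the two views in parallel, core 5 ranges, core 6 colorizes,
    core 7 hands the frame back (here: the return value)."""
    rect_left, rect_right = rectify_pair(left, right)
    with ThreadPoolExecutor(max_workers=2) as pool:
        fut_left = pool.submit(extract_salient_targets, rect_left)
        fut_right = pool.submit(extract_salient_targets, rect_right)
        sal_left, sal_right = fut_left.result(), fut_right.result()
    distances = range_targets(sal_left, sal_right)
    return colorize(rect_left, sal_left, distances)
```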
The present invention has achieved satisfactory application results in validation trials and can obtain ideal imaging results. It can be widely applied to military detection, biomedicine, automatic driving of unmanned vehicles, and other fields, and has good application prospects.

Claims (3)

1. A binocular stereo infrared salient target detection method, characterized in that the steps are as follows:
Step 1, epipolar rectification is performed on the infrared image sequences acquired by the binocular cameras using the same group of transformation matrices;
Step 2, saliency detection is performed using the local feature covariance matrix as the feature value, with the computation as shown in formula (1),
wherein S(r_k) is the saliency value of pixel k; D_S(r_k, r_i) is the spatial distance weight between different regions r_k and r_i of the infrared image, and σ_s is the spatial weight adjustment factor; b(r_k, r_i) is the luminance relationship between regions r_k and r_i, where Sum(r_k) is the sum of the pixel luminance values of region r_k and Sum(r_i) is the sum of the pixel luminance values of region r_i; the remaining term is the similarity between the feature covariance matrices C_l and C_i, and satisfies,
wherein λ_m is a generalized eigenvalue of the feature covariance matrix C_l at pixel l and the feature covariance matrix C_i at pixel i, computed as shown in formula (2),
λ_m C_l x_m - C_i x_m = 0, m = 1, 2, ..., d   (2)
wherein x_m is a generalized eigenvector, d is the dimension of the feature vector, and the feature covariance matrix C_i at pixel i is computed as shown in formula (3),
wherein h_k denotes the feature matrix at pixel i, u_i denotes the mean of the feature vectors, and n denotes the total number of pixels in the selected window; the feature matrix h_k at pixel i is as shown in formula (4),
h_k = [I(x, y), I_ve(x, y), I_le(x, y), K(x, y), x, y]   (4)
wherein I(x, y) is the pixel gray value of the image; I_ve(x, y) and I_le(x, y) are the vertical and horizontal gradient values of the image; K(x, y) is the LARK kernel value of the infrared image; x and y denote the abscissa and ordinate of the pixel in the infrared image;
Step 3, after the salient target of the infrared image has been extracted, the boundary of the salient target is marked with connected components and binarized; the center point of the salient target is chosen, and the final distance is measured by triangulation using the disparity of the salient-target center pixel between the left and right images.
2. A binocular stereo infrared salient target detection system, characterized by comprising two infrared cameras, a variable-voltage power supply, a DSP processor, an FPGA, and a VGA display;
the infrared cameras, serving as binocular cameras, acquire two infrared images and output the infrared images to the FPGA;
the FPGA receives the infrared images acquired by the infrared cameras and sends the infrared images to the DSP, and also receives the image results processed by the DSP and sends them to the VGA display for display;
the DSP processes the binocular infrared images to obtain the position information and distance information of the salient targets in the infrared images, and sends the images to the FPGA.
3. The binocular stereo infrared salient target detection system of claim 2, characterized in that the method by which the DSP processes the binocular infrared images is:
Step 1, epipolar rectification is performed on the infrared image sequences acquired by the binocular cameras using the same group of transformation matrices;
Step 2, saliency detection is performed using the local feature covariance matrix as the feature value, with the computation as shown in formula (1),
wherein S(r_k) is the saliency value of pixel k; D_S(r_k, r_i) is the spatial distance weight between different regions r_k and r_i of the infrared image, and σ_s is the spatial weight adjustment factor; b(r_k, r_i) is the luminance relationship between regions r_k and r_i, where Sum(r_k) is the sum of the pixel luminance values of region r_k and Sum(r_i) is the sum of the pixel luminance values of region r_i; the remaining term is the similarity between the feature covariance matrices C_l and C_i, and satisfies,
wherein λ_m is a generalized eigenvalue of the feature covariance matrix C_l at pixel l and the feature covariance matrix C_i at pixel i, computed as shown in formula (2),
λ_m C_l x_m - C_i x_m = 0, m = 1, 2, ..., d   (2)
wherein x_m is a generalized eigenvector, d is the dimension of the feature vector, and the feature covariance matrix C_i at pixel i is computed as shown in formula (3),
wherein h_k denotes the feature matrix at pixel i, u_i denotes the mean of the feature vectors, and n denotes the total number of pixels in the selected window; the feature matrix h_k at pixel i is as shown in formula (4),
h_k = [I(x, y), I_ve(x, y), I_le(x, y), K(x, y), x, y]   (4)
wherein I(x, y) is the pixel gray value of the image; I_ve(x, y) and I_le(x, y) are the vertical and horizontal gradient values of the image; K(x, y) is the LARK kernel value of the infrared image; x and y denote the abscissa and ordinate of the pixel in the infrared image;
Step 3, after the salient target of the infrared image has been extracted, the boundary of the salient target is marked with connected components and binarized; the center point of the salient target is chosen, and the final distance is measured by triangulation using the disparity of the salient-target center pixel between the left and right images.
CN201611136900.4A 2016-12-12 2016-12-12 Binocular three-dimensional infrared salient target detection method and system Active CN108460794B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611136900.4A CN108460794B (en) 2016-12-12 2016-12-12 Binocular three-dimensional infrared salient target detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611136900.4A CN108460794B (en) 2016-12-12 2016-12-12 Binocular three-dimensional infrared salient target detection method and system

Publications (2)

Publication Number Publication Date
CN108460794A true CN108460794A (en) 2018-08-28
CN108460794B CN108460794B (en) 2021-12-28

Family

ID=63228813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611136900.4A Active CN108460794B (en) 2016-12-12 2016-12-12 Binocular three-dimensional infrared salient target detection method and system

Country Status (1)

Country Link
CN (1) CN108460794B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110109691A (en) * 2010-03-31 2011-10-06 경북대학교 산학협력단 Providing device of eye scan path
CN104778721A (en) * 2015-05-08 2015-07-15 哈尔滨工业大学 Distance measuring method of significant target in binocular image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YI ZHANG et al.: "An infrared salient object stereo matching algorithm based on epipolar rectification", Springer *
WAN Yilong et al.: "Method and implementation of distance measurement of salient targets in low-illumination binocular stereo vision", Infrared and Laser Engineering *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084822A (en) * 2019-05-05 2019-08-02 中国人民解放军战略支援部队航天工程大学 A kind of target acquisition real time processing system and method towards the in-orbit application of satellite
CN110824317A (en) * 2019-12-06 2020-02-21 国网天津市电力公司 Transformer partial discharge source rapid positioning system based on thermal imaging technology
CN111951299A (en) * 2020-07-01 2020-11-17 中国科学院上海技术物理研究所 Infrared aerial target detection method
CN111951299B (en) * 2020-07-01 2022-09-16 中国科学院上海技术物理研究所 Infrared aerial target detection method
CN113822352A (en) * 2021-09-15 2021-12-21 中北大学 Infrared dim target detection method based on multi-feature fusion
CN113822352B (en) * 2021-09-15 2024-05-17 中北大学 Infrared dim target detection method based on multi-feature fusion
CN115170792A (en) * 2022-09-07 2022-10-11 烟台艾睿光电科技有限公司 Infrared image processing method, device and equipment and storage medium
CN115170792B (en) * 2022-09-07 2023-01-10 烟台艾睿光电科技有限公司 Infrared image processing method, device and equipment and storage medium

Also Published As

Publication number Publication date
CN108460794B (en) 2021-12-28

Similar Documents

Publication Publication Date Title
CN111739077B (en) Monocular underwater image depth estimation and color correction method based on depth neural network
CN108460794A (en) A kind of infrared well-marked target detection method of binocular solid and system
CN107292921B (en) Rapid three-dimensional reconstruction method based on kinect camera
CN108648161B (en) Binocular vision obstacle detection system and method of asymmetric kernel convolution neural network
CN103248906B (en) Method and system for acquiring depth map of binocular stereo video sequence
Drews et al. Transmission estimation in underwater single images
CN113052835B (en) Medicine box detection method and system based on three-dimensional point cloud and image data fusion
CN104036488B (en) Binocular vision-based human body posture and action research method
CN109377530A (en) A kind of binocular depth estimation method based on deep neural network
CN107767413A (en) A kind of image depth estimation method based on convolutional neural networks
CN108665496A (en) A kind of semanteme end to end based on deep learning is instant to be positioned and builds drawing method
CN101996407B (en) Colour calibration method for multiple cameras
CN104504389B (en) A kind of satellite cloudiness computational methods based on convolutional neural networks
CN112801074B (en) Depth map estimation method based on traffic camera
CN109559310A (en) Power transmission and transformation inspection image quality evaluating method and system based on conspicuousness detection
CN106600632B (en) A kind of three-dimensional image matching method improving matching cost polymerization
CN104794737B (en) A kind of depth information Auxiliary Particle Filter tracking
CN110189294B (en) RGB-D image significance detection method based on depth reliability analysis
CN102982334B (en) The sparse disparities acquisition methods of based target edge feature and grey similarity
CN109801215A (en) The infrared super-resolution imaging method of network is generated based on confrontation
CN103325120A (en) Rapid self-adaption binocular vision stereo matching method capable of supporting weight
CN109242834A (en) It is a kind of based on convolutional neural networks without reference stereo image quality evaluation method
CN110006444B (en) Anti-interference visual odometer construction method based on optimized Gaussian mixture model
CN110517306A (en) A kind of method and system of the binocular depth vision estimation based on deep learning
CN109410171A (en) A kind of target conspicuousness detection method for rainy day image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant