CN102393958A - Multi-focus image fusion method based on compressive sensing - Google Patents

Multi-focus image fusion method based on compressive sensing

Info

Publication number
CN102393958A
Authority
CN
China
Prior art keywords
image
fusion
sub
subblock
multiple focussing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011101993643A
Other languages
Chinese (zh)
Other versions
CN102393958B (en)
Inventor
王爽
焦李成
杨奕堂
刘芳
杨淑媛
侯彪
钟桦
刘忠伟
杨国辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN 201110199364 priority Critical patent/CN102393958B/en
Publication of CN102393958A publication Critical patent/CN102393958A/en
Application granted granted Critical
Publication of CN102393958B publication Critical patent/CN102393958B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a multi-focus image fusion method based on compressive sensing, in the technical field of image processing. The method addresses the main problem of the prior art: because an optical lens has a limited depth of field, it is difficult to acquire a single image in which all objects in the scene are in focus. The method is implemented by the following steps: (1) partitioning each image into blocks; (2) computing the average gradient of each image sub-block to determine its fusion weight; (3) sparsely representing each image sub-block and observing it with a random Gaussian matrix; (4) fusing the observed values of corresponding image sub-blocks by weighting them with the fusion weights; and (5) recovering the fused observed values with the orthogonal matching pursuit algorithm and applying the inverse wavelet transform to the result to obtain the fused, fully focused image. The method achieves a better image fusion effect and faster convergence, and can be applied to the fusion of multi-focus images.

Description

Multi-focus image fusion method based on compressive sensing
Technical field
The invention belongs to the technical field of image processing and relates to image fusion; specifically, it is a multi-focus image fusion method that incorporates compressive sensing (CS) theory. The method can be used to fuse multi-focus images.
Background technology
Image fusion is an emerging field of scientific research with great potential for development. By extracting and combining the information from several sensor images, it produces a more accurate, comprehensive, and reliable description of the same scene or target, which in turn supports further image analysis and understanding as well as the detection, recognition, or tracking of targets. Since the early 1980s, multi-sensor image fusion has attracted worldwide interest and a surge of research, and it has wide application in fields such as automatic target recognition, computer vision, remote sensing, machine learning, medical image processing, and military applications. After nearly 30 years of development, research on image fusion has reached a certain scale and many fusion systems have been developed at home and abroad, but this does not mean the technology is mature. At present, many theoretical and technical problems in image fusion remain to be solved. It should be noted in particular that research on image fusion in China started late relative to international work and still lags behind, so extensive and thorough research on its basic theory and techniques is urgently needed.
With the rapid development of information technology, the demand for information grows by the day. Against this background, traditional image fusion methods, such as those based on multi-scale transforms (see "Region based multisensor image fusion using generalized Gaussian distribution", Int. Workshop on Nonlinear Signal and Image Processing, Sep. 2007), must process a very large volume of data. This places immense pressure on signal sampling, transmission, and storage, and how to relieve that pressure while still extracting the useful information carried in the signal is one of the urgent problems in signal and information processing. The theory of compressive sensing (CS), which has emerged internationally in recent years, offers a way to relieve this pressure. Compressive sensing can fully extract the useful information in an image without assuming any prior information about it; fusing only the extracted information therefore greatly reduces the computation and storage burden. Scholars have already applied compressive sensing widely in areas such as analog-to-information sampling, synthetic aperture radar imaging, remote sensing, magnetic resonance imaging, face recognition, and source coding, and in recent years a research boom on compressive sensing has also started in China. However, little research has applied compressive sensing theory to image fusion. T. Wan et al. were the first to try applying compressive sensing to image fusion (see "Compressive Image Fusion", in Proc. IEEE Int. Conf. Image Process., pp. 1308-1311, 2008). Their method adopts an absolute-maximum fusion rule; its computational complexity is high, and the fusion results contain considerable noise and stripe artifacts.
Summary of the invention
The objective of the invention is to overcome the shortcomings of the prior art described above by proposing a multi-focus image fusion method based on compressive sensing that reduces the data volume, lowers the computational complexity, and at the same time improves the image fusion effect.
The key to achieving this objective is to use compressive sensing to sub-sample the signal, reducing the data volume, and to use the orthogonal matching pursuit algorithm to reduce the computational complexity. The whole procedure is divided into three parts: first, partition the multi-focus images into blocks, sparsely represent each sub-block, and observe it with a random Gaussian matrix; next, fuse the observed values of corresponding sub-blocks with a fusion rule weighted by the average gradient; finally, reconstruct the fused observed values with the orthogonal matching pursuit algorithm to obtain the fused all-in-focus image. The concrete steps are as follows:
(1) Partition the two input multi-focus images A and B into blocks, obtaining n image sub-blocks x_i and y_i (i = 1, 2, ..., n) of size 32 × 32;
(2) Compute and record the average gradients $\overline{\nabla G}_A$ and $\overline{\nabla G}_B$ of each pair of corresponding sub-blocks x_i and y_i of A and B;
(3) Apply the wavelet transform to each pair of corresponding sub-blocks x_i and y_i of A and B to obtain the sparsely transformed sub-blocks a_i and b_i; the wavelet used in the experiments is the CDF 9/7 biorthogonal wavelet basis with 3 decomposition levels;
(4) Stack each wavelet-transformed sub-block a_i and b_i into a column vector and observe it with the random Gaussian matrix, obtaining the observed values y_A and y_B of each pair of corresponding sub-blocks of A and B;
(5) Fuse the observed values y_A and y_B of each pair of corresponding sub-blocks of A and B as follows, obtaining the fused sub-block observed value y:
(5a) Compute the fusion weights of each pair of corresponding sub-blocks of A and B:

$$w_A = \begin{cases} 0.5, & \text{if } \overline{\nabla G}_A = \overline{\nabla G}_B = 0 \\[4pt] \dfrac{\overline{\nabla G}_A}{\overline{\nabla G}_A + \overline{\nabla G}_B}, & \text{otherwise} \end{cases} \qquad w_B = 1 - w_A$$

where $\overline{\nabla G}_A$ and $\overline{\nabla G}_B$ are the average gradients of the corresponding sub-blocks of A and B, and w_A and w_B are the fusion weights of the corresponding sub-blocks of A and B.
(5b) Fuse the observed values of each pair of corresponding sub-blocks of A and B by weighting:

y = w_A y_A + w_B y_B

where y_A and y_B are the observed values of the two corresponding sub-blocks of A and B, and y is their fused value.
(6) Recover the fused sub-block observed value y with the orthogonal matching pursuit (OMP) algorithm to obtain the recovered sub-block f;
(7) Apply the inverse wavelet transform to the recovered sub-block f to obtain the fused all-in-focus image F.
Because the invention uses the average gradient, an image fusion quality index, to determine the sub-block fusion weights and combines it with compressive sensing theory, it has the following advantages over traditional image fusion methods:
(A) the sampling process does not need to assume any prior information about the image;
(B) block-wise fusion of the multi-focus images yields better fusion weights;
(C) the data volume to be reconstructed is small, saving storage space.
Experiments show that, for the multi-focus image fusion problem, the visual effect of the invention's fusion results is better and its convergence speed is also fast.
Description of drawings
Figure 1 is the overall flowchart of the invention;
Figure 2 shows the source images of the two groups of multi-focus images;
Figure 3 shows the results of fusing the multi-focus Clock images with the invention and two existing fusion algorithms;
Figure 4 shows the results of fusing the multi-focus Pepsi images with the invention and two existing fusion algorithms.
Embodiment
With reference to Fig. 1, the concrete implementation steps of the invention are as follows:
Step 1: Partition the two input multi-focus images A and B into blocks and compute the average gradient of each image sub-block.
Images A and B are a left-focused and a right-focused image, respectively; the sharp parts of the two images are complementary, and the purpose of fusion is to obtain a single image in which everything is in focus. Partitioning the images into blocks eases processing and reduces computational complexity. The invention divides the two multi-focus images A and B into sub-blocks of size 32 × 32 and computes the average gradient of each sub-block by the following formula:
$$\overline{\nabla G}_I = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ \Delta_x f(i,j)^2 + \Delta_y f(i,j)^2 \right]^{1/2}$$

where $\Delta_x f(i,j)$ and $\Delta_y f(i,j)$ are the first-order differences of the sub-block pixels of multi-focus image I (I = A, B) in the x and y directions, M × N is the sub-block size, and $\overline{\nabla G}_I$ is the average gradient of the sub-block of image I.
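As a concrete illustration, the average-gradient formula above can be sketched in Python with NumPy (a stand-in for the Matlab implementation used in the patent's experiments). The boundary handling of the first-order differences, and averaging over the valid difference positions rather than strictly M × N, are assumptions of this sketch, since the patent does not specify them:

```python
import numpy as np

def average_gradient(block):
    """Average gradient of an image sub-block: the mean over pixels of
    sqrt(dx^2 + dy^2), where dx and dy are first-order differences in
    the x and y directions. Truncating to the common valid region is
    an illustrative choice not fixed by the patent."""
    block = block.astype(float)
    dx = np.diff(block, axis=1)[:-1, :]  # horizontal first-order difference
    dy = np.diff(block, axis=0)[:, :-1]  # vertical first-order difference
    return np.sqrt(dx ** 2 + dy ** 2).mean()
```

For a 32 × 32 sub-block this returns a single scalar; sharper (better-focused) blocks yield larger values, which is what the fusion rule relies on.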
Step 2: Apply the wavelet transform to each pair of corresponding sub-blocks x_i and y_i of multi-focus images A and B to obtain the sparsely transformed sub-blocks a_i and b_i.
Each sub-block of A and B is sparsely transformed so that the signal satisfies the precondition of compressive sensing: if a signal is compressible, i.e. sparse in some transform domain, it can be projected onto a low-dimensional space with an observation matrix that is incoherent with the transform basis, and then reconstructed from this small number of projections with high probability by solving an optimization problem. The sparsifying transform adopted in this example is the CDF 9/7 biorthogonal wavelet transform with 3 decomposition levels, but the method is not limited to the wavelet transform; the discrete cosine transform (DCT) or the Fourier transform (FT), for example, can also be used.
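The sparsifying step can be illustrated with a minimal multi-level 2-D wavelet transform. The sketch below uses the orthonormal Haar wavelet purely as a simple, self-contained stand-in for the CDF 9/7 biorthogonal wavelet named above (any sparsifying basis satisfies the CS premise); a production implementation would use a CDF 9/7 filter bank:

```python
import numpy as np

def haar2d(x, levels=3):
    """Multi-level 2-D orthonormal Haar transform. After each level the
    approximation coefficients occupy the top-left quadrant, which is
    transformed again at the next level. The side length must be
    divisible by 2**levels."""
    x = x.astype(float).copy()
    n = x.shape[0]
    if n % (2 ** levels):
        raise ValueError("side length must be divisible by 2**levels")
    for _ in range(levels):
        sub = x[:n, :n]
        for axis in (0, 1):
            a = np.take(sub, range(0, n, 2), axis=axis)  # even samples
            b = np.take(sub, range(1, n, 2), axis=axis)  # odd samples
            sub = np.concatenate([(a + b) / np.sqrt(2),   # approximation
                                  (a - b) / np.sqrt(2)],  # detail
                                 axis=axis)
        x[:n, :n] = sub
        n //= 2
    return x
```

Because the transform is orthonormal, it preserves energy, and a smooth block concentrates most of its energy in a few low-frequency coefficients, which is exactly the sparsity that CS observation exploits.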
Step 3: Perform CS observation of each pair of corresponding sub-blocks of multi-focus images A and B with the random Gaussian matrix.
CS observation of an image is a linear process. To guarantee accurate reconstruction, the necessary and sufficient condition for the linear system to have a well-determined solution is that the observation matrix and the sparse transform basis satisfy the restricted isometry property (RIP). A random Gaussian matrix is incoherent with the matrices formed by most fixed orthogonal bases, so RIP can be satisfied when almost any orthogonal basis is used as the sparse transform basis; this property motivates choosing it as the observation matrix. The invention adopts the random Gaussian matrix as the observation matrix and performs CS observation of each pair of corresponding sub-blocks; the concrete operations are as follows:
(3a) Stack each pair of corresponding N × N sub-blocks a_i and b_i of multi-focus images A and B into N² × 1 column vectors θ_A and θ_B;
(3b) Randomly generate an M × N² random Gaussian matrix, orthogonalize it, and use it to observe the column vectors; the concrete formula is:

y_I = Φ θ_I

where Φ is the random Gaussian observation matrix, θ_I is the column vector of an image sub-block, I = A, B, and y_I is the observed value of the sub-block. The sampling rate of each sub-block in this example is r = M / N²; it is controlled by adjusting the number of rows M of the random Gaussian matrix.
After observing each sub-block of multi-focus images A and B, the observed value of each sub-block is obtained; the observation vector has size M × 1, and the same observation matrix Φ is used for every sub-block in the experiments.
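Steps (3a)-(3b) can be sketched as follows. The fixed seed and the QR-based row orthogonalization are illustrative choices: the patent only states that the Gaussian matrix is drawn randomly and then orthogonalized, and that the same Φ is reused for every sub-block:

```python
import numpy as np

def gaussian_measurement_matrix(M, n, seed=0):
    """M x n random Gaussian CS observation matrix with orthonormalized
    rows. M / n is the sampling rate; the seed is illustrative."""
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((M, n))
    q, _ = np.linalg.qr(phi.T)  # q: n x M with orthonormal columns
    return q.T                  # rows of the result are orthonormal

def observe(coeff_block, phi):
    """Stack an N x N coefficient block into a column vector theta and
    project it: y = Phi @ theta."""
    theta = coeff_block.reshape(-1)
    return phi @ theta
```

For the 32 × 32 blocks of the patent, n = 1024 and a sampling rate of 0.5 corresponds to M = 512; the same `phi` would be passed to `observe` for every sub-block of both images.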
Step 4: Fuse each pair of corresponding sub-blocks of the two multi-focus images A and B by weighting.
The observed value obtained by projecting each sub-block of A and B through the random Gaussian matrix still retains all the information of the original sub-block, so the fusion weight of each observed value is determined from the average gradient of the original sub-block. The average gradient is an evaluation index of image fusion that reflects image sharpness: the sharper the block, the larger its average gradient, and the larger the weight it should receive during fusion.
The corresponding sub-blocks of images A and B are fused by weighting; the concrete implementation is as follows:
(4a) Compute the fusion weights of each pair of corresponding sub-blocks of A and B:

$$w_A = \begin{cases} 0.5, & \text{if } \overline{\nabla G}_A = \overline{\nabla G}_B = 0 \\[4pt] \dfrac{\overline{\nabla G}_A}{\overline{\nabla G}_A + \overline{\nabla G}_B}, & \text{otherwise} \end{cases} \qquad w_B = 1 - w_A$$

where $\overline{\nabla G}_A$ and $\overline{\nabla G}_B$ are the average gradients of the corresponding sub-blocks of A and B, and w_A and w_B are the fusion weights of the corresponding sub-blocks of A and B;
(4b) Fuse the observed values of each pair of corresponding sub-blocks of A and B by weighting:

y = w_A y_A + w_B y_B

where y_A and y_B are the observed values of the two corresponding sub-blocks of A and B, and y is the fused observed value.
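Steps (4a)-(4b) amount to a few lines. The sketch below (NumPy; the function name is illustrative) applies the case formula for w_A and the weighted sum to one pair of measurement vectors:

```python
import numpy as np

def fuse_observations(yA, yB, gA, gB):
    """Average-gradient weighted fusion of two measurement vectors.
    The block with the larger average gradient (sharper focus) gets
    the larger weight; when both gradients are zero the weights fall
    back to 0.5 each, as in the patent's case formula."""
    if gA == 0 and gB == 0:
        wA = 0.5
    else:
        wA = gA / (gA + gB)
    wB = 1.0 - wA
    return wA * np.asarray(yA, dtype=float) + wB * np.asarray(yB, dtype=float)
```

Because observation is linear, fusing the measurement vectors this way is equivalent to measuring the same weighted combination of the sparse coefficient blocks, which is what makes fusion in the measurement domain legitimate.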
Step 5: Recover the fused image sub-blocks from the fused observed values with the orthogonal matching pursuit algorithm.
Orthogonal matching pursuit (OMP) is a greedy iterative algorithm; compared with the basis pursuit (BP) algorithm, it trades a larger number of required measurements for a reduction in computational complexity. Using the OMP algorithm to solve the optimization problem and reconstruct the signal greatly improves the computation speed and is easy to implement. In the concrete operation, the fused image blocks are recovered one by one; for the detailed steps of the algorithm see "Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit", IEEE Transactions on Information Theory, vol. 53, no. 12, December 2007.
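A compact version of the OMP recovery loop is sketched below (NumPy). The sparsity budget k and the tolerance are inputs the patent leaves to the reconstruction stage, and this follows the standard Tropp-Gilbert formulation rather than any patent-specific variant:

```python
import numpy as np

def omp(phi, y, k, tol=1e-10):
    """Orthogonal matching pursuit: greedily select the column of phi
    most correlated with the current residual, then re-fit all selected
    columns by least squares and update the residual. Stops after k
    iterations or when the residual norm drops below tol."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(phi.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of y on the selected columns
        coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
        residual = y - phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x = np.zeros(phi.shape[1])
    x[support] = coef
    return x
```

In the method above, the recovered vector would be reshaped back into an N × N coefficient block before the inverse wavelet transform of Step 6.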
Step 6: Apply the inverse wavelet transform to the sub-blocks recovered by the orthogonal matching pursuit algorithm.
The data recovered by the orthogonal matching pursuit algorithm are the sparse form of the fused all-in-focus image. Applying the inverse wavelet transform to each recovered sub-block yields the fused all-in-focus sub-blocks, and assembling these sub-blocks into a single image gives the fused all-in-focus image.
The effect of the invention can be illustrated by simulation experiments:
1. Experimental conditions
The experiments were run on a microcomputer with an Intel Core (TM) 2 Duo 2.33 GHz CPU and 2 GB of memory; the programming platform was Matlab 7.0.1. The image data used in the experiments are two groups of registered multi-focus images, each of size 512 × 512, taken from the image fusion website http://www.imagefusion.org/. The first group is the Clock images, shown in Fig. 2(a) and Fig. 2(b): Fig. 2(a) is the Clock source image focused on the right, and Fig. 2(b) is the Clock source image focused on the left. The second group is the Pepsi images, shown in Fig. 2(c) and Fig. 2(d): Fig. 2(c) is the Pepsi source image focused on the right, and Fig. 2(d) is the Pepsi source image focused on the left.
2. Experimental content
(2a) The Clock images were fused with the method of the invention and two existing fusion methods, with the sampling rate of each group set to 0.3, 0.5, and 0.7. The fusion results are shown in Fig. 3: Fig. 3(a) is the fusion result of the existing averaging method; Fig. 3(b) is the fusion result of the method of "Compressive Image Fusion", in Proc. IEEE Int. Conf. Image Process., pp. 1308-1311, 2008; Fig. 3(c) is the fusion result of the invention. The sampling rate for all three images is 0.5.
(2b) The Pepsi images were fused with the method of the invention and two existing fusion methods, with the sampling rate of each group set to 0.3, 0.5, and 0.7. The fusion results are shown in Fig. 4: Fig. 4(a) is the fusion result of the existing averaging method; Fig. 4(b) is the fusion result of the method of "Compressive Image Fusion", in Proc. IEEE Int. Conf. Image Process., pp. 1308-1311, 2008; Fig. 4(c) is the fusion result of the invention. The sampling rate for all three images is 0.5. The averaging method operates the same way as the method of the invention except for the fusion rule: its fusion weights are w_A = w_B = 0.5.
3. Experimental results
The fusion method of the invention is compared with the weighted-averaging method and with the method of "Compressive Image Fusion", in Proc. IEEE Int. Conf. Image Process., pp. 1308-1311, 2008, on three image evaluation indices to assess the effect of the invention. The objective evaluation indices of the three methods on the two groups of multi-focus images are listed in Table 1:
Table 1. Objective evaluation indices of multi-focus image fusion
In Table 1, Mean, CS-max-abs, and Ours denote the existing averaging method, the method of "Compressive Image Fusion" (Proc. IEEE Int. Conf. Image Process., pp. 1308-1311, 2008), and the method of the invention, respectively; r is the sampling rate, MI is mutual information, IE is information entropy, Q is the edge preservation degree, and T is the time required for image reconstruction, in seconds (s). In detail:
Mutual information (MI): mutual information reflects how much information the fused image extracts from the source images; the larger the mutual information, the more information is extracted.
Information entropy (IE): image information entropy is an important index of the richness of image information; the entropy reflects how much information the image carries, and a larger entropy indicates more information.
Edge preservation degree (Q): in essence, Q measures how well the fused image preserves the edge information of the input images; its value ranges from 0 to 1, and the closer to 1, the better the edges are preserved.
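Of the three indices, information entropy is the simplest to state precisely. A sketch of it follows (NumPy); the 256-bin gray-level histogram is an assumption of this sketch, since the patent does not fix the quantization:

```python
import numpy as np

def information_entropy(img, bins=256):
    """Shannon entropy (in bits) of an image's gray-level histogram,
    i.e. the IE index of Table 1; larger values indicate that the
    image carries more information."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 = 0)
    return float(-np.sum(p * np.log2(p)))
```

A constant image has entropy 0, and an image whose gray levels are uniformly distributed over all 256 bins attains the maximum of 8 bits; this is also why fusion noise inflates IE, as noted in the discussion of Table 1.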
Table 1 shows that, on the performance indices, the edge preservation degree Q of the method of the invention is higher than that of the existing averaging method and the CS-max-abs method; on mutual information MI, the method of the invention is higher than the averaging method and much higher than the CS-max-abs method; its information entropy IE is comparable to the averaging method but lower than the CS-max-abs method. In image reconstruction time T, the method of the invention needs much less time than the CS-max-abs fusion method. As the sampling rate increases, every index of the fusion results also improves gradually.
Figures 3 and 4 show that, on both groups of multi-focus images, the fusion results of the method of the invention have a better visual effect than those of the averaging method and the CS-max-abs fusion method; the results of the CS-max-abs method contain considerable noise and stripe texture, and their contrast is also lower. The CS-max-abs method has a worse visual effect than the method of the invention yet a higher information entropy IE because noise produced during fusion inflates the entropy, so the IE index cannot truly reflect the amount of useful information in the fused image.
The above experiments demonstrate that the proposed multi-focus image fusion method based on compressive sensing achieves a good visual effect on the multi-focus image fusion problem with low computational complexity.

Claims (3)

1. A multi-focus image fusion method based on compressive sensing, comprising the steps of:
(1) partitioning the two input multi-focus images A and B into blocks, obtaining n image sub-blocks x_i and y_i (i = 1, 2, ..., n) of size 32 × 32;
(2) computing and recording the average gradients $\overline{\nabla G}_A$ and $\overline{\nabla G}_B$ of each pair of corresponding sub-blocks x_i and y_i of A and B;
(3) applying the wavelet transform to each pair of corresponding sub-blocks x_i and y_i of A and B to obtain the sparsely transformed sub-blocks a_i and b_i, the wavelet adopted being the CDF 9/7 biorthogonal wavelet basis with 3 decomposition levels;
(4) stacking each wavelet-transformed sub-block a_i and b_i into a column vector and observing the column vector with the random Gaussian matrix, obtaining the observed values y_A and y_B of each pair of corresponding sub-blocks of A and B;
(5) fusing the observed values y_A and y_B of each pair of corresponding sub-blocks of A and B as follows, obtaining the fused sub-block observed value y:
(5a) computing the fusion weights of each pair of corresponding sub-blocks of A and B:

$$w_A = \begin{cases} 0.5, & \text{if } \overline{\nabla G}_A = \overline{\nabla G}_B = 0 \\[4pt] \dfrac{\overline{\nabla G}_A}{\overline{\nabla G}_A + \overline{\nabla G}_B}, & \text{otherwise} \end{cases} \qquad w_B = 1 - w_A$$

where $\overline{\nabla G}_A$ and $\overline{\nabla G}_B$ are the average gradients of the corresponding sub-blocks of A and B, and w_A and w_B are the fusion weights of the corresponding sub-blocks of A and B;
(5b) fusing the observed values of each pair of corresponding sub-blocks of A and B by weighting:

y = w_A y_A + w_B y_B

where y_A and y_B are the observed values of the two corresponding sub-blocks of A and B, and y is their fused value;
(6) recovering the fused sub-block observed value y with the orthogonal matching pursuit (OMP) algorithm to obtain the recovered sub-block f;
(7) applying the inverse wavelet transform to the recovered sub-block f to obtain the fused all-in-focus image F.
2. The multi-focus image fusion method based on compressive sensing according to claim 1, wherein the average gradients of the corresponding sub-blocks x_i and y_i of multi-focus images A and B described in step (2) are computed by the following formula:

$$\overline{\nabla G}_I = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ \Delta_x f(i,j)^2 + \Delta_y f(i,j)^2 \right]^{1/2}$$

where $\Delta_x f(i,j)$ and $\Delta_y f(i,j)$ are the first-order differences of the sub-block pixels of multi-focus image I (I = A, B) in the x and y directions, M × N is the sub-block size, and $\overline{\nabla G}_I$ is the average gradient of the sub-block of image I.
3. The multi-focus image fusion method based on compressive sensing according to claim 1, wherein the observation of the column vectors with the random Gaussian matrix described in step (4) is carried out by the following formula:

y_I = Φ θ_I

where Φ is the random Gaussian observation matrix, θ_I is the column vector of an image sub-block, I = A, B, and y_I is the observed value of the sub-block.
CN 201110199364 2011-07-16 2011-07-16 Multi-focus image fusion method based on compressive sensing Active CN102393958B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110199364 CN102393958B (en) 2011-07-16 2011-07-16 Multi-focus image fusion method based on compressive sensing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110199364 CN102393958B (en) 2011-07-16 2011-07-16 Multi-focus image fusion method based on compressive sensing

Publications (2)

Publication Number Publication Date
CN102393958A true CN102393958A (en) 2012-03-28
CN102393958B CN102393958B (en) 2013-06-12

Family

ID=45861269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110199364 Active CN102393958B (en) 2011-07-16 2011-07-16 Multi-focus image fusion method based on compressive sensing

Country Status (1)

Country Link
CN (1) CN102393958B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164850A (en) * 2013-03-11 2013-06-19 南京邮电大学 Method and device for multi-focus image fusion based on compressed sensing
CN103593833A (en) * 2013-10-25 2014-02-19 西安电子科技大学 Multi-focus image fusion method based on compressed sensing and energy rule
CN103839244A (en) * 2014-02-26 2014-06-04 南京第五十五所技术开发有限公司 Real-time image fusion method and device
CN104835130A (en) * 2015-04-17 2015-08-12 北京联合大学 Multi-exposure image fusion method
WO2015192570A1 (en) * 2014-06-17 2015-12-23 中兴通讯股份有限公司 Camera auto-focusing optimization method and camera
CN105534606A (en) * 2016-02-04 2016-05-04 清华大学 Intelligent imaging system for surgical operation
CN106097263A (en) * 2016-06-03 2016-11-09 江苏大学 Image reconstructing method based on full variation norm image block gradient calculation
CN106462951A (en) * 2014-06-10 2017-02-22 特拉维夫大学拉莫特有限公司 Method and system for processing an image
CN106651749A (en) * 2015-11-02 2017-05-10 福建天晴数码有限公司 Graph fusion method and system based on linear equation
CN106991665A (en) * 2017-03-24 2017-07-28 中国人民解放军国防科学技术大学 Method based on CUDA image co-registration parallel computations
CN107454330A (en) * 2017-08-24 2017-12-08 维沃移动通信有限公司 A kind of image processing method, mobile terminal and computer-readable recording medium
CN108171680A (en) * 2018-01-24 2018-06-15 沈阳工业大学 Supersparsity CS blending algorithms applied to structure light image
CN108399611A (en) * 2018-01-31 2018-08-14 西北工业大学 Multi-focus image fusing method based on gradient regularisation
CN109785282A (en) * 2019-01-22 2019-05-21 厦门大学 A kind of multi-focus image fusing method
CN110287826A (en) * 2019-06-11 2019-09-27 北京工业大学 A kind of video object detection method based on attention mechanism
CN110456348A (en) * 2019-08-19 2019-11-15 中国石油大学(华东) The wave cut-off wavelength compensation method of more visual direction SAR ocean wave spectrum data fusions
CN112019758B (en) * 2020-10-16 2021-01-08 湖南航天捷诚电子装备有限责任公司 Use method of airborne binocular head-mounted night vision device and night vision device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1402191A (en) * 2002-09-19 2003-03-12 上海交通大学 Multiple focussing image fusion method based on block dividing
CN102063713A (en) * 2010-11-11 2011-05-18 西北工业大学 Neighborhood normalized gradient and neighborhood standard deviation-based multi-focus image fusion method
CN102096913A (en) * 2011-01-25 2011-06-15 西安电子科技大学 Multi-strategy image fusion method under compressed sensing framework

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1402191A (en) * 2002-09-19 2003-03-12 上海交通大学 Multiple focussing image fusion method based on block dividing
CN102063713A (en) * 2010-11-11 2011-05-18 西北工业大学 Neighborhood normalized gradient and neighborhood standard deviation-based multi-focus image fusion method
CN102096913A (en) * 2011-01-25 2011-06-15 西安电子科技大学 Multi-strategy image fusion method under compressed sensing framework

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JOEL A. TROPP ET AL: "Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit", IEEE Transactions on Information Theory *
YANG Hairong et al.: "Compressed Sensing Theory and Reconstruction Algorithms", Acta Electronica Sinica *
FU Randi et al.: "Cloud Image Fusion Method Using Compressed Sensing in the Anti-Aliasing Contourlet Domain", Acta Photonica Sinica *
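The Tropp et al. citation above describes the orthogonal matching pursuit (OMP) algorithm that the abstract's step (5) uses to recover the fused observation vector. A minimal NumPy sketch of OMP on a random Gaussian measurement matrix (illustrative only, not the patent's implementation; the function name `omp` and the demo dimensions are assumptions):

```python
import numpy as np

def omp(Phi, y, k, tol=1e-10):
    """Recover a k-sparse x from y = Phi @ x by orthogonal matching pursuit."""
    m, n = Phi.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # greedily pick the column most correlated with the residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit restricted to the selected support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# demo: random Gaussian observation matrix (as in the abstract's step 3)
rng = np.random.default_rng(0)
n, m, k = 64, 32, 3
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x_true
x_hat = omp(Phi, y, k)
```

With m = 32 Gaussian measurements of a 3-sparse signal of length 64, OMP recovers the support exactly with overwhelming probability, after which the least-squares step reproduces the coefficients.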

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164850B (en) * 2013-03-11 2016-09-21 Nanjing University of Posts and Telecommunications Multi-focus image fusion method and device based on compressed sensing
CN103164850A (en) * 2013-03-11 2013-06-19 Nanjing University of Posts and Telecommunications Method and device for multi-focus image fusion based on compressed sensing
CN103593833A (en) * 2013-10-25 2014-02-19 Xidian University Multi-focus image fusion method based on compressed sensing and energy rule
CN103839244A (en) * 2014-02-26 2014-06-04 Nanjing 55th Institute Technology Development Co., Ltd. Real-time image fusion method and device
CN103839244B (en) * 2014-02-26 2017-01-18 Nanjing 55th Institute Technology Development Co., Ltd. Real-time image fusion method and device
CN106462951A (en) * 2014-06-10 2017-02-22 Ramot at Tel Aviv University Ltd. Method and system for processing an image
CN106462951B (en) * 2014-06-10 2019-07-05 Ramot at Tel Aviv University Ltd. Method and system for processing an image
US11257229B2 (en) 2014-06-10 2022-02-22 Ramot At Tel-Aviv University Ltd. Method and system for processing an image
US10565716B2 (en) 2014-06-10 2020-02-18 Ramot At Tel-Aviv University Ltd. Method and system for processing an image
WO2015192570A1 (en) * 2014-06-17 2015-12-23 ZTE Corporation Camera auto-focusing optimization method and camera
US10044931B2 (en) 2014-06-17 2018-08-07 Xi'an Zhongxing New Software Co., Ltd. Camera auto-focusing optimization method and camera
CN104835130A (en) * 2015-04-17 2015-08-12 Beijing Union University Multi-exposure image fusion method
CN106651749A (en) * 2015-11-02 2017-05-10 Fujian Tianqing Digital Co., Ltd. Graph fusion method and system based on linear equations
CN106651749B (en) * 2015-11-02 2019-12-13 Fujian Tianqing Digital Co., Ltd. Graph fusion method and system based on linear equations
CN105534606A (en) * 2016-02-04 2016-05-04 Tsinghua University Intelligent imaging system for surgical operations
CN106097263A (en) * 2016-06-03 2016-11-09 Jiangsu University Image reconstruction method based on total variation norm image block gradient calculation
CN106991665A (en) * 2017-03-24 2017-07-28 National University of Defense Technology CUDA-based parallel computing method for image fusion
CN106991665B (en) * 2017-03-24 2020-03-17 National University of Defense Technology CUDA-based parallel computing method for image fusion
CN107454330B (en) * 2017-08-24 2019-01-22 Vivo Mobile Communication Co., Ltd. Image processing method, mobile terminal, and computer-readable storage medium
CN107454330A (en) * 2017-08-24 2017-12-08 Vivo Mobile Communication Co., Ltd. Image processing method, mobile terminal, and computer-readable storage medium
CN108171680A (en) * 2018-01-24 2018-06-15 Shenyang University of Technology Super-sparse CS fusion algorithm applied to structured-light images
CN108171680B (en) * 2018-01-24 2019-06-25 Shenyang University of Technology Super-sparse CS fusion method applied to structured-light images
CN108399611A (en) * 2018-01-31 2018-08-14 Northwestern Polytechnical University Multi-focus image fusion method based on gradient regularization
CN108399611B (en) * 2018-01-31 2021-10-26 Northwestern Polytechnical University Multi-focus image fusion method based on gradient regularization
CN109785282A (en) * 2019-01-22 2019-05-21 Xiamen University Multi-focus image fusion method
CN110287826A (en) * 2019-06-11 2019-09-27 Beijing University of Technology Video object detection method based on attention mechanism
CN110287826B (en) * 2019-06-11 2021-09-17 Beijing University of Technology Video object detection method based on attention mechanism
CN110456348A (en) * 2019-08-19 2019-11-15 China University of Petroleum (East China) Wave cutoff wavelength compensation method for multi-view SAR wave spectrum data fusion
WO2021031466A1 (en) * 2019-08-19 2021-02-25 China University of Petroleum (East China) Wave cutoff wavelength compensation method for multiview SAR wave spectrum data fusion
CN112019758B (en) * 2020-10-16 2021-01-08 Hunan Aerospace Jiecheng Electronic Equipment Co., Ltd. Use method of airborne binocular head-mounted night vision device and night vision device

Also Published As

Publication number Publication date
CN102393958B (en) 2013-06-12

Similar Documents

Publication Publication Date Title
CN102393958B (en) Multi-focus image fusion method based on compressive sensing
Jahanshahi et al. A new methodology for non-contact accurate crack width measurement through photogrammetry for automated structural safety evaluation
Choi et al. Depth analogy: Data-driven approach for single image depth estimation using gradient samples
CN102254314B (en) Visible-light/infrared image fusion method based on compressed sensing
CN101964117B (en) Depth map fusion method and device
CN102629374B (en) Image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding
CN103679674A (en) Method and system for real-time stitching of unmanned aerial vehicle images
CN102789633B (en) Image noise reduction system and method based on K-SVD and locally linear embedding
CN103218825B (en) Fast detection method for scale-invariant spatio-temporal interest points
Sang et al. Pose‐invariant face recognition via RGB‐D images
Xia et al. PANDA: Parallel asymmetric network with double attention for cloud and its shadow detection
CN103500345A (en) Person re-identification learning method based on distance metric
Huang et al. Correlation and local feature based cloud motion estimation
CN103473559A (en) SAR image change detection method based on NSCT domain synthetic kernels
CN109509163A (en) Multi-focus image fusion method and system based on FGF
Ghita et al. Computational approach for edge linking
CN105574835A (en) Image fusion method based on linear canonical transform
Cao et al. Joint 3D reconstruction and object tracking for traffic video analysis under IoV environment
Das et al. Gca-net: utilizing gated context attention for improving image forgery localization and detection
Ge et al. WGI-Net: A weighted group integration network for RGB-D salient object detection
CN115188066A (en) Moving target detection system and method based on cooperative attention and multi-scale fusion
Li et al. A center-line extraction algorithm of laser stripes based on multi-Gaussian signals fitting
Pang et al. HiCD: Change detection in quality-varied images via hierarchical correlation distillation
CN103530612A (en) Rapid target detection method based on a small number of samples
CN106228169A (en) Distance extraction method in holoscan space based on discrete cosine transform

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant