CN111860541A - Image fusion method based on nonlinear weight - Google Patents

Image fusion method based on nonlinear weight

Info

Publication number
CN111860541A
Authority
CN
China
Prior art keywords
image
weight
matching
feature points
nonlinear
Prior art date
Legal status
Pending
Application number
CN202010708596.6A
Other languages
Chinese (zh)
Inventor
Tang Zhenmin (唐振民)
Xu Qiwen (徐启文)
Current Assignee
Jiangsu Expressway Co ltd
Jiangsu Expressway Engineering Maintenance Technology Co ltd
Nanjing Huazhi Dawei Technology Co ltd
Original Assignee
Nanjing Huazhi Dawei Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Huazhi Dawei Technology Co., Ltd.
Priority to CN202010708596.6A
Publication of CN111860541A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image fusion method based on nonlinear weight, comprising the following steps: (1) input a first image and a second image, and extract their feature points with the SURF algorithm; (2) match the feature points of the two images with the FLANN algorithm; (3) warp via a homography, deforming the second image so that the transformed first and second images are essentially aligned, and splice them mechanically; (4) apply a nonlinear weight model to fuse the transformed first and second images. The invention improves the weight distribution over the overlapping part of the images to be stitched, so that pixel transitions near the stitched edge regions are more gradual and transitions near the middle of the stitching region are more rapid. This effectively optimizes the display of the overlapping part, markedly reduces the ghosting produced after stitching complex images, and improves the visual appearance of the fused image.

Description

Image fusion method based on nonlinear weight
Technical Field
The invention relates to image processing, in particular to a method for fusing images based on nonlinear weight.
Background
The gradual fade image fusion algorithm is a linear pixel-level image fusion algorithm: it builds a weight model with linear weights over the overlapping area of the images to be stitched and generates the pixels of the overlapping area from that linear weight model. Compared with the optimal seam-line image fusion algorithm, it is more efficient and better suited to real-time applications.
At present, scholars at home and abroad have already carried out considerable research on, and improvement of, gradual fade algorithms.
Wang Dan, Liu Hui et al. proposed a trigonometric-function-weighted image stitching algorithm. It introduces a trigonometric-function-weighted image fusion model which effectively reduces stitching traces, handles sharp boundaries well, and makes the transition part of the image fairly smooth; however, noticeable ghosting remains in the overlapping part of stitched complex pictures. (Wang Dan, Liu Hui, Li Ke, et al. An image stitching algorithm with trigonometric function weight [J]. Infrared Technology, 2017, 39(1): 53-57.)
Wang Shuai, Sun Wei et al. proposed a weighted-fusion image stitching algorithm based on brightness unification, addressing the overly abrupt brightness changes that images of different brightness can produce in the fusion stage. Before stitching and fusion, the two images are brightness-preprocessed: they are converted from the original RGB color space to the lαβ color space, a linear brightness transformation is applied to the l component, and they are converted back to RGB, unifying the brightness of the two images. Although the curve-weight image fusion model of this method improves the ghosting to a certain extent, ghosting visible to the naked eye still occurs. (Wang Shuai, Sun Wei, et al. A weighted fusion image mosaic algorithm based on brightness unification [J]. Shandong Science, 2014, 27(3): 44-50.)
Xiu Chunbo and Ma Yunfei proposed an improved gradual image fusion model with squared weights, which squares the traditional linear weight. The model effectively optimizes the color transition of image fusion, making the transition in the middle area of the stitched image more natural, but its defect is obvious: its weight is always greater than the linear weight, and the ghosting is more severe. The method works well when stitching simple images, but on complex images the ghosting is very obvious. (Xiu Chunbo, Ma Yunfei. Image Stitching Based on Improved Gradual Fusion Algorithm [A]. The 31st Chinese Control and Decision Conference [C]. New York: IEEE, 2019: 2962-2966.)
Research on gradual fade algorithms has thus made definite progress and achieved definite results, but the following problems remain: (1) existing methods mainly target the color transition of the image and the handling of the stitching seam; although the overall visual effect improves, details suffer and the middle region may be blurred; (2) existing improved methods generally show ghosting to varying degrees, which worsens the visual effect.
Disclosure of Invention
The purpose of the invention is as follows: the invention provides an image fusion method based on nonlinear weight, which improves image fusion quality and reduces the ghosting generated after stitching complex images.
The technical scheme is as follows: a method for image fusion based on nonlinear weight comprises the following steps:
(1) inputting a first image and a second image, and extracting feature points of the first image and the second image through the SURF algorithm;
(2) performing feature matching on the feature points of the first image and the second image through the FLANN algorithm;
(3) warping the image through a homography, deforming the second image so that the transformed first and second images are essentially aligned, and mechanically splicing them;
(4) applying a nonlinear weight model to perform image fusion on the transformed first and second images.
Further, in the step (2), the specific steps of feature matching are as follows:
a. taking the feature points of the first image as a training set and the feature points of the second image as a query set, and acquiring Euclidean distances between all feature points in the training set and feature points in the query set;
b. by comparing Euclidean distances, retain for each training-set feature point the query-set points at the nearest and next-nearest Euclidean distances, and discard the remaining matches;
c. if the nearest Euclidean distance d1 and the next-nearest Euclidean distance d2 satisfy the following formula:
d1 / d2 < ratio
the matching pair is kept; otherwise it is discarded. Here ratio is a threshold for judging the degree of difference between the nearest-distance matching pair and the next-nearest-distance matching pair; its value is 0.4-0.6. The higher the ratio, the more matching pairs are kept and the lower the matching precision; the lower the ratio, the fewer the matching pairs and the higher the matching precision.
Further, in step (3), when the second image is deformed, blank areas without image information are filled with pure black, i.e., the RGB components of the image are all 0.
Further, in step (4), the first image and the second image are each divided into two parts according to the feature-point matching: the part containing matched feature points is the overlap region, and the part without matched feature points is the non-overlap region.
Further, in step (4), the nonlinear weight model is as follows: the first image and the second image each carry 100% weight in their non-overlapping regions C1 and C2, and in the overlap region (C1∩C2) the resulting pixel follows the equation:
C(x,y) = W1C1(x,y) + W2C2(x,y)
where C1 is the first image and C2 is the second image; W1 is the weight of the first image and W2 is the weight of the second image; C1(x,y) is the pixel value of the first image and C2(x,y) is the pixel value of the second image. The nonlinear weights W1 and W2 follow the formula plotted in FIG. 2 (given only as an image in the original filing): W1 decreases from 1 at x = left to 0 at x = right while W2 increases correspondingly, changing gently near the boundaries and rapidly in the middle of the overlap. Here D1 is the total width of the overlap region, D2 is the distance between the current pixel and the left boundary of the overlap region, left is the left boundary of the stitching overlap region, and right is the right boundary of the stitching overlap region.
Further, in step (4), when the nonlinear weight model is applied to fuse the transformed first and second images, the pure-black pixels of the transformed second image always receive a weight of 0 and do not participate in generating the overlap region.
Has the advantages that: compared with the prior art, the invention has the following notable effects: the nonlinear weight model amplifies the influence of the first image on the overlapped image near the left boundary and the influence of the second image near the right boundary, so pixel changes in the stitching region near the boundaries are smoother. Ghosting of the overlapped part is effectively reduced, visual interference such as ghosts and cracks is diminished, and the visual effect improves markedly.
Drawings
FIG. 1 is a schematic view of an image fusion process according to the present invention;
FIG. 2 is a weight diagram of a nonlinear weight image fusion model according to the present invention;
FIG. 3 is a first image to be fused;
FIG. 4 is a second image to be fused;
FIG. 5 is a diagram of the overall effect of the fused images;
FIG. 6 is a weight diagram of the traditional gradual image fusion model;
FIG. 7 is a diagram of the overall effect after image fusion with the traditional gradual fade algorithm;
FIG. 8 is a comparison of locally magnified regions after image fusion with the traditional gradual fade algorithm and with the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, the method for image fusion based on nonlinear weight includes the following steps:
(1) Inputting a first image and a second image, and performing feature extraction on both through the SURF algorithm;
Feature extraction is the first step of an image stitching application; its function is to extract and describe the key features of the images. The invention extracts features with the SURF algorithm, implemented in the following steps: a. construct the Hessian matrix of the image; b. construct a Gaussian pyramid; c. preliminarily locate the feature points; d. compute the orientation of each feature point; e. form the descriptors of the feature points.
(2) Performing feature matching on the feature points of the first image and the second image through the FLANN algorithm;
The role of feature matching is to eliminate the feature mismatches that occur in most complex images. The specific implementation steps are as follows:
a. taking the feature points of the first image as a training set and the feature points of the second image as a query set, and acquiring Euclidean distances between all feature points in the training set and feature points in the query set;
b. by comparing Euclidean distances, retain for each training-set feature point the query-set points at the nearest and next-nearest Euclidean distances, and discard the remaining matches;
c. if the nearest Euclidean distance d1 and the next-nearest Euclidean distance d2 satisfy the following formula:
d1 / d2 < ratio
the matching pair is retained; otherwise it is discarded. Here ratio is a threshold for judging the degree of difference between the nearest-distance matching pair and the next-nearest-distance matching pair; its value is 0.4-0.6. The higher the ratio, the more matching pairs are kept and the lower the matching precision; the lower the ratio, the fewer the matching pairs and the higher the matching precision.
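A minimal sketch of this matching step, continuing the SURF sketch above; ratio = 0.5 is an illustrative value inside the stated 0.4-0.6 range.

    # Sketch of step (2): FLANN matching with the ratio test above.
    import cv2

    FLANN_INDEX_KDTREE = 1  # KD-tree index suits float descriptors like SURF's
    flann = cv2.FlannBasedMatcher(
        dict(algorithm=FLANN_INDEX_KDTREE, trees=5),
        dict(checks=50),
    )

    # k=2 keeps the nearest and next-nearest neighbour for each query
    # point; the second image supplies the query set and the first image
    # the training set, as in sub-step a.
    knn_matches = flann.knnMatch(des2, des1, k=2)

    ratio = 0.5  # illustrative threshold within the stated 0.4-0.6 range
    good = [pair[0] for pair in knn_matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]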
(3) Warping the image through a homography algorithm to enable the first image and the second image to be basically aligned, and mechanically splicing; the method comprises the following specific steps:
a. the homography transform is a 3 x 3 transformation matrix H that maps points on one image to corresponding points on the other image:

H = | h11 h12 h13 |
    | h21 h22 h23 |
    | h31 h32 h33 |

b. for a pair of corresponding points (x1, y1) and (x2, y2) before and after image warping, the mapping between them is:

x2 = (h11x1 + h12y1 + h13) / (h31x1 + h32y1 + h33)
y2 = (h21x1 + h22y1 + h23) / (h31x1 + h32y1 + h33)
c. apply the above mapping formula to perform the image transformation, generate the transformed image, and carry out the mechanical splicing.
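A minimal sketch of this warping step, continuing the sketches above. RANSAC is assumed here as the estimator for H (the description does not mandate one), and at least four good matches are assumed.

    # Sketch of step (3): homography estimation, warping, mechanical splice.
    import cv2
    import numpy as np

    # Matched coordinates: query indices refer to the second image,
    # train indices to the first image.
    pts2 = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    pts1 = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # H maps second-image points into the first image's frame.
    H, _ = cv2.findHomography(pts2, pts1, cv2.RANSAC, 5.0)

    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]

    # Warp the second image; areas without image information stay pure
    # black (all components 0), as the description requires.
    warped2 = cv2.warpPerspective(img2, H, (w1 + w2, h1))

    # Mechanical splicing: lay the first image onto the shared canvas.
    canvas = warped2.copy()
    canvas[0:h1, 0:w1] = img1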
(4) Applying a nonlinear weight model to perform image fusion on the first image and the second image; the method comprises the following specific steps:
a. and according to the result of image feature matching, generating a first image of the non-spliced region by 100% weight of the first image pixels, and generating a second image of the non-spliced region by 100% weight of the second image pixels.
b. And fusing the pixels of the first image and the second image by using a nonlinear weight model to form a spliced area of the image, and sequentially synthesizing the spliced area of the image with the non-spliced area of the first image and the non-spliced area of the second image to generate a fused image.
As shown in FIG. 2, the first image and the second image each carry 100% weight in their non-overlapping regions C1 and C2, and in the overlap region (C1∩C2) the resulting pixel follows the equation:
C(x,y) = W1C1(x,y) + W2C2(x,y)
where C1 is the first image and C2 is the second image; W1 is the weight of the first image and W2 is the weight of the second image; C1(x,y) is the pixel value of the first image and C2(x,y) is the pixel value of the second image. The nonlinear weights W1 and W2 follow the curve plotted in FIG. 2 (given only as an image in the original filing): W1 decreases from 1 at x = left to 0 at x = right while W2 increases correspondingly, changing gently near the boundaries and rapidly in the middle of the overlap. Here D1 is the total width of the overlap region, D2 is the distance between the current pixel and the left boundary of the overlap region, left is the left boundary of the stitching overlap region, and right is the right boundary of the stitching overlap region.
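Since the patent's nonlinear weight formula survives only as a figure, the sketch below substitutes a smoothstep curve, W2 = 3t^2 - 2t^3 with t = D2/D1, as one assumed curve with the stated properties (gentle near the overlap boundaries, fast in the middle), and applies the pure-black rule of claim 6. It is an illustration, not the patent's own formula.

    # Sketch of step (4): nonlinear blending of the overlap columns.
    import numpy as np

    def nonlinear_fuse(canvas, img1, warped2, left, right):
        """Blend overlap columns [left, right) of canvas in place.

        Assumes a smoothstep weight curve and that the overlap lies
        within the first image's width; the patent's own curve is given
        only as a figure and may differ.
        """
        d1 = float(right - left)              # D1: total overlap width
        for x in range(left, right):
            d2 = x - left                     # D2: distance from left boundary
            t = d2 / d1
            w2 = 3 * t**2 - 2 * t**3          # weight of the second image
            w1 = 1.0 - w2                     # weight of the first image
            col1 = img1[:, x].astype(np.float64)
            col2 = warped2[:, x].astype(np.float64)
            blended = w1 * col1 + w2 * col2
            # Pure-black pixels of the warped second image carry no image
            # information: give them weight 0 (claim 6).
            black = col2.sum(axis=-1) == 0
            blended[black] = col1[black]
            canvas[:, x] = np.clip(blended, 0, 255).astype(np.uint8)
        return canvas

In use, left and right would be taken from the x-extent of the matched feature points, per the overlap definition above.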
The images shown in fig. 3 and fig. 4 are stitched using the invention, and the fused result is shown in fig. 5. Applying the nonlinear weight image fusion model amplifies the influence of the first image on the weights of pixels generated near the left stitching boundary (for example, at x = left in fig. 2) and the influence of the second image on the weights of pixels generated near the right stitching boundary (for example, at x = right in fig. 2), so that pixel changes in the overlap region (C1∩C2) near the boundaries are smoother. This effectively reduces the possibility of ghosting in the overlapping part, thereby reducing visual interference such as ghosts and cracks and improving the visual appearance of the final fused image.
As shown in fig. 6, image fusion with the traditional gradual image fusion algorithm proceeds as follows: the first image and the second image each carry 100% weight in their non-overlapping regions C1 and C2, and in the overlap region (C1∩C2) the resulting pixel follows the equation:
C(x,y) = W1C1(x,y) + W2C2(x,y)
where C1 is the first image and C2 is the second image; W1 is the weight of the first image and W2 is the weight of the second image; C1(x,y) is the pixel value of the first image and C2(x,y) is the pixel value of the second image. The linear weights W1 and W2 follow:
W1 = (D1 - D2) / D1,  W2 = D2 / D1
where D1 is the total width of the overlap region and D2 is the distance between the current pixel and the left boundary of the overlap region.
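For contrast, a one-function sketch of the traditional linear weights reconstructed above (the standard gradual fade model):

    def linear_weights(d1, d2):
        """Traditional gradual fade: weights vary linearly across the overlap."""
        w2 = d2 / d1     # second image's weight grows with the distance D2
        w1 = 1.0 - w2    # first image's weight fades correspondingly
        return w1, w2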
The images shown in fig. 3 and 4 are spliced by using a traditional gradual image fusion algorithm, and an effect image after image fusion is shown in fig. 7.
As shown in fig. 8, image 8(a) was fused with the traditional gradual fade algorithm: ghosting of the text portions after fusion is especially severe and greatly impairs detail observation of the fused image. Image 8(b) was fused with the present method: the detail information of the fused image is well preserved, there is no obvious ghosting, the sharpness is markedly better than the traditional algorithm's, and the subjective effect is better.
Quality scores are evaluated with the method proposed by Pavan C M and Rajiv S (Pavan C M, Rajiv S. Subjective and Objective Quality Assessment of Stitched Images for Virtual Reality [J]. IEEE Transactions on Image Processing, 2019, 28(11): 5620-5635), taking the image stitched by the traditional method in fig. 7 and the image stitched by the invention in fig. 5 as inputs. The objective quality score is obtained by comparing the original images with the stitched image at the pixel level and feeding a difference model to a support vector regression. To make the result more reliable, additional original and stitched images were introduced for quality-score comparison, adding 4 further experiments; the resulting quality scores are shown in Table 1.
TABLE 1 Objective quality score comparison
(The numerical scores of Table 1 are given only as an image in the original publication and are not reproduced here.)
Although the quality scores of each experiment group fluctuate, the scores obtained with the algorithm of the invention are consistently higher than those obtained with the traditional gradual fade algorithm, so the invention performs better in objective quality evaluation.
Combining subjective observation with the objective quality scores, the nonlinear weight image fusion method of the invention achieves a better effect and a higher score, indicating higher algorithm quality.

Claims (6)

1. A method for image fusion based on nonlinear weight is characterized by comprising the following steps:
(1) inputting a first image and a second image, and extracting feature points of the first image and the second image through the SURF algorithm;
(2) performing feature matching on the feature points of the first image and the second image through the FLANN algorithm;
(3) warping the image through a homography, deforming the second image so that the transformed first and second images are essentially aligned, and mechanically splicing them;
(4) applying a nonlinear weight model to perform image fusion on the transformed first and second images.
2. The method of image fusion based on nonlinear weights as claimed in claim 1, wherein: in the step (2), the specific steps of feature matching are as follows:
a. taking the feature points of the first image as a training set and the feature points of the second image as a query set, and acquiring Euclidean distances between all feature points in the training set and feature points in the query set;
b. by comparing Euclidean distances, retain for each training-set feature point the query-set points at the nearest and next-nearest Euclidean distances, and discard the remaining matches;
c. if the nearest Euclidean distance d1 and the next-nearest Euclidean distance d2 satisfy the following formula:
d1 / d2 < ratio
the matching pair is kept; otherwise it is discarded; wherein ratio is a threshold for judging the degree of difference between the nearest-distance matching pair and the next-nearest-distance matching pair; the ratio is 0.4-0.6.
3. The method of image fusion based on nonlinear weights as claimed in claim 1, wherein: in step (3), when the second image is deformed, blank areas without image information are filled with pure black, and the RGB components of the image are all 0.
4. The method of image fusion based on nonlinear weights as claimed in claim 1, wherein: in step (4), the first image and the second image are each divided into two parts according to the feature-point matching: the part containing matched feature points is the overlap region, and the part without matched feature points is the non-overlap region.
5. The method of image fusion based on nonlinear weights as claimed in claim 1, wherein: in step (4), the nonlinear weight model is: the first image and the second image each carry 100% weight in C1 and C2 respectively, and in the overlap region C1∩C2 the finally generated pixel follows the equation:
C(x,y) = W1C1(x,y) + W2C2(x,y)
where C1 is the first image and C2 is the second image; W1 is the weight of the first image and W2 is the weight of the second image; C1(x,y) is the pixel value of the first image and C2(x,y) is the pixel value of the second image; the nonlinear weights W1 and W2 follow the formula of the original filing (reproduced there only as an image), where D1 is the total width of the overlap region, D2 is the distance between the current pixel and the left boundary of the overlap region, left is the left boundary of the stitching overlap region, and right is the right boundary of the stitching overlap region.
6. The method of image fusion based on nonlinear weights as claimed in claim 1, wherein: in step (4), when the nonlinear weight model is applied to fuse the transformed first and second images, the pure-black pixels of the transformed second image never participate in generating the overlap region: their weight is always 0.
CN202010708596.6A 2020-07-22 2020-07-22 Image fusion method based on nonlinear weight Pending CN111860541A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010708596.6A CN111860541A (en) 2020-07-22 2020-07-22 Image fusion method based on nonlinear weight

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010708596.6A CN111860541A (en) 2020-07-22 2020-07-22 Image fusion method based on nonlinear weight

Publications (1)

Publication Number Publication Date
CN111860541A true CN111860541A (en) 2020-10-30

Family

ID=73001494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010708596.6A Pending CN111860541A (en) 2020-07-22 2020-07-22 Image fusion method based on nonlinear weight

Country Status (1)

Country Link
CN (1) CN111860541A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855628A (en) * 2012-08-20 2013-01-02 武汉大学 Automatic matching method for multisource multi-temporal high-resolution satellite remote sensing image
CN103729834A (en) * 2013-12-23 2014-04-16 西安华海盈泰医疗信息技术有限公司 Self-adaptation splicing method and system of X-ray images
US20180061006A1 (en) * 2016-08-26 2018-03-01 Multimedia Image Solution Limited Method for ensuring perfect stitching of a subject's images in a real-site image stitching operation
CN106940876A (en) * 2017-02-21 2017-07-11 华东师范大学 A kind of quick unmanned plane merging algorithm for images based on SURF
CN107958441A (en) * 2017-12-01 2018-04-24 深圳市科比特航空科技有限公司 Image split-joint method, device, computer equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Xu Qiwen et al.: "Research on image stitching based on an improved SURF algorithm", Journal of Nanjing University of Science and Technology, vol. 45, no. 2 *
Xu Qiwen: "Research on bridge image stitching technology based on an improved SURF algorithm", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 1 *
Wang Dan; Liu Hui; Li Ke; Zhou Wei: "An image stitching algorithm with trigonometric function weight", Infrared Technology, no. 01 *
Wang Kai; Chen Chaoyong; Wu Min; Yao Hui; Zhang Xiang: "An improved nonlinear weighted image stitching and fusion method", Journal of Chinese Computer Systems, no. 05 *


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20220712

Address after: 210049 No.6 Xianlin Avenue, Nanjing, Jiangsu Province

Applicant after: JIANGSU EXPRESSWAY CO.,LTD.

Applicant after: JIANGSU EXPRESSWAY ENGINEERING MAINTENANCE TECHNOLOGY Co.,Ltd.

Applicant after: Nanjing Huazhi Dawei Technology Co.,Ltd.

Address before: 211800 No. 99, Tuanjie Road, Jiangbei new district, Nanjing, Jiangsu

Applicant before: Nanjing Huazhi Dawei Technology Co.,Ltd.

AD01 Patent right deemed abandoned
AD01 Patent right deemed abandoned

Effective date of abandoning: 20240614