CN110033416B - Multi-granularity combined Internet of vehicles image restoration method - Google Patents

Multi-granularity combined Internet of vehicles image restoration method

Info

Publication number
CN110033416B
Authority
CN
China
Prior art keywords
image
missing
generator
internet
vehicles
Prior art date
Legal status
Active
Application number
CN201910274602.9A
Other languages
Chinese (zh)
Other versions
CN110033416A (en)
Inventor
刘群
王如琪
鲁宇
董莉娜
孟艺凝
舒航
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN201910274602.9A
Publication of CN110033416A
Application granted
Publication of CN110033416B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T5/00 Image enhancement or restoration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of image restoration and specifically relates to a multi-granularity combined Internet of vehicles image restoration method. The method enhances an Internet of vehicles image with a multi-scale MSR algorithm, and preprocesses the missing image with a region growing algorithm to acquire structural information; restores the image, according to the missing image and its structural information, with a deep neural network model of encoder-decoder structure; judges the content integrity of the completion result with a convolutional neural network serving as a content discriminator; judges the definition of the completion result with a Pixel-CNN model serving as a pixel discriminator; performs adversarial training to optimize the generator and the two discriminators; and, when the generator is trained to the optimum, finishes model training and splices the generated result with the original missing image as the final restoration result. The invention accelerates the convergence of training, improves the restoration effect, and can both restore missing images and remove occluding objects.

Description

Multi-granularity combined Internet of vehicles image restoration method
Technical Field
The invention belongs to the field of image restoration, and particularly relates to a multi-granularity combined Internet of vehicles image restoration method.
Background
Autonomous driving technology is now maturing, and information around the vehicle is acquired through cameras facing in all directions. However, because the real natural environment is complex and variable, the image information acquired by the machine is often incomplete, for example information lost to strong sunlight reflection or to obstacles. Since traditional restoration methods cannot achieve a good effect when a large amount of information is missing, while generative models achieve excellent results at image generation, many researchers have begun to study image restoration techniques combined with generative models.
At present, image restoration methods can be roughly divided into two categories: the first is image restoration based on traditional methods, mainly realized with techniques such as texture synthesis or patch search; the second is image restoration based on deep learning, in which the missing image is restored through a deep neural network.
Among current image restoration methods, the traditional methods can fully exploit the image information at the edge of the missing region and complete the missing image smoothly, but cannot effectively handle large missing regions; the deep learning methods can understand the overall semantics of the image and restore a missing region with complex semantics, but cannot blend smoothly with the original region. Restoration therefore needs to consider both the overall semantics of the image and the edge information of the missing region to achieve an effective result. If the image is restored by a deep learning method on the basis of first extracting the multi-granularity information of the missing image, the model's comprehension of the image can be improved and a better restoration effect obtained.
Disclosure of Invention
Based on the problems in the prior art, in order to improve the comprehension capability of an image completion model toward a missing image and improve the performance of the model, the invention provides a multi-granularity combined Internet of vehicles image restoration method, which comprises the following steps:
S1, performing image enhancement processing on the Internet of vehicles image by using a multi-scale MSR algorithm to improve the visual effect of the image;
S2, marking the damaged or occluded part of the enhanced image as a missing area, and converting the missing image from the RGB color space into the HSV space containing hue, saturation and lightness characteristics;
S3, preprocessing the hue, saturation and lightness granularities of the missing image respectively, and acquiring the structural information of the three granularities of the missing image by using a region growing algorithm;
S4, constructing a generator with an encoder-decoder structure, splicing the missing image with the structural information of its three granularities as the input of the generator, and performing convolution, dilated convolution and deconvolution on the input data;
S5, pre-training the generator until it generates an image that accords with the image semantics but lacks detailed information;
S6, constructing a content discriminator to judge the generator's output in terms of generated content;
S7, constructing a pixel discriminator to judge the generator's output in terms of definition;
S8, building a generative adversarial network model from the generator and the two discriminators, training it, and optimizing the model parameters until the generator generates a complete image consistent with the real image;
S9, splicing the part of the generated image corresponding to the missing area with the missing image to form an Internet of vehicles image, the part of the generated image corresponding to the missing area serving as the completion result; and smoothing the spliced result, the processed Internet of vehicles image being the final completion result.
Further, the multi-scale MSR algorithm is used to perform image enhancement on the Internet of vehicles image. To ensure that the high, middle and low scale characteristics of the image are fully considered, the number of scales is set to 3, and the weight w at each scale is set to
w_1 = w_2 = w_3 = 1/3,
i.e., the three scales are weighted equally. The visual effect of the image is improved by enhancing the image pixels at the three scales.
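As a rough illustration of this step, the three-scale MSR idea can be sketched in plain Python. This is a minimal sketch, not the patent's implementation: the box filter stands in for the Gaussian surround of a real Retinex implementation, and the radii, the equal 1/3 weights and the +1 log offset are illustrative assumptions.

```python
import math

def box_blur(img, radius):
    # naive box filter as a stand-in for the Gaussian surround of SSR/MSR
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)   # clamp at borders
                    xx = min(max(x + dx, 0), w - 1)
                    vals.append(img[yy][xx])
            out[y][x] = sum(vals) / len(vals)
    return out

def msr(img, radii=(1, 2, 4), weights=(1/3, 1/3, 1/3)):
    # multi-scale Retinex: weighted sum over scales of
    # log(I) - log(surround * I); radii/weights are illustrative
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r, wt in zip(radii, weights):
        blur = box_blur(img, r)
        for y in range(h):
            for x in range(w):
                out[y][x] += wt * (math.log(img[y][x] + 1.0)
                                   - math.log(blur[y][x] + 1.0))
    return out
```

On a uniform image the surround equals the image itself, so the MSR response is zero everywhere, which is the expected behaviour of a reflectance estimate.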
Further, for a missing image of height h and width w, features are extracted at the three granularities of hue, saturation and lightness of the image to obtain the structural information of the original image at the three granularities, which facilitates the subsequent completion work. The RGB color space of the original missing image therefore needs to be converted into the HSV color space. The three values R, G, B are first normalized as
R' = R/255, G' = G/255, B' = B/255,
and Δ = max(R', G', B') − min(R', G', B') is calculated. The lightness V of the image is calculated as V = max(R', G', B'); the saturation S of the image is calculated as
S = 0 if V = 0, and S = Δ/V otherwise;
and the hue H of the image is calculated as
H = 0 if Δ = 0,
H = 60° × ((G' − B')/Δ mod 6) if V = R',
H = 60° × ((B' − R')/Δ + 2) if V = G',
H = 60° × ((R' − G')/Δ + 4) if V = B'.
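The conversion described above is the standard RGB-to-HSV mapping and can be written directly; a minimal sketch (H in degrees, S and V in [0, 1]):

```python
def rgb_to_hsv(r, g, b):
    # normalize R, G, B from [0, 255] to [0, 1]
    rp, gp, bp = r / 255.0, g / 255.0, b / 255.0
    v = max(rp, gp, bp)                # lightness V
    delta = v - min(rp, gp, bp)        # Δ = max - min
    s = 0.0 if v == 0 else delta / v   # saturation S
    if delta == 0:
        h = 0.0                        # hue undefined on gray; use 0
    elif v == rp:
        h = (60.0 * ((gp - bp) / delta)) % 360
    elif v == gp:
        h = 60.0 * ((bp - rp) / delta + 2)
    else:
        h = 60.0 * ((rp - gp) / delta + 4)
    return h, s, v
```

For example, pure red maps to (0°, 1, 1), pure blue to (240°, 1, 1), and any gray has zero saturation.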
Further, the structural information of the image at the three granularities is obtained with the region growing algorithm, i.e., three score matrices each of height h and width w. The pixels of the image are divided in these matrices into three categories: the missing region, pixels similar to the edge of the missing region, and the remaining pixels; the missing region is scored 0, pixels similar to the edge of the missing region are scored 1, and the remaining pixels are scored 0.5.
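The scoring scheme can be sketched as follows for one granularity channel. The text does not specify the growing criterion, so the 4-neighbour breadth-first growth with an absolute-difference tolerance below is an assumption; only the 0 / 1 / 0.5 scoring follows the text.

```python
from collections import deque

def score_matrix(channel, missing, tol=0.1):
    # channel: 2-D list of values in [0,1] for one granularity (H, S or V)
    # missing: 2-D 0/1 mask, 1 marks the missing region
    h, w = len(channel), len(channel[0])
    score = [[0.5] * w for _ in range(h)]     # default: other pixels
    seeds = deque()
    for y in range(h):
        for x in range(w):
            if missing[y][x]:
                score[y][x] = 0.0             # missing region scores 0
            else:
                # known pixel bordering the missing region -> growth seed
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w and missing[yy][xx]:
                        score[y][x] = 1.0     # edge-similar pixels score 1
                        seeds.append((y, x))
                        break
    # region growing: absorb neighbours whose value is close to the seed's
    while seeds:
        y, x = seeds.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            yy, xx = y + dy, x + dx
            if (0 <= yy < h and 0 <= xx < w and score[yy][xx] == 0.5
                    and abs(channel[yy][xx] - channel[y][x]) <= tol):
                score[yy][xx] = 1.0
                seeds.append((yy, xx))
    return score
```

Running this per H, S and V channel yields the three h × w score matrices that are later concatenated with the HSV image as generator input.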
Further, a generator with an encoder-decoder structure is constructed using 11 convolution layers, 4 dilated convolution layers and 2 deconvolution layers. The three score matrices are spliced with the original HSV image to form a tensor of size h × w × 6 as the input of the generator, and an HSV image of size h × w × 3 is output.
Further, the generator is trained on a large number of images with the L2 loss between the complete image and the generated image as the evaluation index, and the parameters of the generator are updated; the first loss function used in training is:
L(x_l, c, x_r) = ||G(x_l|c) ⊙ x_r||_2
where x_r is the complete image, x_l is the missing image, c is the multi-granularity structural information, and G(·) is the image generated by the generator; G(x_l|c) denotes the image generated by the generator with the missing image x_l and the multi-granularity structural information c as input; ||·|| denotes the two-norm; ⊙ denotes the inner product. Many training iterations over a large amount of data are performed until the generator can generate an image similar to the complete image from the missing image and the multi-granularity structural information.
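Read as the surrounding text describes it — "the L2 loss of the complete image and the generated image" — the pre-training criterion can be sketched as a plain squared-difference two-norm. Note this difference form is an interpretation of the prose; the printed formula itself uses ⊙.

```python
import math

def l2_loss(generated, complete):
    # L2 (two-norm) distance between the generated and the complete image,
    # as the surrounding text describes; images are 2-D lists of floats
    s = 0.0
    for grow, crow in zip(generated, complete):
        for g, c in zip(grow, crow):
            s += (g - c) ** 2
    return math.sqrt(s)
```

The loss is zero exactly when the generated image reproduces the complete image, which is the fixed point the pre-training phase drives toward.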
Further, a content discriminator is constructed with 3 convolution layers and 2 fully-connected layers; an HSV image of size h × w × 3 is used as its input, and the output is a value in the interval [0,1] representing the probability that the input image is consistent with the complete image in terms of content.
Furthermore, an HSV image of size h × w × 3 is used as the input of the pixel discriminator. The input image is expanded to size 95 × 95 × 3 by symmetric padding and divided into h × w tensors of size 32 × 32 × 3. After passing through a Pixel-CNN model with 3 convolution layers, 2 pooling layers and 2 fully-connected layers, these h × w tensors are output as h × w values in the interval [0,1]; the h × w values are spliced together and passed through a fully-connected layer, which outputs one value in the interval [0,1] representing the probability that the input image is consistent with the complete image in terms of definition.
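The padding arithmetic above can be checked quickly: a stride-1 sliding window of size 32 yields exactly h × w patches (one per original pixel) when the image is padded by 31 pixels per dimension, and the stated 95 × 95 size is consistent with h = w = 64 — an inference, since the text does not state h and w explicitly.

```python
def padded_size_and_patch_count(h, w, patch=32):
    # symmetric padding so a stride-1 sliding window of size `patch`
    # produces exactly one patch per original pixel (h x w patches)
    ph, pw = h + patch - 1, w + patch - 1
    patches = (ph - patch + 1, pw - patch + 1)
    return (ph, pw), patches
```

With h = w = 64 this reproduces the 95 × 95 padded size and 64 × 64 patches mentioned in the text.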
Further, a generative adversarial network model is built from the generator, the content discriminator and the pixel discriminator, and the loss function is defined as:
min_G max_{D_c, D_p} E[L(x_l, c, x_r) + αR],
E[R] = E[log(D_c(x_r) ∧ D_p(x_r)) + log(1 − (D_c(G(x_l|c)) ∧ D_p(G(x_l|c))))];
where E[·] denotes the mean over all training data, L(·) denotes the L2 loss between the complete image and the generated image, x_r is the complete image, x_l is the missing image, c is the multi-granularity structural information, and α is a hyper-parameter; min_G max_{D_c, D_p} denotes minimizing with respect to G(·) while maximizing the sum with respect to D_c(·) and D_p(·); D_c(·) is the judgment of the content discriminator, D_p(·) is the judgment of the pixel discriminator, G(·) is the image generated by the generator, and G(x_l|c) denotes the image generated by the generator with the missing image x_l and the multi-granularity structural information c as input; ∧ denotes the AND operation. Adversarial training with a large amount of data makes the generated result closer to the complete image in both image content and definition.
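A scalar sketch of the discriminator term E[R] for a single sample follows. The ∧ ("AND") on the two discriminator probabilities is implemented here as min(), which is one plausible reading — the text does not pin the operator down (a product t-norm would be another reading), so and_op is an assumption.

```python
import math

def adversarial_term(dc_real, dp_real, dc_fake, dp_fake, and_op=min):
    # R for one sample: log(Dc(xr) ∧ Dp(xr)) + log(1 - Dc(G) ∧ Dp(G));
    # the ∧ on probabilities is taken as min() here (an assumption)
    return (math.log(and_op(dc_real, dp_real))
            + math.log(1.0 - and_op(dc_fake, dp_fake)))

def generator_objective(l2, r_term, alpha=0.1):
    # the generator minimizes L + alpha * R; alpha is the hyper-parameter
    return l2 + alpha * r_term
```

The discriminators maximize the R term while the generator minimizes the combined objective, matching the min_G max_{D_c,D_p} formulation.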
Further, the generated image is segmented to separate the part corresponding to the missing position of the missing image, and that part is spliced with the original missing image; the spliced edge is smoothed by mean filtering, and the complete image thus formed is the final completion result.
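The final compositing step can be sketched as a masked paste followed by a 3 × 3 mean filter applied only near the seam. Restricting the filter to seam pixels is an assumption; the text only states that the spliced edge is smoothed by mean filtering.

```python
def composite(original, generated, missing):
    # paste the generator output into the missing region only
    h, w = len(original), len(original[0])
    return [[generated[y][x] if missing[y][x] else original[y][x]
             for x in range(w)] for y in range(h)]

def smooth_seam(img, missing):
    # 3x3 mean filter applied only where the clamped neighbourhood mixes
    # missing and known pixels, i.e. along the splice seam
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            near = [missing[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            if 0 < sum(near) < 9:   # seam: neighbourhood straddles the mask
                vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
                out[y][x] = sum(vals) / 9.0
    return out
```

Only the restored region changes relative to the input image, so intact pixels far from the seam are preserved exactly.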
The invention has the beneficial effects that:
the method fully extracts the multi-granularity information of the original missing image, helps the model to fully understand the image semantics by fully acquiring the structural information of the image on three granularities of hue, saturation and lightness, and rarely acquires the multi-granularity information before inputting the depth model in the prior art; two discriminators are used when a confrontation network model is constructed and generated, and the generated result is restrained from the aspects of content and definition respectively, so that the quality of the generated result is improved; the expansion convolution layer in the generator increases the receptive field of the model, fully extracts the characteristic information of the input data and improves the recovery effect; before the countermeasure training, a conventional training mode is used to enable the model to be converged as soon as possible, and then the countermeasure mode for generating the countermeasure network is used for training, so that the condition that the model is broken down in the countermeasure training process is avoided; by combining the traditional image restoration method with deep learning, the restoration effect is improved, and missing images acquired under the automatic driving situation can be effectively restored and removed.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic diagram of a generator structure employed in the present invention;
FIG. 3 is a schematic diagram of a content discriminator employed in the present invention;
FIG. 4 is a schematic diagram of a Pixel-CNN structure adopted by the present invention;
fig. 5 is a schematic diagram of the confrontational training process employed in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly and completely apparent, the technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
The method is mainly used for restoring images in the autonomous driving process, including completing images left incomplete by the environment or the equipment and removing target objects. The damaged or occluded part of the image is marked as the missing area, and the missing image is preprocessed with the region growing algorithm at each of the three granularities of hue, saturation and lightness to obtain its structural information; restoration is performed with a deep neural network model of encoder-decoder structure according to the missing image and its structural information; a convolutional neural network serving as a content discriminator judges the content integrity of the completion result; a Pixel-CNN model serving as a pixel discriminator judges whether the definition of the completion result is consistent with that of the original image; extensive adversarial training is performed on the generator and the two discriminators to optimize the restoration result; and when the generator is trained to the optimum, model training finishes, and the generated result is spliced with the original missing image as the final restoration result.
Specifically, the invention provides a multi-granularity combined image restoration method for an internet of vehicles, as shown in fig. 1, including:
s1, image enhancement processing is carried out on the images of the Internet of vehicles by utilizing a multi-scale MSR algorithm, and the visual effect of the images is improved.
S2, marking the damaged part or the shelter part in the enhanced image as a missing area, and converting the missing image from an RGB color space into an HSV space containing hue, saturation and brightness characteristics;
s3, preprocessing the hue granularity, the saturation granularity and the brightness granularity of the missing image respectively, and acquiring structural information of the three granularities of the missing image by using a region growing algorithm;
s4, constructing a generator with an encoder-decoder structure, splicing the missing image and the structure information of the three granularities of the missing image to be used as the input of the generator, and performing convolution, expansion convolution and deconvolution on input data;
s5, pre-training the generator until the generator generates an image which accords with the semantic meaning of the image but lacks detailed information;
s6, constructing a content discriminator, and discriminating the generated result of the generator in the aspect of generating content;
s7, constructing a pixel discriminator, and discriminating the generated result of the generator in the aspect of definition;
s8, constructing the generator and the two discriminators to generate a confrontation network model, training the confrontation network model, and optimizing model parameters until the generator generates a complete image consistent with the real image;
s9, splicing the corresponding part of the missing area in the generated image and the missing image to form a car networking image, wherein the corresponding part of the missing area in the generated image is used as a completion result; and smoothing the splicing result, wherein the processed images of the Internet of vehicles are the final completion result.
In the embodiment, the multi-scale MSR algorithm is first used to perform image enhancement on the Internet of vehicles image. To ensure that the high, middle and low scale characteristics of the image are fully considered, the number of scales is set to 3, and the weight w at each scale is set to
w_1 = w_2 = w_3 = 1/3,
i.e., the three scales are weighted equally. The visual effect of the image is improved by enhancing the image pixels at the three scales.
In this embodiment, the missing region or occluding object in the image is labeled to indicate the pixel positions to be restored. Further, for a missing image of height h and width w, features are extracted at the three granularities of hue, saturation and lightness of the image to obtain the structural information of the original image at the three granularities, which facilitates the subsequent completion work. The RGB color space of the original missing image therefore needs to be converted into the HSV color space. The three values R, G, B are first normalized as
R' = R/255, G' = G/255, B' = B/255,
and Δ = max(R', G', B') − min(R', G', B') is calculated. The lightness V of the image is calculated as V = max(R', G', B'); the saturation S of the image is calculated as
S = 0 if V = 0, and S = Δ/V otherwise;
and the hue H of the image is calculated as
H = 0 if Δ = 0,
H = 60° × ((G' − B')/Δ mod 6) if V = R',
H = 60° × ((B' − R')/Δ + 2) if V = G',
H = 60° × ((R' − G')/Δ + 4) if V = B'.
After the color space conversion, this embodiment obtains the structural information of the image at the three granularities with the region growing algorithm, i.e., three score matrices each of height h and width w. The pixels of the image are divided in each matrix into three categories: the missing region (scored 0), pixels similar to the edge of the missing region (scored 1), and the remaining pixels (scored 0.5).
This embodiment constructs a generator with an encoder-decoder structure using 11 convolution layers, 4 dilated convolution layers and 2 deconvolution layers, as shown in fig. 2, where the convolution kernel size is 5 or 3 and the stride is 2 throughout. The three score matrices are spliced with the original HSV image to form a tensor of size h × w × 6 as the input of the generator, and an HSV image of size h × w × 3 is output.
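The benefit claimed for the dilated layers — an enlarged receptive field — can be illustrated with a small helper that traces receptive-field growth through a stack of layers. The layer settings in the test cases are illustrative, not the patent's exact architecture.

```python
def receptive_field(layers):
    # layers: list of (kernel, stride, dilation) from input to output;
    # returns the receptive field of one output pixel in input pixels
    rf, jump = 1, 1
    for k, s, d in layers:
        k_eff = d * (k - 1) + 1        # effective kernel size under dilation
        rf += (k_eff - 1) * jump       # each layer widens the field by jump
        jump *= s                      # stride compounds across layers
    return rf
```

A single 3 × 3 layer with dilation 2 already covers a 5-pixel span — the same as two stacked plain 3 × 3 layers — which is why dilation extracts wider context without extra depth or downsampling.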
In the early training stage of this embodiment, the generator is trained on a large number of images with the L2 loss between the complete image and the generated image as the evaluation index, and the parameters of the generator are updated; the loss function used in training is:
L(x_l, c, x_r) = ||G(x_l|c) ⊙ x_r||_2
where L(x_l, c, x_r) denotes the first loss between the complete image and the generated image; x_r denotes the complete image, i.e., the intact Internet of vehicles image; x_l is the missing image, i.e., the missing Internet of vehicles image; c is the multi-granularity structural information; G(·) is the image generated by the generator, and G(x_l|c) denotes the image generated by the generator with the missing image x_l and the multi-granularity structural information c as input; ||·|| denotes the two-norm; ⊙ denotes the inner product. Many training iterations over a large amount of data are performed until the generator can generate an image similar to the complete image from the missing image and the multi-granularity structural information.
Next, this embodiment constructs the content discriminator with 3 convolution layers and 2 fully-connected layers, as shown in fig. 3, where the convolution kernel size is 3 and the stride is 2. An HSV image of size h × w × 3 is used as the input of the content discriminator, and the output is a value in the interval [0,1] representing the probability that the input image is consistent with the complete image in terms of content.
Meanwhile, this embodiment constructs a pixel discriminator. An HSV image of size h × w × 3 is used as its input; the input image is expanded to size 95 × 95 × 3 by symmetric padding and divided into h × w tensors of size 32 × 32 × 3. After passing through a Pixel-CNN model with 3 convolution layers, 2 pooling layers and 2 fully-connected layers, as shown in fig. 4, these h × w tensors are output as h × w values in the interval [0,1]; the h × w values are spliced together and passed through a fully-connected layer, which outputs one value in the interval [0,1] representing the probability that the input image is consistent with the complete image in terms of definition.
This embodiment builds a generative adversarial network model from the generator, the content discriminator and the pixel discriminator, as shown in fig. 5, and defines the loss function:
min_G max_{D_c, D_p} E[L(x_l, c, x_r) + αR],
E[R] = E[log(D_c(x_r) ∧ D_p(x_r)) + log(1 − (D_c(G(x_l|c)) ∧ D_p(G(x_l|c))))];
where E[·] denotes the mean over all training data, L(x_l, c, x_r) denotes the first loss between the complete image and the generated image; x_r is the complete image, x_l is the missing image, c is the multi-granularity structural information, and α is a hyper-parameter; min_G max_{D_c, D_p} denotes minimizing with respect to G(·) while maximizing the sum with respect to D_c(·) and D_p(·); D_c(·) is the judgment of the content discriminator, D_p(·) is the judgment of the pixel discriminator, G(·) is the image generated by the generator, and G(x_l|c) denotes the image generated by the generator with the missing image x_l and the multi-granularity structural information c as input; ∧ denotes the AND operation. Later-stage adversarial training with a large amount of data makes the generated result closer to the complete image in both image content and definition.
Finally, the generated image is segmented to separate the part corresponding to the missing position of the missing image, and that part is spliced with the original missing image; the spliced edge is smoothed by mean filtering, and the complete image thus formed is the final completion result.
Existing image restoration methods rarely extract multi-granularity information before the input reaches the depth model. In the present method, two discriminators are used when building the generative adversarial network model, constraining the generated result in terms of content and of definition respectively, which improves the quality of the generated result; the dilated convolution layers in the generator enlarge the receptive field of the model, fully extract the feature information of the input data and improve the restoration effect; before adversarial training, a conventional training mode is used so that the model converges as soon as possible, and only then is the adversarial mode of the generative adversarial network used for training, which avoids model collapse during adversarial training; by combining the traditional image restoration method with deep learning, the restoration effect is improved, and missing images acquired in the autonomous-driving scenario can be effectively restored and occluding objects removed.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: ROM, RAM, magnetic or optical disks, and the like.
The above-mentioned embodiments further illustrate the objects, technical solutions and advantages of the present invention. It should be understood that they are only preferred embodiments of the present invention and should not be construed as limiting it; any modifications, equivalents, improvements, etc. made within the spirit and principle of the present invention shall be included in its protection scope.

Claims (8)

1. A multi-granularity combined Internet of vehicles image restoration method is characterized by comprising the following steps:
S1, performing image enhancement processing on the Internet of vehicles image by using a multi-scale MSR algorithm;
S2, marking the damaged or occluded part of the enhanced Internet of vehicles image as a missing area, and converting the missing image from the RGB color space into the HSV space containing hue, saturation and lightness characteristics;
S3, preprocessing the hue, saturation and lightness granularities of the missing image respectively, and acquiring the structural information of the pixel coordinates corresponding to each of the three granularities of the missing image by using a region growing algorithm;
S4, constructing a generator with an encoder-decoder structure, splicing the missing image with the pixel-coordinate structural information corresponding to its three granularities according to the corresponding coordinate positions as the input of the generator, and performing convolution, dilated convolution and deconvolution on the input data;
S5, pre-training the generator until it generates an image that accords with the image semantics but lacks detailed information;
S6, constructing a content discriminator to judge the generator's output in terms of generated content;
S7, constructing a pixel discriminator to judge the generator's output in terms of definition;
S8, building a generative adversarial network model from the generator and the two discriminators, training it, and optimizing the model parameters until the generator generates a complete image consistent with the real image;
S9, splicing the part of the generated image corresponding to the missing area with the missing image to form an Internet of vehicles image, the part of the generated image corresponding to the missing area serving as the completion result; and smoothing the spliced result, the processed Internet of vehicles image being the final completion result.
2. The multi-granularity combined Internet of vehicles image restoration method according to claim 1, wherein the step S3 includes obtaining the structural information of the image at the three granularities with the region growing algorithm, i.e., three score matrices each of height h and width w; the pixels in each score matrix are divided into three categories, namely pixels of the missing area, pixels similar to the edge of the missing area, and other pixels; pixels of the missing area are scored 0, pixels similar to the edge of the missing area are scored 1, and the other pixels are scored 0.5.
3. The multi-granularity combined Internet of vehicles image restoration method according to claim 1, wherein the step S4 comprises constructing a generator with an encoder-decoder structure using 11 convolution layers, 4 dilated convolution layers and 2 deconvolution layers; the three score matrices are spliced with the HSV image of the original missing image to form a tensor of size h × w × 6 as the input of the generator, and an HSV image of size h × w × 3 is output, where h and w are the height and width of the missing Internet of vehicles image.
4. The multi-granularity combined Internet of Vehicles image restoration method according to claim 1, wherein step S5 comprises training the generator with the first loss over the missing image, the complete image and the generated image, and updating the parameters of the generator, the first loss function being:

L(x_l, c, x_r) = ||G(x_l|c) ⊙ x_r||_2

where L(x_l, c, x_r) represents the first loss between the complete image and the generated image; x_r denotes the complete image, i.e., the complete Internet of Vehicles image; x_l is the missing image, i.e., the missing Internet of Vehicles image; c is the multi-granularity structural information; G(·) is the image generated by the generator, and G(x_l|c) denotes the image generated by the generator with the missing image x_l and the multi-granularity structural information c as input; ||·|| denotes the two-norm; ⊙ denotes the inner product.
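Taken at face value, the first loss above can be computed as follows (a sketch; ⊙ is implemented here as the element-wise product, one possible reading of the claim's "inner product"):

```python
import numpy as np

def first_loss(generated, complete):
    """L(x_l, c, x_r) = ||G(x_l|c) ⊙ x_r||_2 as written in claim 4.
    `generated` is G(x_l|c), `complete` is x_r; ⊙ is taken as the
    element-wise product (an assumption about the claim's notation)."""
    return float(np.linalg.norm((generated * complete).ravel()))
```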
5. The multi-granularity combined Internet of Vehicles image restoration method according to claim 1, wherein step S6 comprises constructing the content discriminator from 3 convolutional layers and 2 fully connected layers, taking an HSV image of size h × w × 3 as the input of the content discriminator and outputting a value in the interval [0,1], the value representing the probability that the input image is consistent with the complete image in terms of content.
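A toy stand-in for the final stage of this discriminator (the convolutional layers are omitted; `weights` and `bias` are illustrative parameters): flatten the image, apply one fully connected layer, and squash to [0,1] with a sigmoid.

```python
import numpy as np

def content_score(img, weights, bias):
    """Map an h x w x 3 image to a probability in [0, 1] via one fully
    connected layer and a sigmoid. The claim's discriminator has
    3 conv + 2 FC layers; only the final mapping is sketched here."""
    z = float(img.reshape(-1) @ weights + bias)
    return 1.0 / (1.0 + np.exp(-z))
```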
6. The multi-granularity combined Internet of Vehicles image restoration method according to claim 1, wherein step S7 comprises taking an HSV image of size h × w × 3 as the input of the pixel discriminator; the input image is expanded to 95 × 95 × 3 by symmetric padding and divided into h × w tensors of size 32 × 32 × 3; the h × w tensors are passed through a Pixel-CNN model with 3 convolutional layers, 2 pooling layers and 2 fully connected layers, yielding h × w values in the interval [0,1]; the h × w values are concatenated and passed through one fully connected layer to output a single value in the interval [0,1], the value representing the probability that the input image is consistent with the complete image in terms of definition.
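The padding-and-patching step can be sketched as follows. The padding widths are chosen so that every pixel receives a full patch-sized neighbourhood (an assumption; the claim only states that symmetric padding is used):

```python
import numpy as np

def pixel_patches(img, patch=32):
    """Split an h x w x c image into h*w patches of size patch x patch x c,
    one per pixel, after symmetric (mirror) padding as in claim 6."""
    h, w, c = img.shape
    lo, hi = patch // 2 - 1, patch // 2      # 15 and 16 for patch=32
    padded = np.pad(img, ((lo, hi), (lo, hi), (0, 0)), mode="symmetric")
    out = np.empty((h, w, patch, patch, c), dtype=img.dtype)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch]
    return out
```

Each of the resulting h × w patches would then be scored by the Pixel-CNN branch described in the claim.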
7. The multi-granularity combined Internet of Vehicles image restoration method according to claim 1, wherein step S8 comprises constructing the generative adversarial network model from the generator, the content discriminator and the pixel discriminator, defining a second loss function, and training until the second loss function is minimized, the second loss function being expressed as:

min_G max_{D_c,D_p} E[R] + α·L(x_l, c, x_r);

E[R] = E[log(D_c(x_r) ∧ D_p(x_r)) + log(1 − (D_c(G(x_l|c)) ∧ D_p(G(x_l|c))))];

where E[R] denotes the mean over all training data; L(x_l, c, x_r) denotes the first loss between the complete image and the generated image; x_r is the complete image, x_l is the missing image, c is the multi-granularity structural information, and α is a hyper-parameter; min_G max_{D_c,D_p} denotes minimizing with respect to G(·) and maximizing with respect to D_c(·) and D_p(·); D_c(·) is the discrimination result of the content discriminator, and D_p(·) is the discrimination result of the pixel discriminator; G(·) is the image generated by the generator, and G(x_l|c) denotes the image generated by the generator with the missing image x_l and the multi-granularity structural information c as input; ∧ denotes the AND operation.
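For one training sample, the adversarial term E[R] of this claim can be evaluated as follows; the ∧ ("AND") of the two discriminator probabilities is implemented here as their minimum, which is an assumption since the claim does not define ∧ for real-valued outputs:

```python
import math

def expected_reward(dc_real, dp_real, dc_fake, dp_fake):
    """Per-sample E[R] from claim 7.

    dc_*/dp_* are content- and pixel-discriminator probabilities for a
    real (complete) image and a generated (fake) image. The AND of two
    probabilities is taken as their minimum (an illustrative choice)."""
    real = min(dc_real, dp_real)   # D_c(x_r) ∧ D_p(x_r)
    fake = min(dc_fake, dp_fake)   # D_c(G(x_l|c)) ∧ D_p(G(x_l|c))
    return math.log(real) + math.log(1.0 - fake)
```

Averaging this quantity over the training set and adding α times the first loss gives the full second-loss objective, minimized over the generator and maximized over the two discriminators.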
8. The multi-granularity combined Internet of Vehicles image restoration method according to claim 1, wherein step S9 comprises segmenting the generated image, separating out the part corresponding to the missing area of the missing image, stitching that part into the original missing image, and smoothing the stitched edge by mean filtering; the complete image thus formed is the final completion result.
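The stitch-and-smooth step can be sketched as follows (illustrative only: the one-pixel seam band and the filter size `k` are assumptions, and the `np.roll`-based band detection wraps at image borders):

```python
import numpy as np

def stitch_and_smooth(missing_img, generated_img, mask, k=3):
    """Paste generated pixels into the missing region, then mean-filter
    a band around the seam (claim 8 / step S9). `mask` is True where
    pixels are missing; k is the mean-filter window size."""
    out = missing_img.astype(float).copy()
    out[mask] = generated_img[mask]
    # k x k mean filter of the stitched image (edge padding at borders)
    h, w = mask.shape
    pad = k // 2
    padded = np.pad(out, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    blurred = np.zeros_like(out)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= k * k
    # seam band: known pixels with a missing neighbour, plus missing
    # pixels with a known neighbour (4-neighbourhood)
    neigh_missing = (np.roll(mask, 1, 0) | np.roll(mask, -1, 0) |
                     np.roll(mask, 1, 1) | np.roll(mask, -1, 1))
    neigh_known = (np.roll(~mask, 1, 0) | np.roll(~mask, -1, 0) |
                   np.roll(~mask, 1, 1) | np.roll(~mask, -1, 1))
    band = (~mask & neigh_missing) | (mask & neigh_known)
    out[band] = blurred[band]
    return out
```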
CN201910274602.9A 2019-04-08 2019-04-08 Multi-granularity combined Internet of vehicles image restoration method Active CN110033416B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910274602.9A CN110033416B (en) 2019-04-08 2019-04-08 Multi-granularity combined Internet of vehicles image restoration method


Publications (2)

Publication Number Publication Date
CN110033416A CN110033416A (en) 2019-07-19
CN110033416B true CN110033416B (en) 2020-11-10

Family

ID=67237640


Country Status (1)

Country Link
CN (1) CN110033416B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599435B (en) * 2019-09-04 2021-01-12 精英数智科技股份有限公司 Image synthesis method, device, equipment and storage medium
CN110825079A (en) * 2019-10-15 2020-02-21 珠海格力电器股份有限公司 Map construction method and device
CN111080609B (en) * 2019-12-12 2020-12-15 哈尔滨市科佳通用机电股份有限公司 Brake shoe bolt loss detection method based on deep learning
CN111047543A (en) * 2019-12-31 2020-04-21 腾讯科技(深圳)有限公司 Image enhancement method, device and storage medium
CN111311507B (en) * 2020-01-21 2022-09-23 山西大学 Ultra-low light imaging method based on multi-granularity cooperative network
CN114698398A (en) * 2020-10-30 2022-07-01 京东方科技集团股份有限公司 Image processing method, image processing apparatus, electronic device, and readable storage medium
CN114897722B (en) * 2022-04-29 2023-04-18 中国科学院西安光学精密机械研究所 Wavefront image restoration method based on self-coding network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104281588A (en) * 2013-07-03 2015-01-14 广州盖特软件有限公司 Multi-granularity-based cloth image retrieval method
CN104598912A (en) * 2015-01-23 2015-05-06 湖南科技大学 Traffic light detection and recognition method based CPU and GPU cooperative computing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573222B (en) * 2018-03-28 2020-07-14 中山大学 Pedestrian image occlusion detection method based on cyclic confrontation generation network
CN109345604B (en) * 2018-08-01 2023-07-18 深圳大学 Picture processing method, computer device and storage medium
CN109685072B (en) * 2018-12-22 2021-05-14 北京工业大学 Composite degraded image high-quality reconstruction method based on generation countermeasure network




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant