CN107590811B - Scene segmentation based landscape image processing method and device and computing equipment

Info

Publication number
CN107590811B
Authority
CN
China
Prior art keywords
scene segmentation
image
convolution
processed
layer
Prior art date
Legal status
Active
Application number
CN201710909638.0A
Other languages
Chinese (zh)
Other versions
CN107590811A (en)
Inventor
张蕊
颜水成
唐胜
程斌
Current Assignee
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201710909638.0A priority Critical patent/CN107590811B/en
Publication of CN107590811A publication Critical patent/CN107590811A/en
Application granted granted Critical
Publication of CN107590811B publication Critical patent/CN107590811B/en

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a scene segmentation-based landscape image processing method and device, a computing device and a computer storage medium. The method comprises the following steps: obtaining a landscape image to be processed; inputting the landscape image to be processed into a scene segmentation network to obtain a scene segmentation result corresponding to the landscape image to be processed; determining the contour information of a specific object according to the scene segmentation result corresponding to the landscape image to be processed; and adding a beautifying effect to the specific object according to the contour information of the specific object to obtain a processed landscape image. According to this technical scheme, the scene segmentation result corresponding to a landscape image can be obtained quickly and accurately, a beautifying effect can be added more accurately to scenes such as the sky and grassland in the landscape image based on the scene segmentation result, and the image display effect is improved.

Description

Scene segmentation based landscape image processing method and device and computing equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a scenic image processing method and device based on scene segmentation, a computing device and a computer storage medium.
Background
In the prior art, image scene segmentation processing methods are mainly based on fully convolutional neural networks in deep learning. These methods use the idea of transfer learning to transfer a network pre-trained on a large-scale classification data set to an image segmentation data set for training, so as to obtain a segmentation network for scene segmentation, and then use the segmentation network to perform scene segmentation on an image.
The network architecture of the segmentation network obtained in the prior art directly reuses an image classification network, in which the size of the convolution block in each convolution layer is fixed, so the size of the receptive field is also fixed. The receptive field refers to the region of the input image that corresponds to a node of the output feature map, and a receptive field of fixed size is only suitable for capturing targets of a fixed size and scale. However, in image scene segmentation the scene often contains objects of different sizes, and problems frequently arise when a segmentation network with a fixed-size receptive field processes objects that are too large or too small. For a small object, the receptive field captures too much background around the object, so the object is confused with the background, missed, and misjudged as background; for a large object, the receptive field can only capture part of the object, so the category judgment of the object is biased and the segmentation result is discontinuous. Therefore, the image scene segmentation processing methods in the prior art have low segmentation accuracy, and the obtained segmentation result cannot be used to add processing effects well to scenes such as the sky and grassland in a landscape image, so the processed landscape image has a poor display effect.
Disclosure of Invention
In view of the above, the present invention has been made to provide a scene segmentation based scenic image processing method, apparatus, computing device and computer storage medium that overcome or at least partially solve the above-mentioned problems.
According to an aspect of the present invention, there is provided a scenic image processing method based on scene segmentation, the method being performed based on a trained scene segmentation network, the method comprising:
obtaining a landscape image to be processed; wherein, the scenery image to be processed contains a specific object;
inputting the landscape image to be processed into a scene segmentation network, wherein, in at least one convolution layer of the scene segmentation network, a first convolution block of the convolution layer is scaled by using a scale coefficient output by a scale regression layer to obtain a second convolution block, and the convolution operation of the convolution layer is then performed by using the second convolution block to obtain an output result of the convolution layer; the scale regression layer is an intermediate convolution layer of the scene segmentation network;
obtaining a scene segmentation result corresponding to the to-be-processed landscape image;
determining the outline information of a specific object according to a scene segmentation result corresponding to the to-be-processed landscape image;
and adding a beautifying effect to the specific object according to the contour information of the specific object to obtain a processed landscape image.
Further, performing convolution operation on the convolutional layer by using the second convolution block, and obtaining an output result of the convolutional layer further includes:
sampling feature vectors from the second convolution block by using a linear interpolation method to form a third convolution block;
and performing convolution operation according to the third convolution block and the convolution kernel of the convolution layer to obtain an output result of the convolution layer.
Further, the samples used for training the scene segmentation network include: a plurality of sample images stored in a sample library and annotated scene segmentation results corresponding to the sample images.
Further, the training process of the scene segmentation network is completed through multiple iterations; in an iteration process, a sample image and an annotated scene segmentation result corresponding to the sample image are extracted from a sample library, and training of a scene segmentation network is achieved by using the sample image and the annotated scene segmentation result.
Further, the training process of the scene segmentation network is completed through multiple iterations; wherein, the one-time iteration process comprises the following steps:
inputting the sample image into a scene segmentation network to obtain a sample scene segmentation result corresponding to the sample image;
and obtaining a scene segmentation network loss function according to the segmentation loss between the sample scene segmentation result and the labeled scene segmentation result, and realizing the training of the scene segmentation network by using the scene segmentation network loss function.
Further, the training step of the scene segmentation network comprises:
extracting a sample image and an annotation scene segmentation result corresponding to the sample image from a sample library;
inputting a sample image into the scene segmentation network for training, wherein, in at least one convolution layer of the scene segmentation network, a first convolution block of the convolution layer is scaled by using the scale coefficient output by the scale regression layer in the last iteration process or an initial scale coefficient to obtain a second convolution block, and the convolution operation of the convolution layer is then performed by using the second convolution block to obtain an output result of the convolution layer;
obtaining a sample scene segmentation result corresponding to the sample image;
obtaining a scene segmentation network loss function according to the segmentation loss between the sample scene segmentation result and the labeled scene segmentation result, and updating the weight parameters of the scene segmentation network according to the scene segmentation network loss function;
and iteratively executing the training step of the scene segmentation network until a preset convergence condition is met.
Further, the predetermined convergence condition includes: the iteration times reach the preset iteration times; and/or the output value of the scene segmentation network loss function is smaller than a preset threshold value.
Further, the scale coefficient is a feature vector in a scale coefficient feature map output by the scale regression layer.
Further, the method further comprises: when the training of the scene segmentation network is started, the weight parameters of the scale regression layer are initialized.
Further, adding a beautification effect to the specific object according to the contour information of the specific object, and obtaining the processed landscape image further includes:
and adding a beautifying effect map for the specific object according to the contour information of the specific object to obtain a processed landscape image.
Further, adding a beautification effect to the specific object according to the contour information of the specific object, and obtaining the processed landscape image further includes:
and according to the contour information of the specific object, performing texture processing, color tone processing, contrast processing, illumination processing and/or brightness processing on the specific object to obtain a processed landscape image.
Further, after a beautification effect is added to the specific object according to the contour information of the specific object to obtain a processed landscape image, the method further comprises:
the processed landscape image is displayed.
Further, displaying the processed landscape image further includes:
and displaying the processed landscape image in real time.
Further, after a beautification effect is added to the specific object according to the contour information of the specific object to obtain a processed landscape image, the method further comprises:
and storing the processed landscape image according to a shooting instruction triggered by a user.
Further, after a beautification effect is added to the specific object according to the contour information of the specific object to obtain a processed landscape image, the method further comprises:
and storing the processed landscape image as a video consisting of frame images according to a recording instruction triggered by a user.
According to another aspect of the present invention, there is provided a scenic image processing apparatus based on scene segmentation, the apparatus operating based on a trained scene segmentation network, the apparatus including:
the acquisition module is suitable for acquiring a landscape image to be processed; wherein, the scenery image to be processed contains a specific object;
the segmentation module is suitable for inputting the landscape image to be processed into a scene segmentation network, wherein, in at least one convolution layer of the scene segmentation network, a first convolution block of the convolution layer is scaled by using a scale coefficient output by a scale regression layer to obtain a second convolution block, and the convolution operation of the convolution layer is then performed by using the second convolution block to obtain an output result of the convolution layer; the scale regression layer is an intermediate convolution layer of the scene segmentation network;
the generation module is suitable for obtaining a scene segmentation result corresponding to the to-be-processed landscape image;
the determining module is suitable for determining the outline information of the specific object according to a scene segmentation result corresponding to the to-be-processed landscape image;
and the processing module is suitable for adding a beautifying effect to the specific object according to the contour information of the specific object to obtain a processed landscape image.
Further, the segmentation module is further adapted to:
sampling feature vectors from the second convolution block by using a linear interpolation method to form a third convolution block;
and performing convolution operation according to the third convolution block and the convolution kernel of the convolution layer to obtain an output result of the convolution layer.
Further, the samples used for training the scene segmentation network include: a plurality of sample images stored in a sample library and annotated scene segmentation results corresponding to the sample images.
Further, the apparatus further comprises: a scene segmentation network training module; the training process of the scene segmentation network is completed through multiple iterations;
the scene segmentation network training module is adapted to: in an iteration process, a sample image and an annotated scene segmentation result corresponding to the sample image are extracted from a sample library, and training of a scene segmentation network is achieved by using the sample image and the annotated scene segmentation result.
Further, the apparatus further comprises: a scene segmentation network training module; the training process of the scene segmentation network is completed through multiple iterations;
the scene segmentation network training module is adapted to: in the one-time iteration process, inputting a sample image into a scene segmentation network to obtain a sample scene segmentation result corresponding to the sample image;
and obtaining a scene segmentation network loss function according to the segmentation loss between the sample scene segmentation result and the labeled scene segmentation result, and realizing the training of the scene segmentation network by using the scene segmentation network loss function.
Further, the apparatus further comprises: a scene segmentation network training module;
the scene segmentation network training module comprises:
the extraction unit is suitable for extracting a sample image and an annotation scene segmentation result corresponding to the sample image from a sample library;
the training unit is suitable for inputting a sample image into the scene segmentation network for training, wherein, in at least one convolution layer of the scene segmentation network, a first convolution block of the convolution layer is scaled by using the scale coefficient output by the scale regression layer in the last iteration process or an initial scale coefficient to obtain a second convolution block, and the convolution operation of the convolution layer is then performed by using the second convolution block to obtain an output result of the convolution layer;
the acquisition unit is suitable for acquiring a sample scene segmentation result corresponding to a sample image;
the updating unit is suitable for obtaining a scene segmentation network loss function according to the segmentation loss between the sample scene segmentation result and the labeled scene segmentation result, and updating the weight parameters of the scene segmentation network according to the scene segmentation network loss function;
and the scene segmentation network training module is operated iteratively until a preset convergence condition is met.
Further, the predetermined convergence condition includes: the iteration times reach the preset iteration times; and/or the output value of the scene segmentation network loss function is smaller than a preset threshold value.
Further, the scale coefficient is a feature vector in a scale coefficient feature map output by the scale regression layer.
Further, the scene segmentation network training module is further adapted to: when the training of the scene segmentation network is started, the weight parameters of the scale regression layer are initialized.
Further, the processing module is further adapted to:
and adding a beautifying effect map for the specific object according to the contour information of the specific object to obtain a processed landscape image.
Further, the processing module is further adapted to:
and according to the contour information of the specific object, performing texture processing, color tone processing, contrast processing, illumination processing and/or brightness processing on the specific object to obtain a processed landscape image.
Further, the apparatus further comprises:
and the display module is suitable for displaying the processed landscape image.
Further, the display module is further adapted to:
and displaying the processed landscape image in real time.
Further, the apparatus further comprises:
and the first storage module is suitable for storing the processed landscape image according to a shooting instruction triggered by a user.
Further, the apparatus further comprises:
and the second storage module is suitable for storing the processed landscape image as a video formed by frame images according to a recording instruction triggered by a user.
According to yet another aspect of the present invention, there is provided a computing device comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the scene image processing method based on the scene segmentation.
According to still another aspect of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the scene image processing method based on scene segmentation as described above.
According to the technical scheme provided by the invention, a landscape image to be processed is obtained and input into a scene segmentation network, wherein, in at least one convolution layer of the scene segmentation network, a first convolution block of the convolution layer is scaled by using a scale coefficient output by a scale regression layer to obtain a second convolution block, and the convolution operation of the convolution layer is then performed by using the second convolution block to obtain an output result of the convolution layer; a scene segmentation result corresponding to the landscape image to be processed is then obtained, the contour information of a specific object is determined according to the scene segmentation result, and a beautifying effect is added to the specific object according to the contour information of the specific object to obtain a processed landscape image. The technical scheme provided by the invention scales the convolution block according to the scale coefficient, which realizes adaptive scaling of the receptive field; the trained scene segmentation network can quickly and accurately obtain the scene segmentation result corresponding to the landscape image, effectively improving the accuracy and processing efficiency of image scene segmentation, and, based on the obtained scene segmentation result, a beautifying effect can be added more accurately to scenes such as the sky and grassland in the landscape image, improving the image display effect.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart illustrating a method for processing a landscape image based on scene segmentation according to an embodiment of the present invention;
FIG. 2 is a flow diagram illustrating a method for training a scene segmentation network according to an embodiment of the invention;
FIG. 3 is a flow chart illustrating a method for processing a landscape image based on scene segmentation according to another embodiment of the present invention;
FIG. 4 is a block diagram of a scene image processing apparatus based on scene segmentation according to an embodiment of the present invention;
FIG. 5 is a block diagram showing a scene image processing apparatus based on scene segmentation according to another embodiment of the present invention;
FIG. 6 shows a schematic structural diagram of a computing device according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a flowchart of a scenic image processing method based on scene segmentation according to an embodiment of the present invention, the method is performed based on a trained scene segmentation network, as shown in fig. 1, the method includes the following steps:
step S100, a to-be-processed landscape image is acquired.
Specifically, the to-be-processed landscape image may be a landscape image shot by the user, a landscape image in a website, or a landscape image shared by other users, which is not limited herein. In addition, the to-be-processed landscape image may be an image including both a person and a landscape, or may be an image including only a landscape without a person. The scenery image to be processed contains a specific object, wherein the specific object can be an object such as sky, grassland, trees, mountains and the like, and the specific object can also be an object such as sea, lake and the like. The specific object can be set by those skilled in the art according to actual needs, and is not limited herein.
Step S101, the scenery image to be processed is input into the scene segmentation network.
The landscape image to be processed contains a specific object such as the sky or grassland. In order to accurately add a beautifying effect to scenes such as the sky and grassland in the landscape image to be processed, a scene segmentation network is needed to perform scene segmentation on the landscape image to be processed. The scene segmentation network is trained, and the trained scene segmentation network can scale the convolution blocks of its convolution layers by using the scale coefficients output by the scale regression layer in the network, so that scene segmentation can be performed more accurately on the input landscape image to be processed. Specifically, the samples used for training the scene segmentation network include a plurality of sample images stored in a sample library and the annotated scene segmentation results corresponding to the sample images, where an annotated scene segmentation result is a segmentation result obtained by manually segmenting and annotating each scene in the sample image.
The training process of the scene segmentation network is completed through multiple iterations. Optionally, in an iteration process, the sample image and the annotated scene segmentation result corresponding to the sample image are extracted from the sample library, and the training of the scene segmentation network is achieved by using the sample image and the annotated scene segmentation result.
Optionally, the one-iteration process comprises: inputting the sample image into a scene segmentation network to obtain a sample scene segmentation result corresponding to the sample image; and obtaining a scene segmentation network loss function according to the segmentation loss between the sample scene segmentation result and the labeled scene segmentation result, and realizing the training of the scene segmentation network by using the scene segmentation network loss function.
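As an illustration of the segmentation loss mentioned above, the following sketch assumes the sample scene segmentation result is a per-pixel class-score map and uses per-pixel cross-entropy implemented with PyTorch; neither the exact loss form nor the framework is prescribed by the invention.

```python
# Illustrative sketch only: per-pixel cross-entropy between the sample scene
# segmentation result and the annotated scene segmentation result.
import torch.nn.functional as F

def scene_segmentation_loss(sample_result_logits, annotated_result):
    """sample_result_logits: (N, C, H, W) class scores from the scene segmentation network.
    annotated_result: (N, H, W) annotated scene segmentation result (class index per pixel)."""
    return F.cross_entropy(sample_result_logits, annotated_result)
```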
Step S102, at least one layer of convolution layer in the scene segmentation network utilizes the scale coefficient output by the scale regression layer to carry out scaling processing on the first convolution block of the convolution layer, and a second convolution block is obtained.
The skilled person can select which layer or layers of convolution blocks of convolution layers are scaled according to actual needs, and this is not limited here. For the convenience of distinction, the convolution block to be scaled is referred to as a first convolution block, and the scaled convolution block is referred to as a second convolution block. If the scaling processing is performed on the first convolution block of a certain layer of convolution layer in the scene segmentation network, then, in the convolution layer, the scaling processing is performed on the first convolution block of the convolution layer by using the scale coefficient output by the scale regression layer, so as to obtain a second convolution block.
The scale regression layer is an intermediate convolution layer of the scene segmentation network, the intermediate convolution layer refers to one or more convolution layers in the scene segmentation network, and a person skilled in the art can select an appropriate one or more convolution layers in the scene segmentation network as the scale regression layer according to actual needs, which is not limited herein. In the invention, the characteristic diagram output by the scale regression layer is called a scale coefficient characteristic diagram, and the scale coefficient is a characteristic vector in the scale coefficient characteristic diagram output by the scale regression layer. The method and the device scale the convolution block according to the scale coefficient, thereby realizing the self-adaptive scaling of the receptive field, more accurately carrying out scene segmentation on the input landscape image to be processed and effectively improving the accuracy of image scene segmentation.
Step S103, the convolution operation of the convolution layer is carried out by utilizing the second convolution block, and the output result of the convolution layer is obtained.
After the second convolution block is obtained, the convolution operation of the convolution layer can be performed by using the second convolution block to obtain an output result of the convolution layer.
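The following NumPy sketch illustrates, for a single output position t, how a convolution layer can use a scale coefficient to scale its first convolution block and then perform the convolution of steps S102-S103. The array shapes, helper names and per-position loop are illustrative assumptions rather than the invention's implementation (a practical implementation would vectorize this and run it on a GPU); the bilinear sampling used here for non-integer coordinates is described in detail in the training embodiment below.

```python
# Minimal sketch (not the patent's code) of the scaled convolution at one output
# position t = (p_t, q_t): the sampling grid of the first convolution block is
# rescaled by the scale coefficient s_t and resampled by bilinear interpolation
# before the inner product with the convolution kernel.
import numpy as np

def bilinear(A, x, y):
    """Bilinearly sample the feature vector of A at real-valued coordinates (x, y)."""
    H, W, _ = A.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    out = np.zeros(A.shape[2])
    for xi in (x0, x0 + 1):
        for yi in (y0, y0 + 1):
            if 0 <= xi < H and 0 <= yi < W:  # out-of-range samples act as zero padding
                wgt = max(0.0, 1 - abs(x - xi)) * max(0.0, 1 - abs(y - yi))
                out += wgt * A[xi, yi]
    return out

def scale_adaptive_conv_at(A, K, b, p_t, q_t, s_t, d=1, k=1):
    """Output feature vector B_t at one position t.
    A: (H, W, C_A) input feature map; K: (2k+1, 2k+1, C_A, C_B) kernel; b: (C_B,) bias."""
    B_t = np.array(b, dtype=float).copy()
    for i in range(-k, k + 1):
        for j in range(-k, k + 1):
            # scaled sampling coordinate of the second convolution block
            z = bilinear(A, p_t + s_t * i * d, q_t + s_t * j * d)
            B_t += K[i + k, j + k].T @ z  # inner product with the kernel slice
    return B_t
```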
And step S104, obtaining a scene segmentation result corresponding to the to-be-processed landscape image.
After obtaining the output result of the convolutional layer in step S103, if there are other convolutional layers after the convolutional layer in the scene segmentation network, the subsequent convolution operation is performed using the output result of the convolutional layer as the input of the subsequent convolutional layer. After convolution operation of all convolution layers in the scene segmentation network, a scene segmentation result corresponding to the to-be-processed landscape image is obtained.
Step S105, determining the outline information of the specific object according to the scene segmentation result corresponding to the scenery image to be processed.
After the scene segmentation result corresponding to the to-be-processed landscape image is obtained, the contour information of the specific object can be determined according to the scene segmentation result corresponding to the to-be-processed landscape image. When the specific object is the sky, the outline information of the sky can be determined according to the scene segmentation result so as to add beautifying effect to the sky subsequently.
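A minimal sketch of step S105, assuming the scene segmentation result is a per-pixel class-index map and that OpenCV 4 is available for extracting contour information; the "sky" class id is a hypothetical value, not one fixed by the invention.

```python
# Sketch: derive the contour information of the specific object (here, the sky)
# from the scene segmentation result.
import cv2
import numpy as np

SKY_CLASS_ID = 2  # hypothetical class index of "sky" in the segmentation result

def sky_contours(segmentation_result):
    """segmentation_result: (H, W) array of per-pixel class indices."""
    mask = (segmentation_result == SKY_CLASS_ID).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return mask, contours
```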
And step S106, adding a beautifying effect to the specific object according to the contour information of the specific object to obtain a processed landscape image.
For example, when the specific object is the sky, a beautifying effect may be added to the sky according to the outline information of the sky, for example, the sky in the image is processed to increase the brightness, so as to obtain a processed landscape image.
According to the scene segmentation-based landscape image processing method provided by this embodiment, a landscape image to be processed is obtained and input into a scene segmentation network, wherein, in at least one convolution layer of the scene segmentation network, a first convolution block of the convolution layer is scaled by using a scale coefficient output by a scale regression layer to obtain a second convolution block, and the convolution operation of the convolution layer is then performed by using the second convolution block to obtain an output result of the convolution layer; a scene segmentation result corresponding to the landscape image to be processed is then obtained, the contour information of a specific object is determined according to the scene segmentation result, and a beautifying effect is added to the specific object according to the contour information of the specific object to obtain a processed landscape image. The technical scheme scales the convolution block according to the scale coefficient, which realizes adaptive scaling of the receptive field; the trained scene segmentation network can quickly and accurately obtain the scene segmentation result corresponding to the landscape image, effectively improving the accuracy and processing efficiency of image scene segmentation, and, based on the obtained scene segmentation result, a beautifying effect can be added more accurately to scenes such as the sky and grassland in the landscape image, improving the image display effect.
Fig. 2 is a flowchart illustrating a training method of a scene segmentation network according to an embodiment of the present invention, and as shown in fig. 2, the training step of the scene segmentation network includes the following steps:
step S200, extracting a sample image and an annotation scene segmentation result corresponding to the sample image from a sample library.
The sample library not only stores the sample images, but also stores the segmentation results of the labeled scenes corresponding to the sample images. The number of the sample images stored in the sample library can be set by a person skilled in the art according to actual needs, and is not limited herein. In step S200, a sample image is extracted from the sample library, and an annotation scene segmentation result corresponding to the sample image is extracted.
Step S201, inputting the sample image into the scene segmentation network for training.
After the sample images are extracted, the sample images are input into a scene segmentation network for training.
Step S202, at least one layer of convolution layer in the scene segmentation network utilizes the scale coefficient or the initial scale coefficient output by the scale regression layer in the last iteration process to carry out scaling processing on the first convolution block of the convolution layer, and a second convolution block is obtained.
The skilled person can select which layer or layers of convolution blocks of convolution layers are scaled according to actual needs, and this is not limited here. If the scaling processing is performed on the first convolution block of a certain convolution layer in the scene segmentation network, then, on the convolution layer, the scaling processing is performed on the first convolution block of the convolution layer by using the scale coefficient or the initial scale coefficient output by the scale regression layer in the last iteration process to obtain a second convolution block.
Specifically, in order to train the scene segmentation network effectively, when the training of the scene segmentation network starts, the weight parameters of the scale regression layer may be initialized. The person skilled in the art can set the specific initialized weight parameters according to the actual needs, which is not limited herein. The initial scale coefficient is the feature vector in the scale coefficient feature map output by the scale regression layer after initialization processing.
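A minimal sketch of this initialization, assuming the scale regression layer is a PyTorch 3 × 3 convolution with one output channel; the input channel count and the standard deviation are illustrative assumptions, the essential points being near-zero Gaussian kernel weights and a bias of 1 so that the initial scale coefficients are close to 1.

```python
# Sketch: initialize the scale regression layer so that it initially outputs
# scale coefficients close to 1 (stable start, close to a standard convolution).
import torch.nn as nn

scale_regression = nn.Conv2d(in_channels=512, out_channels=1, kernel_size=3, padding=1)  # 512 is illustrative
nn.init.normal_(scale_regression.weight, mean=0.0, std=1e-4)  # sigma: very small, close to 0
nn.init.constant_(scale_regression.bias, 1.0)                 # initial scale coefficients are close to 1
```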
Step S203, the convolution operation of the convolution layer is carried out by utilizing the second convolution block, and the output result of the convolution layer is obtained.
After the second convolution block is obtained, the convolution operation of the convolution layer can be performed by using the second convolution block to obtain an output result of the convolution layer. Since the second convolution block is obtained by scaling the first convolution block, the coordinates corresponding to the feature vectors in the second convolution block may not be integers, and therefore, the feature vectors corresponding to the non-integer coordinates may be obtained by using a preset calculation method. The skilled person can set the preset calculation method according to the actual needs, and the method is not limited herein. For example, the preset calculation method may be a linear interpolation method, and specifically, a feature vector is sampled from the second convolution block by using the linear interpolation method to form a third convolution block, and then convolution operation is performed according to the third convolution block and a convolution kernel of the convolution layer to obtain an output result of the convolution layer.
After obtaining the output result of the convolutional layer, if there are other convolutional layers after the convolutional layer in the scene segmentation network, the subsequent convolution operation is performed by using the output result of the convolutional layer as the input of the subsequent convolutional layer. After convolution operation of all convolution layers in the scene segmentation network, a scene segmentation result corresponding to the sample image is obtained.
Step S204, a sample scene segmentation result corresponding to the sample image is obtained.
And acquiring a sample scene segmentation result which is obtained by the scene segmentation network and corresponds to the sample image.
Step S205, a scene segmentation network loss function is obtained according to the segmentation loss between the sample scene segmentation result and the labeling scene segmentation result, and the weight parameters of the scene segmentation network are updated according to the scene segmentation network loss function.
Those skilled in the art may set the specific form of the scene segmentation network loss function according to actual needs, which is not limited herein. A back propagation operation is performed according to the scene segmentation network loss function, and the weight parameters of the scene segmentation network are updated according to the result of the operation.
And step S206, iteratively executing the training step of the scene segmentation network until a preset convergence condition is met.
Wherein, those skilled in the art can set the predetermined convergence condition according to the actual requirement, and the present disclosure is not limited herein. For example, the predetermined convergence condition may include: the iteration times reach the preset iteration times; and/or the output value of the scene segmentation network loss function is smaller than a preset threshold value. Specifically, whether the predetermined convergence condition is satisfied may be determined by determining whether the iteration count reaches a preset iteration count, or may be determined according to whether an output value of the scene segmentation network loss function is smaller than a preset threshold. In step S206, the training step of the scene segmentation network is iteratively performed until a predetermined convergence condition is satisfied, thereby obtaining a trained scene segmentation network.
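The iterative training procedure of steps S200-S206 can be summarized as the following sketch; the optimizer, learning rate, sample-library interface and convergence thresholds are illustrative assumptions rather than values prescribed by the invention.

```python
# Sketch of the iterative training loop with a predetermined convergence condition.
import torch

def train(scene_segmentation_network, sample_library, loss_fn,
          max_iterations=100000, loss_threshold=1e-3, lr=1e-3):
    optimizer = torch.optim.SGD(scene_segmentation_network.parameters(), lr=lr)
    for iteration in range(max_iterations):                 # preset iteration count
        sample_image, annotated_result = sample_library.sample()  # hypothetical sample-library API
        pred = scene_segmentation_network(sample_image)      # sample scene segmentation result
        loss = loss_fn(pred, annotated_result)               # scene segmentation network loss function
        optimizer.zero_grad()
        loss.backward()                                      # back propagation
        optimizer.step()                                     # update weight parameters
        if loss.item() < loss_threshold:                     # predetermined convergence condition
            break
    return scene_segmentation_network
```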
In a specific training process, suppose the first convolution block of one convolution layer in the scene segmentation network needs to be scaled, and call this convolution layer J. The input feature map of convolution layer J is

A ∈ R^(H_A × W_A × C_A),

where H_A is the height of the input feature map, W_A is its width, and C_A is its number of channels. The output feature map of convolution layer J is

B ∈ R^(H_B × W_B × C_B),

where H_B is the height of the output feature map, W_B is its width, and C_B is its number of channels. The scale coefficient feature map output by the scale regression layer is

S ∈ R^(H_S × W_S × 1),

where H_S is the height of the scale coefficient feature map, W_S is its width, and its number of channels is 1; specifically, H_S = H_B and W_S = W_B.
In the scene segmentation network, an ordinary 3 × 3 convolution layer can be selected as the scale regression layer, and its output feature map, whose number of channels is 1, is the scale coefficient feature map. In order to train the scene segmentation network effectively and prevent it from collapsing during training, the weight parameters of the scale regression layer need to be initialized when training starts. The initialized weight parameters of the scale regression layer are

w_0(a) = σ for any position a in the convolution kernel, and b_0 = 1,

where w_0 is the initialized convolution kernel of the scale regression layer, a is any position in the convolution kernel, and b_0 is the initialized bias term. During this initialization, the convolution kernel is set to a random coefficient σ drawn from a Gaussian distribution whose values are very small and close to 0, and the bias term is set to 1, so that the initialized scale regression layer outputs values close to 1 everywhere, i.e. the initial scale coefficients are close to 1. After these initial scale coefficients are applied to convolution layer J, the obtained output differs little from a standard convolution result, which provides a stable training process and effectively prevents the scene segmentation network from collapsing during training.
For convolution layer J, assume that its convolution kernel is K ∈ R^((2k+1) × (2k+1) × C_A × C_B) and its bias is b ∈ R^(C_B); its input feature map is A and its output feature map is B, as defined above. The first convolution block of convolution layer J is X_t, and the second convolution block obtained by scaling the first convolution block X_t is Y_t, where, in general, k = 1. For any position t in the output feature map B, the corresponding feature vector is B_t ∈ R^(C_B). The feature vector B_t corresponds to the inner product of the second convolution block Y_t in the input feature map A with the convolution kernel K, where the position t = (p_t, q_t). The first convolution block X_t is a square region of the input feature map A centered at (p_t, q_t) with a side length fixed at 2kd + 1, where d is the dilation coefficient of the convolution, and x_ij and y_ij are coordinates in the input feature map A. Within the first convolution block X_t, (2k+1) × (2k+1) feature vectors are uniformly selected to be multiplied with the convolution kernel K; specifically, the coordinates of these feature vectors are

x_ij = p_t + i · d, y_ij = q_t + j · d, where i, j ∈ {−k, ..., k}.
Suppose s_t is the scale coefficient in the scale coefficient feature map that corresponds to the feature vector B_t at position t in the output feature map B; the position of s_t in the scale coefficient feature map is also t, the same as the position of B_t in the output feature map B.

The first convolution block X_t of convolution layer J is scaled with the scale coefficient s_t to obtain the second convolution block Y_t. The second convolution block Y_t is a square region of the input feature map A centered at (p_t, q_t), and its side length changes with the scale coefficient s_t to 2 s_t k d + 1. Within the second convolution block Y_t, (2k+1) × (2k+1) feature vectors are uniformly selected to be multiplied with the convolution kernel K; specifically, the coordinates of these feature vectors are

x'_ij = p_t + s_t · i · d, y'_ij = q_t + s_t · j · d, where i, j ∈ {−k, ..., k}.
Since the scale coefficient s_t is a real value, the coordinates x'_ij and y'_ij of these feature vectors may not be integers. In the invention, the feature vectors corresponding to non-integer coordinates are obtained by a linear interpolation method. Feature vectors are sampled from the second convolution block Y_t by linear interpolation to form the third convolution block Z_t, and each feature vector Z_t(i, j) of the third convolution block Z_t is computed as:

Z_t(i, j) = Σ_{x, y} A(x, y) · max(0, 1 − |x'_ij − x|) · max(0, 1 − |y'_ij − y|),

where (x, y) ranges over the integer coordinates of the input feature map A and i, j ∈ {−k, ..., k}.
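As a small worked example of the interpolation above (the coordinates are chosen arbitrarily for illustration): if a sampling coordinate falls at (x'_ij, y'_ij) = (2.3, 5.7), only the four integer neighbours (2, 5), (2, 6), (3, 5) and (3, 6) have non-zero weights, and

Z_t(i, j) = 0.7·0.3·A(2, 5) + 0.7·0.7·A(2, 6) + 0.3·0.3·A(3, 5) + 0.3·0.7·A(3, 6),

since, for example, the weight of A(2, 6) is max(0, 1 − |2.3 − 2|) · max(0, 1 − |5.7 − 6|) = 0.7 · 0.7; the four weights sum to 1.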
If (x'_ij, y'_ij) falls outside the range of the input feature map A, the corresponding feature vector is set to 0 as padding. Let K̃_c denote the vectorized convolution kernel for output channel c, i.e. the row vector whose entries are multiplied with the corresponding feature vectors of the third convolution block, and let K̃ be the matrix whose c-th row is K̃_c; writing the third convolution block Z_t as the corresponding column vector Z̃_t, the element-wise multiplications over all channels in the convolution operation can be expressed as a matrix multiplication, and the forward propagation process is

B_t = K̃ Z̃_t + b.
In the back propagation process, let g(B_t) denote the gradient passed back from B_t. The gradients are

g(Z̃_t) = K̃^T g(B_t), g(K̃) = g(B_t) Z̃_t^T, g(b) = g(B_t),

where g(·) denotes the gradient function and (·)^T denotes matrix transposition. It is worth noting that, when computing the gradients, the final gradients of the convolution kernel K and of the bias b are the sums of the gradients obtained at all positions of the output feature map B. For the linear interpolation process, the partial derivative with respect to the corresponding feature vector is (component-wise)

∂Z_t(i, j) / ∂A(x, y) = max(0, 1 − |x'_ij − x|) · max(0, 1 − |y'_ij − y|),

and the partial derivative with respect to the coordinate x'_ij is

∂Z_t(i, j) / ∂x'_ij = Σ_{x, y} A(x, y) · max(0, 1 − |y'_ij − y|) · δ(x'_ij, x), where δ(x'_ij, x) = 0 if |x − x'_ij| ≥ 1, 1 if x ≥ x'_ij, and −1 if x < x'_ij;

the partial derivative with respect to y'_ij is similar to the above formula and is not described in detail here.

Since the coordinates are computed from the scale coefficient s_t, the partial derivatives of the coordinates with respect to the scale coefficient are

∂x'_ij / ∂s_t = i · d and ∂y'_ij / ∂s_t = j · d.
Based on the above partial derivatives, the gradients of the scale coefficient feature map S and of the input feature map A can be obtained by the chain rule, accumulating over all sampled positions:

g(s_t) = Σ_{i, j} ( ∂Z_t(i, j)/∂x'_ij · i · d + ∂Z_t(i, j)/∂y'_ij · j · d )^T g(Z_t(i, j)),

g(A(x, y)) = Σ_t Σ_{i, j} max(0, 1 − |x'_ij − x|) · max(0, 1 − |y'_ij − y|) · g(Z_t(i, j)).
Therefore, the convolution process forms a calculation process that is differentiable as a whole, so the weight parameters of each convolution layer and the weight parameters of the scale regression layer in the scene segmentation network can be trained in an end-to-end manner. In addition, the gradient of the scale coefficient can be calculated from the gradient passed back from the next layer, so the scale coefficient is obtained automatically and implicitly. In a specific implementation, both the forward propagation process and the backward propagation process can be run in parallel on a Graphics Processing Unit (GPU), so the calculation efficiency is high.
According to the scene segmentation network training method provided by the embodiment, the scene segmentation network for scaling the convolution block according to the scale coefficient can be trained, the self-adaptive scaling of the receptive field is realized, the corresponding scene segmentation result can be quickly obtained by using the scene segmentation network, and the accuracy and the processing efficiency of image scene segmentation are effectively improved.
Fig. 3 shows a flowchart of a scenic image processing method based on scene segmentation according to another embodiment of the present invention, which is performed based on a trained scene segmentation network, as shown in fig. 3, the method includes the following steps:
in step S300, a to-be-processed landscape image is acquired.
The scenery image to be processed contains a specific object, and the specific object can be objects such as sky, grassland, trees, mountains and the like.
In step S301, a to-be-processed landscape image is input into a scene segmentation network.
The scene segmentation network is trained, and the trained scene segmentation network can utilize the scale coefficient output by the scale regression layer in the network to scale the convolution block of the convolution layer, so that the scene segmentation can be performed on the input landscape image to be processed more accurately.
Step S302, at least one layer of convolution layer in the scene segmentation network utilizes the scale coefficient output by the scale regression layer to carry out scaling processing on the first convolution block of the convolution layer, and a second convolution block is obtained.
The skilled person can select which layer or layers of convolution blocks of convolution layers are scaled according to actual needs, and this is not limited here. The scale coefficient is a feature vector in a scale coefficient feature map output by the scale regression layer, and in step S302, the scale coefficient is used to perform scaling processing on the first convolution block of the convolution layer to obtain a second convolution block.
Step S303, a linear interpolation method is used to sample feature vectors from the second convolution block to form a third convolution block.
Since the second convolution block is obtained by scaling the first convolution block, the coordinates corresponding to the feature vectors in the second convolution block may not be integers; therefore, the feature vectors corresponding to non-integer coordinates can be obtained by a linear interpolation method. Feature vectors are sampled from the second convolution block by linear interpolation, and the third convolution block is then formed from the sampled feature vectors. Assume the second convolution block is Y_t and the third convolution block is Z_t; then each feature vector Z_t(i, j) of the third convolution block Z_t is computed as:

Z_t(i, j) = Σ_{x, y} A(x, y) · max(0, 1 − |x'_ij − x|) · max(0, 1 − |y'_ij − y|),

where x'_ij = p_t + s_t · i · d and y'_ij = q_t + s_t · j · d are the sampling coordinates in the input feature map A, i, j ∈ {−k, ..., k}, d is the dilation coefficient of the convolution, s_t is the scale coefficient, and, in general, k = 1.
Step S304, performing convolution operation according to the third convolution block and the convolution kernel of the convolution layer to obtain the output result of the convolution layer.
After the third convolution block is obtained, performing convolution operation according to the third convolution block and the convolution kernel of the convolution layer to obtain an output result of the convolution layer.
In step S305, a scene segmentation result corresponding to the to-be-processed landscape image is obtained.
After obtaining the output result of the convolutional layer in step S304, if there are other convolutional layers after the convolutional layer in the scene segmentation network, the subsequent convolution operation is performed using the output result of the convolutional layer as the input of the subsequent convolutional layer. After convolution operation of all convolution layers in the scene segmentation network, a scene segmentation result corresponding to the to-be-processed landscape image is obtained.
Step S306, determining the outline information of the specific object according to the scene segmentation result corresponding to the scenery image to be processed.
After the scene segmentation result corresponding to the to-be-processed scenic image is obtained in step S305, the contour information of the specific object can be determined according to the scene segmentation result corresponding to the to-be-processed scenic image.
Step S307, according to the contour information of the specific object, adding a beautifying effect to the specific object to obtain a processed landscape image.
Specifically, a beautification effect map can be added to the specific object according to the contour information of the specific object, so as to obtain a processed landscape image; in addition, according to the contour information of the specific object, texture processing, color tone processing, contrast processing, illumination processing and/or brightness processing can be performed on the specific object, so that a processed landscape image can be obtained. The skilled person can select a specific manner of adding the beautifying effect according to the actual need, which is not limited herein.
For example, if a to-be-processed landscape image includes a sky, but the sky appears faint, and a user desires to add a beautifying effect to the sky in the to-be-processed landscape image to make the sky appear blue, a specific object may be set as the sky, outline information of the sky may be determined according to a scene segmentation result corresponding to the to-be-processed landscape image, and then a blue-sky effect map may be added to the sky according to the outline information of the sky to make the sky appear blue, thereby obtaining a processed landscape image.
For another example, the landscape image to be processed includes grassland, and the user wants to add a beautification effect to the grassland. The specific object may therefore be set as the grassland, the contour information of the grassland is determined according to the scene segmentation result corresponding to the landscape image to be processed, and then texture processing may be performed on the grassland according to the contour information of the grassland, an overall illumination effect may be added, and the color tone, contrast, brightness and the like may be adjusted, so that the overall effect is more natural and beautiful, and the processed landscape image is obtained.
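A minimal sketch of these two examples, assuming the scene segmentation result is a per-pixel class-index map and the image is a BGR array; the class ids, blend weight and brightness gain are illustrative assumptions rather than values prescribed by the invention.

```python
# Sketch: blend a blue-sky effect map into the sky region and lift the brightness
# and tone of the grass region, using masks derived from the segmentation result.
import numpy as np

def beautify(image_bgr, segmentation_result, sky_effect_map, sky_id=2, grass_id=3):
    out = image_bgr.astype(np.float32)
    sky = (segmentation_result == sky_id)[..., None]
    grass = (segmentation_result == grass_id)[..., None]
    out = np.where(sky, 0.6 * out + 0.4 * sky_effect_map.astype(np.float32), out)  # effect-map overlay
    out = np.where(grass, np.clip(out * 1.15 + 10, 0, 255), out)                   # brightness / tone lift
    return out.astype(np.uint8)
```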
Step S308, the processed landscape image is displayed in real time.
The obtained processed landscape image is displayed in real time, so that the user can directly see the result of processing the landscape image to be processed. After the processed landscape image is obtained, it replaces the landscape image to be processed for display; this replacement is generally completed within 1/24 of a second, and since this replacement time is so short, human eyes do not obviously perceive the replacement, which is equivalent to displaying the processed landscape image in real time.
Step S309 is to save the processed landscape image according to the shooting instruction triggered by the user.
After the processed landscape image is displayed, the processed landscape image can be saved according to a shooting instruction triggered by a user. And if the user clicks a shooting button of the camera, triggering a shooting instruction, and storing the displayed processed landscape image.
And step S310, storing the processed landscape image as a video consisting of frame images according to a recording instruction triggered by a user.
When the processed landscape image is displayed, the processed landscape image can be saved as a video formed by frame images according to a recording instruction triggered by a user. If the user clicks a recording button of the camera, a recording instruction is triggered, and the displayed processed landscape images are stored as frame images in the video, so that the plurality of processed landscape images are stored as the video formed by the frame images.
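For illustration, steps S308-S310 can be combined into a camera preview loop such as the following OpenCV sketch, in which the shooting and recording instructions are simulated by key presses; the key bindings, codec and frame rate are assumptions rather than part of the invention.

```python
# Sketch: real-time display of processed frames, save a photo on a shooting
# instruction, and save processed frames as a video on a recording instruction.
import cv2

def preview_loop(process_frame, out_path="processed.mp4"):
    cap = cv2.VideoCapture(0)
    writer, recording = None, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        processed = process_frame(frame)                    # scene segmentation + beautification
        cv2.imshow("processed landscape image", processed)  # real-time display
        key = cv2.waitKey(1) & 0xFF
        if key == ord('s'):                                 # shooting instruction: save photo
            cv2.imwrite("processed.jpg", processed)
        elif key == ord('r'):                               # recording instruction: toggle video saving
            recording = not recording
            if recording and writer is None:
                h, w = processed.shape[:2]
                writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), 24.0, (w, h))
        elif key == ord('q'):
            break
        if recording and writer is not None:
            writer.write(processed)                         # store frame image into the video
    cap.release()
    if writer is not None:
        writer.release()
    cv2.destroyAllWindows()
```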
Step S309 and step S310 are optional steps of this embodiment, and there is no execution sequence, and the corresponding step is selected and executed according to different instructions triggered by the user.
According to the scene segmentation-based scenic image processing method provided by the embodiment, the convolution block is scaled according to the scale coefficient, so that the self-adaptive scaling of the receptive field is realized, the scaled convolution block is further processed by using a linear interpolation method, and the problem of selecting the characteristic vector of which the coordinate is a non-integer in the scaled convolution block is solved; and the trained scene segmentation network can be used for quickly and accurately obtaining the scene segmentation result corresponding to the landscape image, so that the accuracy and the processing efficiency of image scene segmentation are effectively improved, the beautifying effect can be more accurately added to the scenes such as sky, grassland and the like in the landscape image based on the obtained scene segmentation result, the image display effect is improved, and the image processing mode is optimized.
Fig. 4 is a block diagram illustrating a scene segmentation-based scenic image processing apparatus according to an embodiment of the present invention, which operates based on a trained scene segmentation network, as shown in fig. 4, and includes: an acquisition module 410, a segmentation module 420, a generation module 430, a determination module 440, and a processing module 450.
The acquisition module 410 is adapted to: and acquiring a landscape image to be processed.
The scenery image to be processed contains a specific object, and the specific object can be objects such as sky, grassland, trees, mountains and the like.
The segmentation module 420 is adapted to input the to-be-processed landscape image into the scene segmentation network, wherein at least one convolution layer in the scene segmentation network scales a first convolution block of the convolution layer using the scale coefficient output by the scale regression layer to obtain a second convolution block, and then performs the convolution operation of the convolution layer using the second convolution block to obtain the output result of the convolution layer.
The scene segmentation network has been trained. Specifically, the samples used for training the scene segmentation network include a plurality of sample images and the annotated scene segmentation results corresponding to the sample images, which are stored in a sample library. The scale regression layer is an intermediate convolution layer of the scene segmentation network; one skilled in the art can select one or more convolution layers of the scene segmentation network as the scale regression layer according to actual needs, which is not limited herein. The scale coefficient is a feature vector in the scale coefficient feature map output by the scale regression layer.
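Read this way, the scale regression layer is simply an intermediate convolution layer whose output feature map carries one scale coefficient per spatial position. The PyTorch sketch below is an assumed illustration of such a layer; the kernel size, the bias initialisation and the softplus used to keep the coefficient positive are choices made for the example, not details fixed by this embodiment.

import torch.nn as nn
import torch.nn.functional as F

class ScaleRegressionLayer(nn.Module):
    # Intermediate conv layer producing a scale-coefficient feature map (one value per position).
    def __init__(self, in_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)
        nn.init.zeros_(self.conv.weight)         # start from a spatially constant scale map
        nn.init.constant_(self.conv.bias, 0.5)   # softplus(0.5) is about 0.97, i.e. a scale near 1

    def forward(self, features):
        return F.softplus(self.conv(features))   # positive scale coefficient at every position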
The generation module 430 is adapted to obtain a scene segmentation result corresponding to the to-be-processed landscape image.
The determination module 440 is adapted to determine the contour information of the specific object according to the scene segmentation result corresponding to the to-be-processed landscape image.
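As a hedged example of what the determination module computes, the sketch below turns the per-pixel segmentation result into a binary mask for one class and extracts its outer contours with OpenCV; the class-id convention and the function name are assumptions made for the illustration.

import cv2
import numpy as np

def contour_info_for_class(segmentation_map, class_id):
    # segmentation_map: H x W array of per-pixel class labels output by the scene segmentation network
    mask = np.where(segmentation_map == class_id, 255, 0).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return mask, contours                          # contour information of the specific object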
The processing module 450 is adapted to add a beautifying effect to the specific object according to the contour information of the specific object to obtain a processed landscape image.
According to the scene segmentation-based scenic image processing device provided by this embodiment, the convolution block can be scaled according to the scale coefficient, realizing adaptive scaling of the receptive field. With the trained scene segmentation network, the scene segmentation result corresponding to the landscape image can be obtained quickly and accurately, effectively improving the accuracy and efficiency of image scene segmentation, so that the beautifying effect can be added more accurately to scenes such as the sky and the grassland in the landscape image based on the obtained scene segmentation result, improving the image display effect.
Fig. 5 is a block diagram illustrating a scene segmentation-based scenic image processing apparatus according to another embodiment of the present invention, which operates based on a trained scene segmentation network, as shown in fig. 5, and includes: an acquisition module 510, a scene segmentation network training module 520, a segmentation module 530, a generation module 540, a determination module 550, a processing module 560, a display module 570, a first saving module 580, and a second saving module 590.
The obtaining module 510 is adapted to acquire a landscape image to be processed.
The training process of the scene segmentation network is completed through multiple iterations. The scene segmentation network training module 520 is adapted to: in one iteration, extract a sample image and the annotated scene segmentation result corresponding to the sample image from a sample library, and realize training of the scene segmentation network by using the sample image and the annotated scene segmentation result.
Optionally, the scene segmentation network training module 520 is adapted to: in one iteration, input a sample image into the scene segmentation network to obtain the sample scene segmentation result corresponding to the sample image; obtain a scene segmentation network loss function according to the segmentation loss between the sample scene segmentation result and the annotated scene segmentation result, and realize the training of the scene segmentation network by using this loss function.
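A minimal sketch of one such iteration, assuming a PyTorch segmentation network that outputs per-pixel class scores and using cross-entropy as the segmentation loss between the sample scene segmentation result and the annotated scene segmentation result; the optimizer and the tensor shapes are assumptions of the example.

import torch.nn.functional as F

def train_one_iteration(seg_net, optimizer, sample_image, annotated_labels):
    # sample_image: 1 x 3 x H x W float tensor; annotated_labels: 1 x H x W long tensor
    seg_net.train()
    optimizer.zero_grad()
    logits = seg_net(sample_image)                    # sample scene segmentation result (scores)
    loss = F.cross_entropy(logits, annotated_labels)  # scene segmentation network loss function
    loss.backward()
    optimizer.step()                                  # update the weight parameters of the network
    return loss.item()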
In a particular embodiment, the scene segmentation network training module 520 may include: an extraction unit 521, a training unit 522, an acquisition unit 523, and an update unit 524.
Specifically, the extraction unit 521 is adapted to extract a sample image and the annotated scene segmentation result corresponding to the sample image from the sample library.
The training unit 522 is adapted to input the sample image into the scene segmentation network for training, wherein at least one convolution layer in the scene segmentation network scales a first convolution block of the convolution layer using the scale coefficient output by the scale regression layer in the previous iteration, or the initial scale coefficient, to obtain a second convolution block, and then performs the convolution operation of the convolution layer using the second convolution block to obtain the output result of the convolution layer.
The scale regression layer is a middle convolution layer of the scene segmentation network, and the scale coefficient is a feature vector in a scale coefficient feature map output by the scale regression layer.
Optionally, the training unit 522 is further adapted to: sample feature vectors from the second convolution block by using a linear interpolation method to form a third convolution block; and perform the convolution operation according to the third convolution block and the convolution kernel of the convolution layer to obtain the output result of the convolution layer.
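The linear-interpolation step can be pictured as reading the feature map at the non-integer coordinates produced when the convolution block is scaled. The helper below is only an assumed illustration of bilinear sampling of a single feature vector; collecting such vectors at the scaled sampling positions yields the third convolution block, which is then convolved with the layer's kernel as usual.

import numpy as np

def bilinear_sample(feature_map, y, x):
    # feature_map: H x W x C array; (y, x) may be non-integer, out-of-range values are clamped
    h, w = feature_map.shape[:2]
    y = min(max(y, 0.0), h - 1.0)
    x = min(max(x, 0.0), w - 1.0)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * feature_map[y0, x0]   # weighted sum of the four neighbours
            + (1 - dy) * dx * feature_map[y0, x1]
            + dy * (1 - dx) * feature_map[y1, x0]
            + dy * dx * feature_map[y1, x1])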
The obtaining unit 523 is adapted to obtain the sample scene segmentation result corresponding to the sample image.
The update unit 524 is adapted to obtain a scene segmentation network loss function according to the segmentation loss between the sample scene segmentation result and the annotated scene segmentation result, and to update the weight parameters of the scene segmentation network according to the scene segmentation network loss function.
The scene segmentation network training module 520 runs iteratively until a predetermined convergence condition is met.
Those skilled in the art can set the predetermined convergence condition according to actual requirements, which is not limited herein. For example, the predetermined convergence condition may include: the number of iterations reaches a preset number of iterations; and/or the output value of the scene segmentation network loss function is smaller than a preset threshold. In other words, whether the predetermined convergence condition is satisfied may be judged by whether the iteration count has reached the preset iteration count, or by whether the output value of the scene segmentation network loss function is smaller than the preset threshold.
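The two criteria can be combined in a check along the following lines; the iteration cap and loss threshold are placeholder values, not values prescribed by this embodiment.

def converged(iteration, loss_value, max_iterations=100000, loss_threshold=0.01):
    # Predetermined convergence condition: iteration count reached and/or loss below threshold.
    return iteration >= max_iterations or loss_value < loss_threshold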
Optionally, the scene segmentation network training module 520 is further adapted to initialize the weight parameters of the scale regression layer when training of the scene segmentation network starts.
The segmentation module 530 is adapted to input the to-be-processed landscape image into the scene segmentation network, wherein at least one convolution layer in the scene segmentation network scales a first convolution block of the convolution layer using the scale coefficient output by the scale regression layer to obtain a second convolution block, then samples feature vectors from the second convolution block by using a linear interpolation method to form a third convolution block, and performs the convolution operation according to the third convolution block and the convolution kernel of the convolution layer to obtain the output result of the convolution layer.
The generating module 540 is adapted to obtain a scene segmentation result corresponding to the to-be-processed landscape image.
The determination module 550 is adapted to determine the contour information of the specific object according to the scene segmentation result corresponding to the to-be-processed landscape image.
The processing module 560 is adapted to add a beautifying effect to the specific object according to the contour information of the specific object to obtain a processed landscape image.
Optionally, the processing module 560 is further adapted to add a beautifying effect map to the specific object according to the contour information of the specific object to obtain the processed landscape image.
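For the effect-map option, one simple reading is an alpha blend of a prepared beautification map over the region delimited by the object's contour. The sketch below assumes the effect map has already been resized to the image and uses a fixed blend weight purely for illustration.

import numpy as np

def add_effect_map(image_bgr, effect_map_bgr, object_mask, alpha=0.6):
    # Blend effect_map_bgr over image_bgr only where object_mask (H x W, 0/255) is set.
    blended = (alpha * effect_map_bgr.astype(np.float32)
               + (1.0 - alpha) * image_bgr.astype(np.float32))
    blended = np.clip(blended, 0, 255).astype(np.uint8)
    inside = (object_mask > 0)[..., None]
    return np.where(inside, blended, image_bgr)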
Optionally, the processing module 560 is further adapted to perform texture processing, color tone processing, contrast processing, illumination processing and/or brightness processing on the specific object according to the contour information of the specific object to obtain the processed landscape image.
The display module 570 is adapted to display the processed landscape image.
Optionally, the display module 570 is further adapted to display the processed landscape image in real time, so that the user can directly see the result of processing the to-be-processed landscape image. After the processing module 560 obtains the processed landscape image, the display module 570 immediately replaces the to-be-processed landscape image with the processed landscape image for display. The replacement is generally completed within 1/24 of a second; for the user this interval is short enough that the human eye does not notice the switch, which is equivalent to the display module 570 displaying the processed landscape image in real time.
The first saving module 580 is adapted to save the processed landscape image according to a shooting instruction triggered by the user.
After displaying the processed landscape image, the first saving module 580 may save the processed landscape image according to a photographing instruction triggered by a user. If the user clicks a photographing button of the camera to trigger a photographing instruction, the first saving module 580 saves the displayed processed landscape image.
The second saving module 590 is adapted to save the processed landscape image as a video composed of frame images according to a recording instruction triggered by the user.
While the processed landscape image is being displayed, the second saving module 590 may save it as a video composed of frame images according to a recording instruction triggered by the user. When the user clicks the recording button of the camera to trigger a recording instruction, the second saving module 590 saves each displayed processed landscape image as a frame image of the video, so that the plurality of processed landscape images are stored as a video composed of frame images.
The first saving module 580 and the second saving module 590 are invoked according to the different instructions triggered by the user.
According to the scene segmentation-based scenic image processing device provided by this embodiment, the convolution block is scaled according to the scale coefficient, realizing adaptive scaling of the receptive field, and the scaled convolution block is further processed by a linear interpolation method, which solves the problem of selecting feature vectors with non-integer coordinates in the scaled convolution block. With the trained scene segmentation network, the scene segmentation result corresponding to the landscape image can be obtained quickly and accurately, effectively improving the accuracy and efficiency of image scene segmentation; based on the obtained scene segmentation result, the beautifying effect can be added more accurately to scenes such as the sky and the grassland in the landscape image, improving the image display effect and optimizing the image processing mode.
The invention further provides a non-volatile computer storage medium, wherein the computer storage medium stores at least one executable instruction, and the executable instruction can execute the scene segmentation-based scenic image processing method in any method embodiment.
Fig. 6 is a schematic structural diagram of a computing device according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the computing device.
As shown in fig. 6, the computing device may include: a processor (processor)602, a communication Interface 604, a memory 606, and a communication bus 608.
Wherein:
the processor 602, communication interface 604, and memory 606 communicate with one another via a communication bus 608.
A communication interface 604 for communicating with network elements of other devices, such as clients or other servers.
The processor 602 is configured to execute the program 610, and may specifically execute relevant steps in the above-described scenic image processing method embodiment based on scene segmentation.
In particular, program 610 may include program code comprising computer operating instructions.
The processor 602 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The computing device includes one or more processors, which may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
And a memory 606 for storing a program 610. The memory 606 may comprise a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk storage.
The program 610 may specifically be configured to cause the processor 602 to execute a scene segmentation-based scenic image processing method in any of the above-described method embodiments. For specific implementation of each step in the program 610, reference may be made to corresponding steps and corresponding descriptions in units in the above-described scene image processing embodiment based on scene segmentation, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in accordance with embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third and so on does not indicate any ordering. These words may be interpreted as names.

Claims (20)

1. A scenic image processing method based on scene segmentation, the scenic image processing method based on scene segmentation being performed based on a trained scene segmentation network, the scenic image processing method based on scene segmentation comprising:
obtaining a landscape image to be processed; wherein the scenery image to be processed contains a specific object;
inputting the to-be-processed landscape image into the scene segmentation network, wherein at least one convolution layer in the scene segmentation network scales a first convolution block of the convolution layer using a scale coefficient output by a scale regression layer to obtain a second convolution block, and then performs a convolution operation of the convolution layer using the second convolution block to obtain an output result of the convolution layer; the scale regression layer is a middle convolution layer of the scene segmentation network; the scale coefficient is a feature vector in a scale coefficient feature map output by the scale regression layer;
obtaining a scene segmentation result corresponding to the to-be-processed landscape image;
determining the outline information of a specific object according to a scene segmentation result corresponding to the to-be-processed landscape image;
adding a beautifying effect to the specific object according to the contour information of the specific object to obtain a processed landscape image;
the training process of the scene segmentation network is completed through multiple iterations; in one iteration, a sample image is input into the scene segmentation network for training, wherein at least one convolution layer in the scene segmentation network scales a first convolution block of the convolution layer using the scale coefficient output by the scale regression layer in the previous iteration or an initial scale coefficient to obtain a second convolution block, and then performs a convolution operation of the convolution layer using the second convolution block to obtain an output result of the convolution layer;
the training step of the scene segmentation network comprises the following steps:
extracting a sample image and an annotation scene segmentation result corresponding to the sample image from a sample library;
inputting the sample image into the scene segmentation network for training;
obtaining a sample scene segmentation result corresponding to the sample image;
obtaining a scene segmentation network loss function according to the segmentation loss between the sample scene segmentation result and the labeled scene segmentation result, and updating the weight parameters of the scene segmentation network according to the scene segmentation network loss function;
iteratively executing the training step of the scene segmentation network until a preset convergence condition is met;
wherein the performing convolution operation on the convolutional layer by using the second convolution block to obtain an output result of the convolutional layer further comprises:
sampling feature vectors from the second convolution block by using a linear interpolation method to form a third convolution block;
and carrying out convolution operation according to the third convolution block and the convolution kernel of the convolution layer to obtain an output result of the convolution layer.
2. The scene segmentation-based scenic image processing method according to claim 1, wherein the predetermined convergence condition includes: the iteration times reach the preset iteration times; and/or the output value of the scene segmentation network loss function is smaller than a preset threshold value.
3. The scene-segmentation-based scenic image processing method according to claim 1, further comprising: and when the scene segmentation network training is started, initializing the weight parameters of the scale regression layer.
4. The method for processing the landscape image based on scene segmentation according to claim 1, wherein the adding a beautifying effect to the specific object according to the contour information of the specific object to obtain the processed landscape image further comprises:
and adding a beautifying effect map for the specific object according to the contour information of the specific object to obtain a processed landscape image.
5. The method for processing the landscape image based on scene segmentation according to claim 1, wherein the adding a beautifying effect to the specific object according to the contour information of the specific object to obtain the processed landscape image further comprises:
and according to the contour information of the specific object, performing texture processing, color tone processing, contrast processing, illumination processing and/or brightness processing on the specific object to obtain a processed landscape image.
6. The method for processing a scenery image based on scene segmentation according to claim 1, wherein after the processed landscape image is obtained by adding a beautifying effect to the specific object according to the contour information of the specific object, the scenery image processing method based on scene segmentation further comprises:
displaying the processed landscape image.
7. The scene segmentation based scenic image processing method as claimed in claim 6, wherein the displaying the processed scenic image further comprises:
and displaying the processed landscape image in real time.
8. The method for processing a scenery image based on scene segmentation according to claim 1, wherein after the processed landscape image is obtained by adding a beautifying effect to the specific object according to the contour information of the specific object, the scenery image processing method based on scene segmentation further comprises:
and saving the processed landscape image according to a shooting instruction triggered by a user.
9. The scenic image processing method based on scene segmentation according to any one of claims 1-8, wherein after the adding beautification effect to the specific object according to the contour information of the specific object to obtain the processed scenic image, the scenic image processing method based on scene segmentation further comprises:
and storing the processed landscape image as a video consisting of frame images according to a recording instruction triggered by a user.
10. A scenic image processing apparatus based on scene segmentation, the apparatus operating based on a trained scene segmentation network, the apparatus comprising:
the acquisition module is suitable for acquiring a landscape image to be processed; wherein the scenery image to be processed contains a specific object;
the segmentation module is suitable for inputting the to-be-processed landscape image into the scene segmentation network, wherein at least one convolution layer in the scene segmentation network scales a first convolution block of the convolution layer using a scale coefficient output by a scale regression layer to obtain a second convolution block, and then performs a convolution operation of the convolution layer using the second convolution block to obtain an output result of the convolution layer; the scale regression layer is a middle convolution layer of the scene segmentation network; the scale coefficient is a feature vector in a scale coefficient feature map output by the scale regression layer;
the generation module is suitable for obtaining a scene segmentation result corresponding to the to-be-processed landscape image;
the determining module is suitable for determining the outline information of the specific object according to a scene segmentation result corresponding to the to-be-processed landscape image;
the processing module is suitable for adding beautifying effect to the specific object according to the contour information of the specific object to obtain a processed landscape image;
the training process of the scene segmentation network is completed through multiple iterations; in one iteration, a sample image is input into the scene segmentation network for training, wherein at least one convolution layer in the scene segmentation network scales a first convolution block of the convolution layer using the scale coefficient output by the scale regression layer in the previous iteration or an initial scale coefficient to obtain a second convolution block, and then performs a convolution operation of the convolution layer using the second convolution block to obtain an output result of the convolution layer;
wherein the apparatus further comprises: a scene segmentation network training module;
the scene segmentation network training module comprises:
the extraction unit is suitable for extracting a sample image and an annotation scene segmentation result corresponding to the sample image from a sample library;
the training unit is suitable for inputting the sample image into the scene segmentation network for training;
the acquisition unit is suitable for acquiring a sample scene segmentation result corresponding to a sample image;
the updating unit is suitable for obtaining a scene segmentation network loss function according to the segmentation loss between the sample scene segmentation result and the labeled scene segmentation result, and updating the weight parameters of the scene segmentation network according to the scene segmentation network loss function;
the scene segmentation network training module is operated in an iterative mode until a preset convergence condition is met;
wherein the segmentation module is further adapted to:
sampling feature vectors from the second convolution block by using a linear interpolation method to form a third convolution block;
and carrying out convolution operation according to the third convolution block and the convolution kernel of the convolution layer to obtain an output result of the convolution layer.
11. The apparatus of claim 10, wherein the predetermined convergence condition comprises: the iteration times reach the preset iteration times; and/or the output value of the scene segmentation network loss function is smaller than a preset threshold value.
12. The apparatus of claim 10, wherein the scene segmentation network training module is further adapted to: and when the scene segmentation network training is started, initializing the weight parameters of the scale regression layer.
13. The apparatus of claim 10, wherein the processing module is further adapted to:
and adding a beautifying effect map for the specific object according to the contour information of the specific object to obtain a processed landscape image.
14. The apparatus of claim 10, wherein the processing module is further adapted to:
and according to the contour information of the specific object, performing texture processing, color tone processing, contrast processing, illumination processing and/or brightness processing on the specific object to obtain a processed landscape image.
15. The apparatus of claim 10, wherein the apparatus further comprises:
and the display module is suitable for displaying the processed landscape image.
16. The apparatus of claim 15, wherein the display module is further adapted to:
and displaying the processed landscape image in real time.
17. The apparatus of claim 10, wherein the apparatus further comprises:
and the first storage module is suitable for storing the processed landscape image according to a shooting instruction triggered by a user.
18. The apparatus of any one of claims 10-17, wherein the apparatus further comprises:
and the second storage module is suitable for storing the processed landscape image as a video formed by frame images according to a recording instruction triggered by a user.
19. A computing device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the scene image processing method based on scene segmentation in any one of claims 1-9.
20. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the scene segmentation based scenic image processing method as claimed in any one of claims 1 to 9.