CN112967355B - Image filling method and device, electronic equipment and medium

Info

Publication number: CN112967355B (granted publication); earlier publication CN112967355A
Application number: CN202110247313.7A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: image, filled, features, background, neural network
Legal status: Active (granted)
Inventor: 李超
Assignee (current and original): Dalian University Of Technology Press Co ltd
Application filed by Dalian University Of Technology Press Co ltd; priority to CN202110247313.7A

Classifications

    • G06T 11/00 2D [Two Dimensional] image generation; G06T 11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G06F 18/00 Pattern recognition; G06F 18/20 Analysing; G06F 18/22 Matching criteria, e.g. proximity measures
    • G06N 3/02 Neural networks; G06N 3/04 Architecture, e.g. interconnection topology; G06N 3/045 Combinations of networks
    • G06N 3/02 Neural networks; G06N 3/08 Learning methods
    • G06T 3/00 Geometric image transformations in the plane of the image; G06T 3/04 Context-preserving transformations, e.g. by using an importance map; G06T 3/053 Detail-in-context presentations
    • G06T 7/00 Image analysis; G06T 7/10 Segmentation; Edge detection
    • G06T 9/00 Image coding; G06T 9/002 Image coding using neural networks
    • G06T 2207/10 Image acquisition modality; G06T 2207/10004 Still image; Photographic image
    • G06T 2207/20 Special algorithmic details; G06T 2207/20081 Training; Learning
    • G06T 2207/20 Special algorithmic details; G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an image filling method, an image filling apparatus, an electronic device, a computer-readable storage medium, and a computer program product, and relates to the field of artificial intelligence, in particular to the technical fields of computer vision and deep learning. The implementation scheme is as follows: acquiring an image to be filled and a mask image corresponding to the image to be filled; converting the image to be filled into image features; inputting the image features and the mask image into a trained first neural network to obtain a preliminary filling image; extracting foreground features and background features from the preliminary filling image, wherein the foreground features correspond to the region to be filled and the background features correspond to the background region; calculating the similarity between the background features and the foreground features and generating an attention hard code; and inputting the attention hard code, the preliminary filling image, and the mask image into a second neural network to obtain a filled image.

Description

Image filling method and device, electronic equipment and medium
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular to the field of computer vision and deep learning techniques, and more particularly to an image filling method, apparatus, electronic device, computer readable storage medium, and computer program product.
Background
Artificial intelligence is the discipline that studies how to make a computer mimic certain human thought processes and intelligent behaviors (e.g., learning, reasoning, thinking, and planning); it involves both hardware-level and software-level techniques. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
Image filling techniques have a wide range of applications, such as image editing, image restoration, and removing specific objects from images. Most existing image filling techniques are based on block matching or texture matching and suffer from problems such as poor filling quality, unnatural textures, and blurred details.
Disclosure of Invention
The present disclosure provides an image filling method, apparatus, electronic device, computer readable storage medium, and computer program product.
According to an aspect of the present disclosure, there is provided an image filling method including: acquiring an image to be filled and a mask image corresponding to the image to be filled, wherein the image to be filled comprises an area to be filled and a background area outside the area to be filled, and the mask image is used for indicating the relative position relationship between the area to be filled and the background area; converting the image to be filled into image features; inputting the image features and the mask image into a trained first neural network to obtain a preliminary filling image; extracting foreground features and background features from the preliminary filling image, wherein the foreground features correspond to the region to be filled, and the background features correspond to the background region; calculating the similarity between the background features and the foreground features and generating an attention hard code; and inputting the attention hard code, the preliminary filling image, and the mask image into a second neural network to obtain a filled image.
According to another aspect of the present disclosure, there is provided an image filling apparatus including: an image acquisition module configured to acquire an image to be filled and a mask image corresponding to the image to be filled, wherein the image to be filled comprises an area to be filled and a background area outside the area to be filled, and the mask image is used for indicating the relative position relationship between the area to be filled and the background area; a first feature extraction module configured to convert the image to be filled into image features; a preliminary filling module configured to input the image features and the mask image into a trained first neural network to obtain a preliminary filling image; a second feature extraction module configured to extract foreground features and background features in the preliminary filling image, wherein the foreground features correspond to the region to be filled and the background features correspond to the background region; an attention hard coding module configured to calculate the similarity between the background features and the foreground features and generate an attention hard code; and an image filling module configured to input the attention hard code, the preliminary filling image, and the mask image into a second neural network to obtain a filled image.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image filling method of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the image filling method of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the image filling method of the present disclosure.
According to one or more embodiments of the present disclosure, foreground and background features are extracted on the basis of the preliminary filling image and similarity matching between foreground features and background features is performed, so that effective information is used to the greatest extent and invalid information is filtered out, ensuring that the filling result is vivid and its details are clear.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of an image population method according to an embodiment of the present disclosure;
FIG. 3 illustrates an image schematic including an object to be removed according to an embodiment of the present disclosure;
FIG. 4 illustrates a schematic diagram of the image of FIG. 3 after the area corresponding to the lounge chair has been filled, in accordance with an embodiment of the present disclosure;
FIG. 5 illustrates a schematic diagram of image population according to a method of the present disclosure;
fig. 6 shows a block diagram of a structure of an image filling apparatus according to an embodiment of the present disclosure; and
Fig. 7 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable execution of the image population method.
In some embodiments, server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
Client devices 101, 102, 103, 104, 105, and/or 106 may be used to receive images to be filled, and so on. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that the present disclosure may support any number of client devices.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various types and versions of software applications and operating systems, such as Microsoft Windows, Apple iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., Google Chrome OS); or include various mobile operating systems such as Microsoft Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smartphones, tablet computers, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head mounted displays and other devices. The gaming system may include various handheld gaming devices, Internet-enabled gaming devices, and the like. The client devices are capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that can support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. For example only, the one or more networks 110 may be a Local Area Network (LAN), an Ethernet-based network, a token ring, a Wide Area Network (WAN), the Internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., Bluetooth, WiFi), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture that involves virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices of the server). In various embodiments, server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
In some implementations, the server 120 may be a server of a distributed system or a server that incorporates a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system that overcomes the drawbacks of high management difficulty and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store data such as images to be filled or filled images. The databases 130 may reside in a variety of locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The databases 130 may be of different types. In some embodiments, the database used by server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
In image processing, it is often necessary to remove a target object from an image, for example removing a passer-by from a photo or removing an object that was only partially captured. The removed area then needs to be repaired in a way that follows the rules of human visual perception, so that the resulting image still conforms to human visual cognition.
However, current image filling techniques based on block matching and texture matching suffer from unrealistic results, unnatural textures, blurred details, and the like. For high-resolution pictures in particular, such as large images at 4K and 8K resolution, even the best-performing convolutional-neural-network-based filling techniques cannot yet achieve fine filling. The main reason is that the network considers all pixels within its receptive field, so that too many noise factors are included, the key information needed for filling is drowned out by noise, and details become blurred.
Accordingly, there is provided, in accordance with an embodiment of the present disclosure, an image filling method 200, as shown in fig. 2, comprising: acquiring an image to be filled and a mask image corresponding to the image to be filled (step 210); converting the image to be filled into image features (step 220); inputting the image features and the mask image into a trained first neural network to obtain a preliminary filling image (step 230); extracting foreground features and background features in the preliminary filling image, wherein the foreground features correspond to the region to be filled and the background features correspond to the background region (step 240); calculating the similarity between the background features and the foreground features and generating an attention hard code (step 250); and inputting the attention hard code, the preliminary filling image, and the mask image into a second neural network to obtain a filled image (step 260).
According to the embodiments of the present disclosure, foreground and background features are extracted on the basis of the preliminary filling image and similarity matching between the foreground features and the background features is performed, so that effective information is used to the greatest extent and invalid information is filtered out, ensuring that the filling result is vivid and its details are clear.
In step 210, an image to be filled and a mask image corresponding to the image to be filled are acquired. The image to be filled comprises an area to be filled and a background area outside the area to be filled, and the mask image is used for indicating the relative position relation between the area to be filled and the background area.
In some embodiments, the target image may be preprocessed: and removing the target area in the target image to obtain an image to be filled, wherein the missing area in the image to be filled is the area to be filled.
Referring to fig. 3, an image containing an object to be removed is schematically shown; for convenience of description, the object to be removed, a lounge chair on a seaside beach, is shown shaded in fig. 3. In the embodiment of fig. 3, the target original image is a beach image including the lounge chair, and the lounge chair is cut out of the original image by a known image segmentation technique (e.g., edge segmentation or semantic segmentation), so that an image to be filled, i.e., a beach image in which the region of the lounge chair is missing, is obtained.
In some embodiments, the target original image may take a variety of forms including, but not limited to, a captured still image, a video frame image extracted from a video picture, and the like. The video picture can come from a video which is manufactured in advance, and can also be a video corresponding to a real-time video stream, such as a live video, an instant messaging video and the like. In some embodiments, the area to be filled in the image to be filled obtained after the original target image is segmented may be one or more.
In some embodiments, after the image to be filled is acquired, its corresponding mask image may be further acquired. The mask image includes a region to be filled and a background region outside the region to be filled. In the mask image, the data of the pixels in the region to be filled differ from the data of the pixels in the background region, so the value of each pixel in the mask image indicates whether that pixel lies in the region to be filled or in the background region; in other words, the mask image represents the relative positional relationship between the region to be filled and the background region in the image to be filled.
Because the mask image corresponding to the image to be filled represents this relative positional relationship, the relative position of the region to be filled with respect to the surrounding background region can be obtained accurately, which facilitates more accurate image filling.
According to some embodiments, the mask image may be a two-dimensional matrix comprising a first data area corresponding to the area to be filled and a second data area corresponding to the background area. The first data and the second data are different. The value of each pixel point in the region to be filled may be set as first data, and the value of each pixel point in the background region may be set as second data. For example, the first data may be 1, the second data may be 0 (or vice versa), and the relative positional relationship between the region to be filled and the background region in the image to be filled is represented by the difference between the values of "1" and "0".
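Purely as an illustration (not part of the claimed method), a minimal Python sketch of such a binary mask follows; the image size and hole coordinates are assumptions chosen for the example.

```python
import numpy as np

# Hypothetical example: a 2-D binary mask in which 1 (first data) marks the
# region to be filled and 0 (second data) marks the background region.
height, width = 256, 256
mask = np.zeros((height, width), dtype=np.uint8)   # background region = 0
mask[96:160, 112:176] = 1                          # region to be filled = 1

# The value of each pixel now encodes whether it lies in the region to be
# filled (1) or in the background region (0), i.e. the relative position
# relationship described above.
```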
In some embodiments, the mask image may be acquired using an image segmentation network. For example, the original image marked by the user may be input into a trained image segmentation network, so that the image segmentation network identifies a region to be removed (i.e., a region to be filled) of the user mark in the original image, and performs binarization processing on the region to be filled and a background region outside the region to be filled to obtain a mask image, where the values of the pixels in the mask image after binarization may be used to characterize the above relative positional relationship. The marked region to be filled in the image to be filled is identified through the trained image segmentation network, the region to be filled is further distinguished from other regions (background regions) in the image to be filled, and a mask image is generated according to the position relationship between the region to be filled and the background region, so that the value of each pixel point in the mask image can be ensured to accurately represent the relative position relationship.
At step 220, the image to be filled is converted into image features.
To enable a computer to "understand" an image, useful data or information needs to be extracted from the image so as to obtain a "non-image" representation or description of it, such as values, vectors, or symbols. This process is feature extraction, and the extracted "non-image" representations or descriptions are the image features.
According to some embodiments, the image to be filled is converted into image features by a trained convolutional neural network model. The image features can be automatically extracted through the trained convolutional neural network, and the method is convenient and quick.
It should be appreciated that other methods of converting the image to be filled into image features are possible and are not limited in this regard.
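As an illustrative sketch only (the layer sizes and channel counts below are assumptions, not the network actually used by the disclosure), a small convolutional feature extractor in PyTorch might look as follows.

```python
import torch
import torch.nn as nn

# Minimal convolutional feature extractor (illustrative assumption only).
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)

image_to_fill = torch.rand(1, 3, 512, 512)          # dummy RGB image to be filled
image_features = feature_extractor(image_to_fill)   # shape: (1, 64, 512, 512)
```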
According to some embodiments, the method 200 may further comprise: after converting the image to be filled into image features, downsampling the image features and the mask image to a first resolution, so that the downsampled image features and mask image are input into the trained first neural network.
Downsampling, also referred to as subsampling, reduces the size of an image: downsampling an image of resolution M x N by a factor of s yields an image of resolution (M/s) x (N/s). In the embodiments of the present disclosure, downsampling the image features and the mask image reduces the amount of subsequent computation, and each pixel of the downsampled image corresponds to a region of the original image, which avoids overfitting to a certain extent; the similarity of image features can therefore be compared better in the subsequent computation, further improving the image filling effect.
In some examples, the image to be filled and the mask image may be input into a trained convolutional neural network, so that the convolutional neural network converts the original image to be filled into image features while downsampling the image features and the mask image to the first resolution. The convolutional neural network then outputs image features at the first resolution and a mask image at the first resolution.
In some embodiments, the first resolution may be set according to the user's requirement or practical experience, for example 512×512, which is not limited herein. The downsampling operation may be repeated until the resolution of the image features of the image to be filled meets the first resolution.
In some examples, the downsampling operation may be performed according to the principles of convolution operations. When the convolution stride is greater than 1, the resolution of the new image features obtained after a partial convolution is smaller than the resolution of the image features before it, i.e. the partial convolution can be regarded as downsampling the image to be filled. The downsampling ratio depends on the size of the selected convolution kernel and the stride used in the partial convolution. For example, after a partial convolution with a stride of 2, the resolution of the new image features is one half of the resolution of the image features before the convolution; that is, if the resolution of the image features before the partial convolution is 256×256, the resolution of the new image features is 128×128. The mask image may be downsampled in the same way.
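The resolution arithmetic above can be checked with a short sketch; an ordinary strided convolution is used here as a stand-in for the partial convolution, which is an assumption made only for illustration.

```python
import torch
import torch.nn as nn

# Stride-2 convolution used as a stand-in for the partial convolution (assumption).
downsample = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1)

features_256 = torch.rand(1, 64, 256, 256)   # image features before downsampling
features_128 = downsample(features_256)      # image features after one stride-2 pass
print(features_128.shape)                    # torch.Size([1, 64, 128, 128]): half the resolution
```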
At step 230, the image features and mask images are input into a trained first neural network to obtain a preliminary population image.
According to some embodiments, the first neural network is a U-net neural network. The first neural network may be constructed based on the U-Net structure to take full advantage of the U-Net structure's ability to perform image restoration through downsampling and upsampling; of course, the first neural network may also be constructed based on networks of other algorithms, which is not limited herein.
According to some embodiments, the first neural network is generated by training a neural network with training data, based on a repair loss computed using mask images generated from the training data. The repair loss may represent the deviation of the repaired image from its corresponding original image. A neural network trained on this repair loss therefore has higher precision, enabling a refined image filling effect.
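For orientation only, a toy encoder-decoder in the U-Net spirit is sketched below; it takes image features plus the mask as input and returns a preliminary filling image. All sizes, channel counts, and the single skip connection are assumptions made for the example, not the disclosure's actual first neural network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Toy U-Net-style encoder-decoder (illustrative assumption only)."""
    def __init__(self, in_ch=65, out_ch=3):        # e.g. 64 feature channels + 1 mask channel
        super().__init__()
        self.enc = nn.Conv2d(in_ch, 32, 3, stride=2, padding=1)   # downsampling path
        self.dec = nn.Conv2d(32 + in_ch, out_ch, 3, padding=1)    # decoder sees the skip connection

    def forward(self, features, mask):
        x = torch.cat([features, mask], dim=1)                    # image features + mask image
        d = F.relu(self.enc(x))                                   # encode at half resolution
        u = F.interpolate(d, size=x.shape[-2:], mode="nearest")   # upsample back
        u = torch.cat([u, x], dim=1)                              # U-Net-style skip connection
        return self.dec(u)                                        # preliminary filling image

net = TinyUNet()
preliminary = net(torch.rand(1, 64, 512, 512), torch.rand(1, 1, 512, 512))  # (1, 3, 512, 512)
```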
In step 240, foreground features and background features are extracted from the preliminary filling image. The foreground features correspond to regions to be filled in the image to be filled and the background features correspond to background regions outside the regions to be filled in the image to be filled.
In some embodiments, the preliminary fill image is an image of the original resolution (the resolution of the image to be filled). In embodiments where the image features and mask images are downsampled to the first resolution, the step of converting the preliminary fill image output by the neural network to an image of the original resolution by upsampling may be included, or the neural network may be pre-trained and set to generate the preliminary fill image of the original resolution from the image features and mask images of the first resolution.
In some embodiments, foreground features and background features are extracted based on the preliminary fill image and the initially acquired mask image.
In some embodiments, foreground features may be extracted by a foreground feature extraction module based on the preliminary fill image and the initially acquired mask image. The background features may also be extracted by a background feature extraction module based on the preliminary fill image and the initially acquired mask image. The foreground feature extraction module and the background feature extraction module can be realized through a trained convolutional neural network model. Corresponding features can be automatically extracted through the trained convolutional neural network model, and the method is convenient and quick.
It should be appreciated that other methods of extracting foreground and background features are possible, and are not limited in this regard.
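Purely as an illustration of the separation described above (and not the actual foreground and background feature extraction modules), one might split per-pixel feature vectors of the preliminary filling image according to the mask as follows; all shapes are assumptions.

```python
import torch

feat = torch.rand(64, 32, 32)          # features of the preliminary filling image (C, H, W)
mask = torch.zeros(32, 32)             # 1 = region to be filled, 0 = background region
mask[10:20, 12:22] = 1

flat = feat.reshape(64, -1).t()                       # (H*W, C): one feature vector per pixel
foreground_features = flat[mask.reshape(-1) == 1]     # features of the region to be filled
background_features = flat[mask.reshape(-1) == 0]     # features of the background region
```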
In step 250, the similarity between the background features and the foreground features is calculated and an attention hard code is generated.
The attention hard code is used to indicate the similarity relationship between each background feature and each foreground feature. In some embodiments, the similarity between the background features and the foreground features is calculated and converted into an attention hard code of Boolean values, i.e., a {0,1} code. The key information required for filling is scored 1 and other, irrelevant information is scored 0, so that effective information can be used as much as possible while interference from invalid information is reduced, thereby achieving a better image filling effect.
According to some embodiments, for each foreground feature: the background feature with the largest similarity to that foreground feature is determined; the attention hard code corresponding to that background feature is set to 1, and the attention hard codes corresponding to the other background features for that foreground feature are set to 0. In this way an attention hard code in the form of a matrix can be generated: each row (or column) of the matrix corresponds to one foreground feature, and all elements in that row (or column) are the attention hard codes of the background features for that foreground feature, of which exactly one is 1 and the others are 0; the background feature whose attention hard code is 1 has the largest similarity to the foreground feature and is used to indicate how the corresponding pixel is to be filled.
The background information most relevant for filling the foreground region is thus coded 1 and other, irrelevant information is coded 0, so that effective information is used as much as possible while interference from invalid information is reduced, achieving a better filling effect.
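A minimal sketch of this hard attention follows; cosine similarity is used here as the similarity measure, which is an assumption since the disclosure does not fix a particular measure, and the feature shapes are also assumptions.

```python
import torch
import torch.nn.functional as F

foreground_features = torch.rand(100, 64)        # Nf foreground feature vectors (assumed shapes)
background_features = torch.rand(924, 64)        # Nb background feature vectors

fg = F.normalize(foreground_features, dim=1)
bg = F.normalize(background_features, dim=1)
similarity = fg @ bg.t()                         # (Nf, Nb) cosine similarities

best = similarity.argmax(dim=1)                  # index of the most similar background feature
attention_hard_code = torch.zeros_like(similarity)
attention_hard_code[torch.arange(foreground_features.shape[0]), best] = 1.0
# Each row now contains a single 1 (the best-matching background feature) and 0 elsewhere.
```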
At step 260, the attention hard code, the preliminary filling image, and the mask image are input into a second neural network to obtain a filled image.
According to some embodiments, inputting the attention hard code, the preliminary filling image, and the mask image into the second neural network to obtain the filled image comprises: filling each pixel to be filled with the pixel corresponding to the background feature whose attention hard code is 1. That is, the region to be filled and the background region in the preliminary filling image are determined based on the mask image, and the pixel corresponding to the background feature whose attention hard code is 1 (a pixel in the background region of the preliminary filling image) is filled into the pixel corresponding to its matching foreground feature (a pixel in the region to be filled of the preliminary filling image).
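Continuing the sketch above (again an illustration under assumed shapes, not the second neural network itself), the hard code can act as a selection matrix that copies, for each pixel to be filled, the value of its best-matching background pixel:

```python
import torch

Nf, Nb = 100, 924                                # assumed numbers of foreground/background pixels
attention_hard_code = torch.zeros(Nf, Nb)
attention_hard_code[torch.arange(Nf), torch.randint(0, Nb, (Nf,))] = 1.0   # dummy one-hot code

background_pixels = torch.rand(Nb, 3)                      # RGB values of background pixels
filled_pixels = attention_hard_code @ background_pixels    # (Nf, 3): one background pixel per hole pixel
```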
Fig. 4 schematically illustrates the image of fig. 3 after the region corresponding to the lounge chair has been filled. It can be seen that the filled region in fig. 4 blends the characteristics of the beach and the reflection of the tree, and the filling result is real and natural.
According to some embodiments, the second neural network is a U-net neural network. The second neural network may be constructed based on the U-Net structure to take full advantage of the U-Net structure's ability to perform image restoration through downsampling and upsampling; of course, it may also be constructed based on networks of other algorithms, which is not limited herein.
According to some embodiments, the second neural network is generated by training a neural network with training data, based on a repair loss computed using mask images generated from the training data. For example, the repair loss may be the mean absolute error (MAE). A neural network trained on this repair loss therefore has higher precision, enabling a refined image filling effect.
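As a small illustration of such a repair loss (the tensors below are dummies; the actual training data and any loss weighting are not specified here), the mean absolute error between the repaired image and its ground-truth original can be computed as follows.

```python
import torch
import torch.nn.functional as F

repaired = torch.rand(1, 3, 512, 512)           # output of the network being trained (dummy)
original = torch.rand(1, 3, 512, 512)           # ground-truth image before masking (dummy)

repair_loss = F.l1_loss(repaired, original)     # mean absolute error (MAE) over all pixels
```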
In one embodiment according to the present disclosure, as shown in fig. 5, the acquired image to be filled 501 is transformed into image features 503 by a process 511, while the image features 503 and the acquired mask image 502 are downsampled to the first resolution by a process 522 to reduce the amount of computation. The first-resolution image features 504 and the first-resolution mask image 505 are then passed through a neural network to generate a preliminary filling image 506 (process 533). The preliminary filling image is an image at the original resolution (the resolution of the image to be filled). Foreground features 507, i.e., features of the region to be filled, are then extracted based on the preliminary filling image 506 and the acquired mask image 502 (process 544); background features 508, i.e., features of the area outside the region to be filled, are extracted based on the preliminary filling image 506 and the acquired mask image 502 (process 555). The similarity between the background features 508 and the foreground features 507 is then calculated and converted into an attention hard code 509 of Boolean values (process 566). Finally, a filled image 510 is generated based on the attention hard code 509, the preliminary filling image 506, and the mask image 502 (process 577).
In some examples, the preliminary filling image output by the network may be converted to an image at the original resolution by upsampling, or the neural network may be pre-trained and configured to generate a preliminary filling image at the original resolution from the mask image and the image features.
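For readability only, the stages of fig. 5 can be strung together as in the following hypothetical sketch; every callable passed in is a placeholder for the corresponding stage, and none of the names is an actual API of the disclosure.

```python
# Hypothetical end-to-end sketch of the fig. 5 pipeline; the stage callables are
# supplied by the caller and stand in for the processes named in the comments.
def fill_image(image_to_fill, mask, extract_features, downsample, first_network,
               extract_foreground, extract_background, hard_attention, second_network):
    features = extract_features(image_to_fill)            # process 511
    features_lr, mask_lr = downsample(features, mask)     # process 522
    preliminary = first_network(features_lr, mask_lr)     # process 533
    fg = extract_foreground(preliminary, mask)            # process 544
    bg = extract_background(preliminary, mask)            # process 555
    code = hard_attention(bg, fg)                         # process 566
    return second_network(code, preliminary, mask)        # process 577
```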
There is also provided, in accordance with an embodiment of the present disclosure, an image filling apparatus 600, as shown in fig. 6, including: an image acquisition module 610 configured to acquire an image to be filled and a mask image corresponding to the image to be filled, where the image to be filled includes an area to be filled and a background area outside the area to be filled, and the mask image is used to indicate the relative positional relationship between the area to be filled and the background area; a first feature extraction module 620 configured to convert the image to be filled into image features; a preliminary filling module 630 configured to input the image features and the mask image into a trained first neural network to obtain a preliminary filling image; a second feature extraction module 640 configured to extract foreground features and background features in the preliminary filling image, wherein the foreground features correspond to the region to be filled and the background features correspond to the background region; an attention hard coding module 650 configured to calculate the similarity between the background features and the foreground features and generate an attention hard code; and an image filling module 660 configured to input the attention hard code, the preliminary filling image, and the mask image into a second neural network to obtain a filled image.
Here, the operations of the above units 610 to 660 of the image filling apparatus 600 are similar to those of the steps 210 to 260 described above, respectively, and are not repeated here.
There is also provided, in accordance with an exemplary embodiment of the present disclosure, an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image filling method described above.
There is also provided, in accordance with an exemplary embodiment of the present disclosure, a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the above-described image filling method.
There is also provided, in accordance with an exemplary embodiment of the present disclosure, a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the above-described image filling method.
Referring to fig. 7, a block diagram of an electronic device 700, which may be a server or a client of the present disclosure and is an example of a hardware device that can be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing devices, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to the I/O interface 705, including: an input unit 706, an output unit 707, a storage unit 708, and a communication unit 709. The input unit 706 may be any type of device capable of inputting information to the device 700; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 707 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 708 may include, but is not limited to, magnetic disks and optical disks. The communication unit 709 allows the device 700 to exchange information/data with other devices through computer networks, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth (TM) devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the various methods and processes described above, such as method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM 702 and/or communication unit 709. One or more of the steps of the method 200 described above may be performed when a computer program is loaded into RAM 703 and executed by the computing unit 701. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the method 200 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalent elements. Furthermore, the steps may be performed in an order different from that described in the present disclosure. Further, various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (18)

1. An image filling method, comprising:
acquiring an image to be filled and a mask image corresponding to the image to be filled, wherein the image to be filled comprises an area to be filled and a background area outside the area to be filled, and the mask image is used for indicating the relative position relationship between the area to be filled and the background area;
converting the image to be filled into image features;
inputting the image features and the mask image into a trained first neural network to obtain a preliminary filling image;
extracting foreground features and background features from the preliminary filling image, wherein the foreground features correspond to the region to be filled, and the background features correspond to the background region;
calculating the similarity between the background features and the foreground features and generating an attention hard code; and
inputting the attention hard code, the preliminary filling image, and the mask image into a second neural network to obtain a filled image.
2. The method of claim 1, further comprising: after converting the image to be filled into image features, downsampling the image features and the mask image to a first resolution such that the downsampled image features and the mask image are input into a trained first neural network.
3. The method of claim 1, wherein the mask image comprises a two-dimensional matrix comprising a first data region corresponding to the region to be filled and a second data region corresponding to the background region, wherein the first data and the second data are different.
4. The method of claim 1, wherein calculating the similarity between the background features and the foreground features and generating an attention hard code comprises:
for each foreground feature:
determining the background feature with the largest similarity to the foreground feature; and
setting the attention hard code corresponding to the determined background feature with the largest similarity to the foreground feature to 1, and setting the attention hard codes corresponding to the other background features for the foreground feature to 0.
5. The method of claim 4, wherein inputting the attention hard code, the preliminary filling image, and the mask image into a second neural network to obtain a filled image comprises:
filling the pixel points corresponding to the background features whose attention hard code is 1 into the pixel points corresponding to the respective foreground features.
6. The method of claim 1, wherein at least one of the first neural network and the second neural network is a U-net neural network.
7. The method of claim 1 or 2, wherein the image to be filled is converted into image features by a trained convolutional neural network model.
8. The method of claim 1, wherein the first neural network and the second neural network are generated by training neural networks with training data based on a repair loss computed using mask images generated from the training data.
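Claim 8 (and the corresponding apparatus claim 16) states only that both networks are trained with training data based on a repair loss tied to masks generated from that data; the exact loss is not specified. One common instantiation, shown purely as an assumption, is a masked L1 reconstruction term that weights the hole region more heavily than the known background:

```python
import torch

def repair_loss(pred, target, mask, hole_weight=6.0):
    """Masked L1 reconstruction loss (an assumed instantiation, not the
    patent's exact formulation).

    pred, target : (N, C, H, W) filled output and ground-truth images
    mask         : (N, 1, H, W), 1 inside the generated hole, 0 in the background
    """
    diff = torch.abs(pred - target)
    hole = (diff * mask).mean()            # L1 error kept only inside the hole
    valid = (diff * (1.0 - mask)).mean()   # L1 error kept only in the background
    return hole_weight * hole + valid
```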
9. An image filling apparatus, comprising:
an image acquisition module configured to acquire an image to be filled and a mask image corresponding to the image to be filled, wherein the image to be filled comprises a region to be filled and a background region outside the region to be filled, and the mask image is used for indicating the relative positional relationship between the region to be filled and the background region;
a first feature extraction module configured to convert the image to be filled into image features;
a preliminary filling module configured to input the image features and the mask image into a trained first neural network to obtain a preliminary filling image;
a second feature extraction module configured to extract foreground features and background features from the preliminary filling image, wherein the foreground features correspond to the region to be filled and the background features correspond to the background region;
an attention hard coding module configured to calculate the similarity between the background features and the foreground features and generate an attention hard code; and
an image filling module configured to input the attention hard code, the preliminary filling image, and the mask image into a second neural network to obtain a filled image.
10. The apparatus of claim 9, wherein the first feature extraction module is further configured to:
after converting the image to be filled into image features, downsample the image features and the mask image to a first resolution such that the downsampled image features and the downsampled mask image are input into the trained first neural network.
11. The apparatus of claim 9, wherein the mask image comprises a two-dimensional matrix comprising a first data region corresponding to the region to be filled and a second data region corresponding to the background region, wherein first data in the first data region and second data in the second data region are different.
12. The apparatus of claim 9, wherein the attention hard coding module is configured to:
for each foreground feature:
determine the background feature having the largest similarity to the foreground feature; and
set the attention hard code corresponding to the determined background feature to 1, and set the attention hard codes corresponding to the other background features to 0.
13. The apparatus of claim 12, wherein the image filling module is configured to:
fill the pixel points corresponding to the background features whose attention hard code is 1 into the corresponding pixel points of the region to be filled.
14. The apparatus of claim 9, wherein at least one of the first neural network and the second neural network is a U-net neural network.
15. The apparatus of claim 9 or 10, wherein the image to be filled is converted into image features by a trained convolutional neural network model.
16. The apparatus of claim 9, wherein the first neural network and the second neural network are generated by training neural networks with training data based on a repair loss computed using mask images generated from the training data.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-8.
CN202110247313.7A 2021-03-05 2021-03-05 Image filling method and device, electronic equipment and medium Active CN112967355B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110247313.7A CN112967355B (en) 2021-03-05 2021-03-05 Image filling method and device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110247313.7A CN112967355B (en) 2021-03-05 2021-03-05 Image filling method and device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN112967355A CN112967355A (en) 2021-06-15
CN112967355B true CN112967355B (en) 2024-06-11

Family

ID=76276804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110247313.7A Active CN112967355B (en) 2021-03-05 2021-03-05 Image filling method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN112967355B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113793393B (en) * 2021-09-28 2023-05-09 National University of Defense Technology Unmanned vehicle multi-resolution video generation method and device based on attention mechanism
CN114049290A (en) * 2021-11-10 2022-02-15 Beijing Baidu Netcom Science and Technology Co., Ltd. Image processing method, device, equipment and storage medium
CN114092337B (en) * 2022-01-19 2022-04-22 Suzhou Inspur Intelligent Technology Co., Ltd. Method and device for super-resolution amplification of image at any scale
CN117670641A (en) * 2022-08-31 2024-03-08 Honor Device Co., Ltd. Data processing method, device, equipment and storage medium
CN115187950B (en) * 2022-09-13 2022-11-22 Anhui Zhongke Xingchi Autonomous Driving Technology Co., Ltd. Novel balance mask secondary sampling method for deep learning image data enhancement

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612718A (en) * 2020-05-21 2020-09-01 Sun Yat-sen University Human face image restoration method introducing attention mechanism
CN111768467A (en) * 2020-06-30 2020-10-13 Beijing Baidu Netcom Science and Technology Co., Ltd. Image filling method, device, equipment and storage medium
WO2021031506A1 (en) * 2019-08-22 2021-02-25 Beijing SenseTime Technology Development Co., Ltd. Image processing method and apparatus, electronic device, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10755391B2 (en) * 2018-05-15 2020-08-25 Adobe Inc. Digital image completion by learning generation and patch matching jointly

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021031506A1 (en) * 2019-08-22 2021-02-25 Beijing SenseTime Technology Development Co., Ltd. Image processing method and apparatus, electronic device, and storage medium
CN111612718A (en) * 2020-05-21 2020-09-01 Sun Yat-sen University Human face image restoration method introducing attention mechanism
CN111768467A (en) * 2020-06-30 2020-10-13 Beijing Baidu Netcom Science and Technology Co., Ltd. Image filling method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automatic matting algorithm for human foreground; Ran Qing; Feng Jieqing; Journal of Computer-Aided Design & Computer Graphics (02); full text *
Generative high-resolution image inpainting based on parallel adversarial and multi-condition fusion; Shao Hang; Wang Yongxiong; Pattern Recognition and Artificial Intelligence (04); full text *

Also Published As

Publication number Publication date
CN112967355A (en) 2021-06-15

Similar Documents

Publication Publication Date Title
CN112967355B (en) Image filling method and device, electronic equipment and medium
CN113313650B (en) Image quality enhancement method, device, equipment and medium
CN112967196A (en) Image restoration method and device, electronic device and medium
CN114648638A (en) Training method of semantic segmentation model, semantic segmentation method and device
CN112967356A (en) Image filling method and device, electronic device and medium
CN117274491A (en) Training method, device, equipment and medium for three-dimensional reconstruction model
CN116228867B (en) Pose determination method, pose determination device, electronic equipment and medium
CN116205819B (en) Character image generation method, training method and device of deep learning model
CN116245998B (en) Rendering map generation method and device, and model training method and device
CN116843833A (en) Three-dimensional model generation method and device and electronic equipment
CN114119935B (en) Image processing method and device
CN114327718B (en) Interface display method, device, equipment and medium
CN114494797A (en) Method and apparatus for training image detection model
CN115331077B (en) Training method of feature extraction model, target classification method, device and equipment
CN116228897B (en) Image processing method, image processing model and training method
CN114842474B (en) Character recognition method, device, electronic equipment and medium
CN115797455B (en) Target detection method, device, electronic equipment and storage medium
CN115423827B (en) Image processing method, image processing device, electronic equipment and storage medium
CN116740510B (en) Image processing method, model training method and device
CN116385641B (en) Image processing method and device, electronic equipment and storage medium
CN113793290B (en) Parallax determining method, device, equipment and medium
CN114821233B (en) Training method, device, equipment and medium of target detection model
CN114118379B (en) Neural network training method, image processing method, device, equipment and medium
CN115131562B (en) Three-dimensional scene segmentation method, model training method, device and electronic equipment
CN117291838A (en) Image color processing method, processing model training method, device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240513

Address after: 116000, No. 2 Linggong Road, Lingshui Town, Dalian High tech Industrial Park, Dalian, Liaoning Province

Applicant after: Dalian University of Technology Press Co.,Ltd.

Country or region after: China

Address before: 2 / F, *** building, 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

Country or region before: China

GR01 Patent grant