CN111815505A - Method, apparatus, device and computer readable medium for processing image - Google Patents

Method, apparatus, device and computer readable medium for processing image

Info

Publication number
CN111815505A
CN111815505A (application number CN202010675409.9A)
Authority
CN
China
Prior art keywords
image
subject
target image
blurred
category information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010675409.9A
Other languages
Chinese (zh)
Inventor
Wang Xu (王旭)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority claimed from application CN202010675409.9A
Publication of CN111815505A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present disclosure disclose methods, apparatuses, electronic devices, and computer-readable media for processing images. One embodiment of the method comprises: determining whether a subject is included in a target image based on category information corresponding to the target image, wherein the subject corresponds to at least one predetermined subject type; in response to determining that the target image includes a subject, segmenting the target image to obtain an image of the subject and a background image; blurring the background image; and generating a blurred target image with category information based on the blurred background image, the image of the subject, and the category information. This embodiment can simply and effectively highlight each subject in the target image by blurring the image's background.

Description

Method, apparatus, device and computer readable medium for processing image
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a method, an apparatus, a device, and a computer-readable medium for processing an image.
Background
At present, when processing images, people often blur the background by manual retouching in order to highlight the subject. Manual retouching is cumbersome to perform. In addition, blurring of the background image in a target image is often directed at a single subject type; for example, only the background outside the human-body image in the target image is blurred.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a method, apparatus, device and computer readable medium for processing an image to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a method for processing an image, the method comprising: determining whether a subject is included in a target image based on category information corresponding to the target image, wherein the subject corresponds to at least one predetermined subject type; in response to determining that the target image includes a subject, segmenting the target image to obtain an image of the subject and a background image; blurring the background image; generating a blurred target image with category information based on the blurred background image, the image of the subject, and the category information.
In a second aspect, some embodiments of the present disclosure provide an apparatus for processing an image, the apparatus comprising: a determining unit configured to determine whether a subject is included in a target image based on category information corresponding to the target image, wherein the subject corresponds to at least one predetermined subject type; a segmentation unit configured to segment the target image to obtain an image of the subject and a background image in response to determining that the target image includes the subject; a blurring unit configured to blur the background image; a generating unit configured to generate a blurred target image with category information based on the blurred background image, the image of the subject, and the category information.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as in any one of the first and second aspects.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, where the program when executed by a processor implements a method as in any of the first and second aspects.
One of the above-described embodiments of the present disclosure has the following advantageous effects. First, whether a subject is included in the target image is determined based on the category information corresponding to the target image, as the basis for blurring; the cases where the target image does and does not include a subject can therefore be processed differently. In response to determining that the target image includes a subject, the target image is segmented to obtain an image of the subject and a background image. The background image is then blurred to reduce its visual prominence. Finally, a blurred target image with category information is generated from the blurred background image, the image of the subject, and the category information. This method can simply, conveniently, and effectively blur the background of a target image that includes multiple types of subjects, thereby highlighting those subjects, where "multiple types of subjects" means that the subjects in the target image may be at least one kind of different object.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIGS. 1-4 are schematic diagrams of one application scenario of a method for processing an image according to some embodiments of the present disclosure;
FIG. 5 is a flow diagram of some embodiments of a method for processing an image according to the present disclosure;
FIG. 6 is a flow diagram of further embodiments of methods for processing an image according to the present disclosure;
FIG. 7 is a schematic block diagram of some embodiments of an apparatus for processing images according to the present disclosure;
FIG. 8 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1-4 are schematic diagrams of one application scenario of a method for processing an image according to some embodiments of the present disclosure.
As shown in fig. 1, the electronic device 101 determines that a subject is included in the target image 102, where the subject corresponding to at least one predetermined subject type includes a female; in this example, the female is holding a medicine product.
As shown in fig. 2, in response to the electronic device 101 determining that the target image 102 includes a subject, the target image 102 is segmented to obtain an image 103 of the subject and a background image 105, wherein the category information 104 may be "female".
As shown in fig. 3, the electronic device 101 blurs the background image 105 to obtain a blurred background image 106.
As shown in fig. 4, the electronic device 101 generates a blurred target image 107 with category information based on the blurred background image 106, the subject image 103, and the category information 104.
It should be noted that the method for processing the image may be performed by the electronic device 101. The electronic device 101 may be hardware or software. When it is hardware, it may be implemented as a distributed cluster formed by a plurality of servers or terminal devices, or as a single server or terminal device. When it is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. This is not specifically limited here.
It should be understood that the number of electronic devices in fig. 1 is merely illustrative. There may be any number of electronic devices, as desired for implementation.
With continued reference to fig. 5, a flow 500 of some embodiments of a method for processing an image according to the present disclosure is shown. The method for processing the image comprises the following steps:
step 501, determining whether a subject is included in a target image.
In some embodiments, an executing body of the method for processing the image (e.g., the electronic device shown in fig. 1) may determine whether a subject is included in the target image based on the category information corresponding to the target image, wherein the subject corresponds to at least one predetermined subject type. Here, the subject types may include, but are not limited to, at least one of the following: human, animal, plant. The target image may be an image that has been designated for processing. The subject may be the foreground of the image, and the foreground may be the part of the target image corresponding to the target object, that is, the object being photographed. The category information may be the category of the subject included in the image of the subject. As an example, manually entered category information may be received to determine whether a subject is included in the target image; such category information may be calibrated in advance by manually marking whether the target image includes a subject.
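The category-information check described above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes category information arrives as a list of string labels and that the predetermined subject types are the ones named in this embodiment (human, animal, plant); the function and variable names are invented for illustration.

```python
# Illustrative sketch only; names and data shapes are assumptions.
PREDETERMINED_SUBJECT_TYPES = {"human", "animal", "plant"}

def includes_subject(category_info):
    """Return True if any category label matches a predetermined subject type."""
    return any(label in PREDETERMINED_SUBJECT_TYPES for label in category_info)
```

For example, `includes_subject(["human"])` would report that a subject is present, while an image labeled only with non-subject categories would not.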
In some optional implementations of some embodiments, in response to determining that the target image does not include a subject, pixels of the target image are set to a predetermined threshold. As an example, in response to determining that the target image does not include a subject, all pixels of the target image are set to 0, i.e., the target image is set to black.
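The "set all pixels to a predetermined threshold" branch admits a one-line NumPy sketch; the function name and the choice of NumPy arrays for images are illustrative assumptions, not part of the patent.

```python
import numpy as np

def suppress_target_image(image, threshold=0):
    """Return a copy of the image with every pixel set to the predetermined
    threshold; threshold=0 yields an all-black image."""
    return np.full_like(image, threshold)
```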
In some optional implementations of some embodiments, determining whether the subject is included in the target image may further include:
and inputting the target image into a pre-trained subject recognition network to obtain information on whether the target image includes a subject. The subject recognition network may be an object detection network. Here, the object detection network may include one of the following: the SSD (Single Shot MultiBox Detector) algorithm, the R-CNN (Region-based Convolutional Neural Networks) algorithm, the Fast R-CNN algorithm, the SPP-Net (Spatial Pyramid Pooling Network) algorithm, the YOLO (You Only Look Once) algorithm, the FPN (Feature Pyramid Networks) algorithm, the DCN (Deformable ConvNets) algorithm, the RetinaNet object detection algorithm, and the like.
Step 502, in response to determining that the target image includes a subject, segmenting the target image to obtain an image of the subject and a background image.
In some embodiments, in response to determining that the target image includes a subject, the executing entity may segment the target image to obtain an image of the subject and a background image. The image of the subject may be a foreground image in the target image. The background image may be an image other than the foreground image in the target image. As an example, in response to determining that the target image includes a subject, the target image is segmented using a grayscale threshold segmentation method to obtain an image of the subject and a background image.
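The grayscale threshold segmentation mentioned as an example can be sketched in NumPy as below. This is a simplified illustration under assumed conventions (subject pixels are those brighter than the threshold, and non-member pixels are zeroed out); real segmentation in the embodiments may instead use a trained network.

```python
import numpy as np

def grayscale_threshold_segment(gray, thresh=128):
    """Split a 2-D grayscale image into an image of the subject and a
    background image using a fixed intensity threshold (illustrative)."""
    mask = gray > thresh                       # True where the subject is assumed to be
    subject_image = np.where(mask, gray, 0)    # keep subject pixels, zero the rest
    background_image = np.where(mask, 0, gray) # keep background pixels, zero the rest
    return subject_image, background_image, mask
```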
In some optional implementations of some embodiments, in response to determining that the target image includes a subject, segmenting the target image to obtain an image of the subject and a background image may include:
in response to determining that the target image includes a subject, the target image is input to a pre-trained image segmentation network to obtain an image of the subject. The image segmentation network may include one of the following: the FCN (Fully Convolutional Network), SegNet (a semantic segmentation network), the DeepLab semantic segmentation network, PSPNet (Pyramid Scene Parsing Network), and Mask R-CNN (Mask Region-based CNN, an image instance segmentation network).
Second, a background image is determined based on the image of the subject and the target image.
Step 503, blurring the background image.
In some embodiments, the executing body blurs the background image obtained in step 502. Methods for blurring the background image may include, but are not limited to, at least one of the following: applying Gaussian blur to the background of the image, or applying uniform (mean) blur to the background of the image. As an example, the background image may be blurred using a normalized box filter or a bilateral filter to obtain a blurred background image.
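A normalized box filter, one of the blurring options named above, replaces each pixel with the mean of its k x k neighborhood. The NumPy sketch below shows this for a single 2-D channel with edge padding; in practice an optimized library routine (e.g. OpenCV's box filter) would be used instead of these explicit loops.

```python
import numpy as np

def normalized_box_blur(channel, k=3):
    """Mean (normalized box) filter over a k x k neighborhood of a 2-D channel.
    Edge pixels are handled by replicating the border ("edge" padding)."""
    pad = k // 2
    padded = np.pad(channel.astype(np.float64), pad, mode="edge")
    h, w = channel.shape
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(k):          # accumulate all k*k shifted copies
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)         # normalize by the window area
```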
Step 504, generating a blurred target image with category information based on the blurred background image, the image of the subject, and the category information.
In some embodiments, an executing subject of the method for processing an image may generate a blurred target image with category information based on the blurred background image, the image of the subject, and the category information. As an example, the blurred background image and the subject image may be fused by using OpenCV (Open Source Computer Vision Library) to obtain a fused image. And then adding the corresponding category information to the fused image to obtain the blurred target image with the category information.
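The generation step pastes the subject back over the blurred background and attaches the category information. The NumPy sketch below assumes single-channel images and a boolean subject mask, and returns the label alongside the image; actually drawing the label onto the image (e.g. with OpenCV's `cv2.putText`) is one possible final step, not shown here.

```python
import numpy as np

def generate_labeled_image(blurred_background, subject_image, mask, category_info):
    """Compose the final image: subject pixels where mask is True, blurred
    background elsewhere; the category label is carried along unchanged."""
    fused = np.where(mask, subject_image, blurred_background)
    return fused, category_info
```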
Some embodiments of the present disclosure disclose a method of processing an image. First, whether a subject is included in the target image is determined as the basis for blurring, so the cases where the target image does and does not include a subject can be processed differently. In response to determining that the target image includes a subject, the target image is segmented to obtain an image of the subject and a background image. The background image is then blurred to reduce its visual prominence. Finally, a blurred target image with category information is generated from the blurred background image, the image of the subject, and the category information. This method can simply, conveniently, and effectively blur the background of a target image that includes multiple types of subjects, thereby highlighting those subjects, where "multiple types of subjects" means that the subjects in the target image may be at least one kind of different object.
With further reference to FIG. 6, a flow 600 of further embodiments of a method for processing an image is shown. The flow 600 of the method for processing an image comprises the steps of:
step 601, determining whether the target image includes the subject.
Step 602, in response to determining that the target image includes a subject, segmenting the target image to obtain an image of the subject and a background image.
Step 603, blurring the background image.
In some embodiments, the specific implementation and technical effects of steps 601-603 may refer to steps 501-503 in those embodiments corresponding to fig. 5, which are not described herein again.
Step 604, processing the edge of the blurred background image and the edge of the image of the subject.
In some embodiments, the execution subject may process an edge of the blurred background image and an edge of the image of the subject. As an example, pixels of the edge of the blurred background image and pixels of the image of the subject may be normalized based on OpenCV (Open Source Computer Vision Library).
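One common way to realize this edge processing, offered here as an illustrative assumption since the embodiment only says the edges are "processed" or "normalized", is to feather the binary subject mask: averaging it over a small window turns the hard 0/1 boundary into a soft alpha matte, so the later fusion shows no visible seam.

```python
import numpy as np

def feather_mask(mask, k=3):
    """Soften a binary subject mask by box-averaging it, producing values
    in [0, 1] near the subject/background boundary (a feathered alpha matte)."""
    pad = k // 2
    padded = np.pad(mask.astype(np.float64), pad, mode="edge")
    h, w = mask.shape
    alpha = np.zeros((h, w), dtype=np.float64)
    for dy in range(k):          # sum the k*k shifted copies of the mask
        for dx in range(k):
            alpha += padded[dy:dy + h, dx:dx + w]
    return alpha / (k * k)       # fraction of subject pixels in each window
```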
Step 605, fusing the processed blurred background image and the processed image of the subject to obtain a blurred target image.
In some embodiments, the executing body may fuse the processed blurred background image and the processed image of the subject to obtain a blurred target image, which comprises the subject and the blurred background. Fusion can be performed at different levels: signal level, pixel level, feature level, or decision level. For example, based on OpenCV (Open Source Computer Vision Library), a fused image may be obtained by summing the pixel values of the overlapping region of the blurred background image and the subject image with certain weights.
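The weighted, pixel-level fusion described above reduces to a per-pixel convex combination. The sketch below is an illustrative assumption of how the overlap might be blended (the per-pixel weight `alpha` could, for example, come from a feathered subject mask); it is not the patent's literal implementation.

```python
import numpy as np

def weighted_fuse(blurred_background, subject_image, alpha):
    """Blend two images with per-pixel weights: alpha = 1 keeps the subject,
    alpha = 0 keeps the blurred background, values in between mix them."""
    return alpha * subject_image + (1.0 - alpha) * blurred_background
```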
And 606, adding the category information to the blurred target image to obtain the blurred target image with the category information.
In some embodiments, the execution subject may add the category information to the blurred target image to obtain the blurred target image with the category information.
As can be seen from fig. 6, compared with the description of some embodiments corresponding to fig. 5, the flow 600 of the method for processing an image in some embodiments corresponding to fig. 6 embodies the specific steps of fusing the image of the subject, the blurred background image, and the category information. Therefore, the scheme described in these embodiments fuses the image of the subject, the blurred background image, and the category information corresponding to the target image, so that the blurred target image with category information is obtained more naturally, reasonably, and effectively.
With further reference to fig. 7, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of an apparatus for processing images, which correspond to the method embodiments illustrated in fig. 5, and which may be applied in various electronic devices.
As shown in fig. 7, an apparatus 700 for processing an image of some embodiments includes: a determination unit 701, a segmentation unit 702, a blurring unit 703 and a generation unit 704. The determining unit 701 is configured to determine whether a subject is included in a target image based on category information corresponding to the target image, wherein the subject corresponds to at least one predetermined subject type; a segmentation unit 702 configured to segment the target image to obtain an image of the subject and a background image in response to determining that the target image includes the subject; a blurring unit 703 configured to blur the background image; a generating unit 704 configured to generate a blurred target image with category information based on the blurred background image, the image of the subject, and the category information.
In some optional implementations of some embodiments, the determining unit 701 is further configured to: and inputting the target image into a pre-trained subject recognition network to obtain information whether the target image comprises a subject.
In some optional implementations of some embodiments, the apparatus 700 may further include: a setting unit (not shown in the figure). Wherein the setting unit may be configured to set the pixels of the target image to a predetermined threshold in response to determining that the target image does not include the subject.
In some optional implementations of some embodiments, the segmentation unit 702 is further configured to: in response to determining that the target image includes a subject, inputting the target image into a pre-trained image segmentation network to obtain an image of the subject; determining a background image based on the image of the subject and the target image.
In some optional implementations of some embodiments, the generating unit 704 is further configured to: process the edge of the blurred background image and the edge of the image of the subject; fuse the processed blurred background image and the processed image of the subject to obtain a blurred target image; and add the category information to the blurred target image to obtain the blurred target image with category information.
It will be understood that the elements described in the apparatus 700 correspond to various steps in the method described with reference to fig. 5. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 700 and the units included therein, and will not be described herein again.
Referring now to fig. 8, a schematic diagram of an electronic device (e.g., the electronic device of fig. 1) 800 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, an electronic device 800 may include a processing means (e.g., central processing unit, graphics processor, etc.) 801 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage means 808 into a Random Access Memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the electronic device 800. The processing apparatus 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 8 illustrates an electronic device 800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 8 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through communications device 809, or installed from storage device 808, or installed from ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determine whether a subject is included in the target image based on category information corresponding to the target image, wherein the subject corresponds to at least one predetermined subject type; in response to determining that the target image includes a subject, segment the target image to obtain an image of the subject and a background image; blur the background image; and generate a blurred target image with category information based on the blurred background image, the image of the subject, and the category information.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may be described as: a processor including a determination unit, a segmentation unit, a blurring unit, and a generation unit. The names of these units do not, in some cases, limit the units themselves; for example, the determination unit may also be described as "a unit that determines whether or not a subject is included in the target image".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
According to one or more embodiments of the present disclosure, there is provided a method for processing an image, including: determining whether a subject is included in a target image based on category information corresponding to the target image, wherein the subject corresponds to at least one predetermined subject type; in response to determining that the target image includes a subject, segmenting the target image to obtain an image of the subject and a background image; blurring the background image; generating a blurred target image with category information based on the blurred background image, the image of the subject, and the category information.
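The four recited steps (determine whether a subject is present, segment it out, blur the background, recombine with the category information) can be sketched in a few lines. Everything below — the list-of-lists grayscale image format, the naive box blur, the `process` name, and the fill value 255 standing in for the "predetermined threshold" — is an illustrative assumption, not the disclosed implementation:

```python
def box_blur(img, k=3):
    """Naive box blur on a 2-D grayscale image (list of lists); illustrative only."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Average over a clipped k-by-k window centered on (y, x).
            vals = [img[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def process(target, mask, category, fill=255):
    """mask[y][x] == 1 marks the subject; assumed to come from a segmentation network."""
    if not any(any(row) for row in mask):
        # No subject detected: set every pixel to a predetermined value.
        return [[fill] * len(row) for row in target], None
    blurred = box_blur(target)
    # Keep subject pixels sharp, take background pixels from the blurred image.
    fused = [[target[y][x] if mask[y][x] else blurred[y][x]
              for x in range(len(target[0]))]
             for y in range(len(target))]
    return fused, category
```

A real implementation would replace the mask with a segmentation-network output and the box blur with a proper Gaussian or lens blur; the control flow mirrors the recited steps.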
According to one or more embodiments of the present disclosure, the method further includes: in response to determining that the target image does not include a subject, pixels of the target image are set to a predetermined threshold.
According to one or more embodiments of the present disclosure, the determining whether the target image includes the subject based on the category information corresponding to the target image includes: inputting the target image into a pre-trained subject recognition network based on the category information corresponding to the target image to obtain information indicating whether the target image includes a subject.
According to one or more embodiments of the present disclosure, the segmenting the target image to obtain an image of the subject and a background image in response to determining that the target image includes the subject includes: in response to determining that the target image includes a subject, inputting the target image into a pre-trained image segmentation network to obtain an image of the subject; determining a background image based on the image of the subject and the target image.
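One way to read "determining a background image based on the image of the subject and the target image" is mask complementation: background pixels are exactly those the segmentation network did not assign to the subject. A minimal sketch, assuming a binary mask output by the (unspecified) segmentation network and zero-fill for removed pixels:

```python
def split_subject_background(target, subject_mask):
    """Split a 2-D grayscale image into subject and background images.

    subject_mask is assumed to be a binary mask from a pre-trained
    segmentation network; the list-of-lists format and zero fill are
    illustrative assumptions."""
    h, w = len(target), len(target[0])
    subject_img = [[target[y][x] if subject_mask[y][x] else 0 for x in range(w)]
                   for y in range(h)]
    background_img = [[0 if subject_mask[y][x] else target[y][x] for x in range(w)]
                      for y in range(h)]
    return subject_img, background_img
```

Every pixel lands in exactly one of the two output images, so the pair losslessly partitions the target.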
According to one or more embodiments of the present disclosure, the generating a blurred target image with category information based on the blurred background image, the image of the subject, and the category information includes: processing the edge of the blurred background image and the edge of the image of the subject; fusing the processed blurred background image and the processed image of the subject to obtain a blurred target image; and adding the category information to the blurred target image to obtain the blurred target image with the category information.
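Under one plausible reading, the edge processing and fusion could be a feather-and-alpha-blend: soften the subject mask near its boundary, blend the subject over the blurred background across that soft edge, then attach the category information as metadata. The `feather` window size and the dictionary output format are assumptions for illustration:

```python
def feather(mask, k=3):
    """Average the binary mask over a clipped k-by-k window so its edge falls
    off smoothly; one possible reading of 'processing the edges'."""
    h, w = len(mask), len(mask[0])
    r = k // 2
    soft = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [mask[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            soft[y][x] = sum(vals) / len(vals)
    return soft

def fuse(blurred_bg, subject_img, mask, category):
    """Alpha-blend the subject over the blurred background across the feathered
    edge, then attach the category information as metadata (assumed format)."""
    soft = feather(mask)
    h, w = len(mask), len(mask[0])
    fused = [[soft[y][x] * subject_img[y][x] + (1 - soft[y][x]) * blurred_bg[y][x]
              for x in range(w)]
             for y in range(h)]
    return {"image": fused, "category": category}
```

Pixels deep inside the subject keep their original values (alpha 1), pixels far outside come entirely from the blurred background (alpha 0), and the feathered band in between hides the segmentation seam.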
According to one or more embodiments of the present disclosure, there is provided an apparatus for processing an image, including: a determining unit configured to determine whether a subject is included in a target image based on category information corresponding to the target image, wherein the subject corresponds to at least one predetermined subject type; a segmentation unit configured to segment the target image to obtain an image of the subject and a background image in response to determining that the target image includes the subject; a blurring unit configured to blur the background image; a generating unit configured to generate a blurred target image with category information based on the blurred background image, the image of the subject, and the category information.
According to one or more embodiments of the present disclosure, the determining unit is further configured to input the target image into a pre-trained subject recognition network to obtain information indicating whether the target image includes a subject.
According to one or more embodiments of the present disclosure, the apparatus may further include a setting unit (not shown in the figure). The setting unit may be configured to set the pixels of the target image to a predetermined threshold in response to determining that the target image does not include a subject.
According to one or more embodiments of the present disclosure, the segmentation unit is further configured to: input the target image into a pre-trained image segmentation network to obtain an image of the subject in response to determining that the target image includes a subject; determine a background image based on the image of the subject and the target image; and input the image of the subject into a pre-trained classification network to output the category information.
According to one or more embodiments of the present disclosure, the generating unit is further configured to: process the edge of the blurred background image and the edge of the image of the subject; fuse the processed blurred background image and the processed image of the subject to obtain a blurred target image; and add the category information to the blurred target image to obtain the blurred target image with the category information.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as described in any of the embodiments above.
According to one or more embodiments of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the method as described in any of the embodiments above.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (12)

1. A method for processing an image, comprising:
determining whether a subject is included in a target image based on category information corresponding to the target image, wherein the subject corresponds to at least one predetermined subject type;
in response to determining that the target image includes a subject, segmenting the target image to obtain an image of the subject and a background image;
blurring the background image;
and generating a blurred target image with category information based on the blurred background image, the image of the subject, and the category information.
2. The method of claim 1, wherein the method further comprises:
in response to determining that the target image does not include a subject, setting pixels of the target image to a predetermined threshold.
3. The method of claim 1, wherein the determining whether the target image includes the subject based on the category information corresponding to the target image comprises:
inputting the target image into a pre-trained subject recognition network based on the category information corresponding to the target image to obtain information indicating whether the target image includes a subject.
4. The method of claim 1, wherein said segmenting the target image to obtain an image of the subject and a background image in response to determining that the target image includes the subject comprises:
in response to determining that the target image includes a subject, inputting the target image into a pre-trained image segmentation network to obtain an image of the subject;
determining a background image based on the image of the subject and the target image.
5. The method of claim 1, wherein generating the blurred target image with category information based on the blurred background image, the image of the subject, and the category information comprises:
processing the edge of the blurred background image and the edge of the image of the subject;
fusing the processed blurred background image and the processed image of the subject to obtain a blurred target image;
and adding the category information to the blurred target image to obtain the blurred target image with the category information.
6. An apparatus for processing an image, comprising:
a determining unit configured to determine whether a subject is included in a target image based on category information corresponding to the target image, wherein the subject corresponds to at least one predetermined subject type;
a segmentation unit configured to segment the target image, resulting in an image of the subject and a background image, in response to determining that the target image includes the subject;
a blurring unit configured to blur the background image;
a generating unit configured to generate a blurred target image with category information based on the blurred background image, the image of the subject, and the category information.
7. The apparatus of claim 6, wherein the apparatus further comprises:
a setting unit configured to set pixels of the target image to a predetermined threshold in response to determining that the target image does not include a subject.
8. The apparatus of claim 6, wherein the determination unit is further configured to:
inputting the target image into a pre-trained subject recognition network based on the category information corresponding to the target image to obtain information indicating whether the target image includes a subject.
9. The apparatus of claim 6, wherein the segmentation unit is further configured to:
in response to determining that the target image includes a subject, inputting the target image into a pre-trained image segmentation network to obtain an image of the subject;
determining a background image based on the image of the subject and the target image.
10. The apparatus of claim 6, wherein the generating unit is further configured to:
processing the edge of the blurred background image and the edge of the image of the subject;
fusing the processed blurred background image and the processed image of the subject to obtain a blurred target image;
and adding the category information to the blurred target image to obtain the blurred target image with the category information.
11. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-5.
12. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any one of claims 1-5.
CN202010675409.9A 2020-07-14 2020-07-14 Method, apparatus, device and computer readable medium for processing image Pending CN111815505A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010675409.9A CN111815505A (en) 2020-07-14 2020-07-14 Method, apparatus, device and computer readable medium for processing image


Publications (1)

Publication Number Publication Date
CN111815505A true CN111815505A (en) 2020-10-23

Family

ID=72864745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010675409.9A Pending CN111815505A (en) 2020-07-14 2020-07-14 Method, apparatus, device and computer readable medium for processing image

Country Status (1)

Country Link
CN (1) CN111815505A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610149A (en) * 2017-09-25 2018-01-19 北京奇虎科技有限公司 Image segmentation result edge optimization processing method, device and computing device
CN108038817A (en) * 2017-10-30 2018-05-15 努比亚技术有限公司 A kind of image background weakening method, terminal and computer-readable recording medium
WO2020000879A1 (en) * 2018-06-27 2020-01-02 北京字节跳动网络技术有限公司 Image recognition method and apparatus
WO2020078027A1 (en) * 2018-10-15 2020-04-23 华为技术有限公司 Image processing method, apparatus and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG WEI; LI JUNSHAN; SHI DEQIN: "Spatio-temporal combined infrared moving target extraction algorithm", Opto-Electronic Engineering, no. 05 *

Similar Documents

Publication Publication Date Title
CN109816589B (en) Method and apparatus for generating cartoon style conversion model
CN108710885B (en) Target object detection method and device
CN111368685A (en) Key point identification method and device, readable medium and electronic equipment
CN110288625B (en) Method and apparatus for processing image
CN109377508B (en) Image processing method and device
CN110516678B (en) Image processing method and device
CN111784712B (en) Image processing method, device, equipment and computer readable medium
CN112381717A (en) Image processing method, model training method, device, medium, and apparatus
CN111757100B (en) Method and device for determining camera motion variation, electronic equipment and medium
CN110211195B (en) Method, device, electronic equipment and computer-readable storage medium for generating image set
CN115272182B (en) Lane line detection method, lane line detection device, electronic equipment and computer readable medium
CN111783777B (en) Image processing method, apparatus, electronic device, and computer readable medium
CN112418249A (en) Mask image generation method and device, electronic equipment and computer readable medium
CN111815654A (en) Method, apparatus, device and computer readable medium for processing image
CN110636331B (en) Method and apparatus for processing video
CN111784726A (en) Image matting method and device
CN111815505A (en) Method, apparatus, device and computer readable medium for processing image
CN111915532B (en) Image tracking method and device, electronic equipment and computer readable medium
CN111369472B (en) Image defogging method and device, electronic equipment and medium
CN112085035A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN111797931A (en) Image processing method, image processing network training method, device and equipment
CN111815656B (en) Video processing method, apparatus, electronic device and computer readable medium
CN112508801A (en) Image processing method and computing device
CN110599437A (en) Method and apparatus for processing video
CN112215789B (en) Image defogging method, device, equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: Tiktok vision (Beijing) Co.,Ltd.
