WO2022213798A1 - Image processing method, apparatus, electronic device and storage medium - Google Patents


Info

Publication number
WO2022213798A1
WO 2022/213798 A1 (application PCT/CN2022/081938; CN 2022081938 W)
Authority
WO
WIPO (PCT)
Prior art keywords: processed, image, images, preset, area
Application number
PCT/CN2022/081938
Other languages
English (en)
French (fr)
Inventor
林倩霞 (Lin Qianxia)
陆露 (Lu Lu)
Original Assignee
北京字跳网络技术有限公司 (Beijing Zitiao Network Technology Co., Ltd.)
Application filed by 北京字跳网络技术有限公司 (Beijing Zitiao Network Technology Co., Ltd.)
Priority application: US 18/551,684, published as US 2024/0177272 A1
Publication: WO 2022/213798 A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/24 Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30204 Marker

Definitions

  • the present disclosure relates to the field of information technology, and in particular, to an image processing method, apparatus, electronic device, and storage medium.
  • a terminal or a server can process an existing image to obtain a processed image.
  • the embodiments of the present disclosure provide an image processing method, apparatus, electronic device and storage medium, which realize fusion processing of images to be processed, enrich the image processing modes, and help improve the user experience.
  • Embodiments of the present disclosure provide an image processing method, including:
  • the plurality of images to be processed and the one or more first preset images are subjected to fusion processing to obtain one or more fused target images, and the target image includes the objects to be processed corresponding to the plurality of images to be processed respectively;
  • the one or more fused target images are displayed.
  • Embodiments of the present disclosure also provide an image processing apparatus, including:
  • an acquisition module, configured to acquire multiple images to be processed;
  • a fusion module, configured to perform fusion processing on the plurality of images to be processed and one or more first preset images in response to obtaining a fusion instruction, to obtain one or more fused target images, the target images including the objects to be processed corresponding to the plurality of images to be processed respectively;
  • a display module configured to display the one or more fused target images.
  • Embodiments of the present disclosure also provide an electronic device, the electronic device comprising:
  • one or more processors;
  • a storage device for storing one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the image processing method as described above.
  • Embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the above-mentioned image processing method.
  • Embodiments of the present disclosure also provide a computer program product, where the computer program product includes a computer program or instructions, and when the computer program or instructions are executed by a processor, implement the image processing method as described above.
  • the image processing method provided by the embodiments of the present disclosure obtains one or more fused target images by performing fusion processing on multiple images to be processed and one or more first preset images, where the target image includes the objects to be processed corresponding to the multiple images to be processed respectively. This realizes the processing of the images to be processed, enriches the image processing modes, and helps to improve the user experience and the interest in the user's use process.
  • FIG. 1 is a flowchart of an image processing method in an embodiment of the disclosure
  • FIG. 2 is a schematic diagram of a display interface in an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a first user interface in an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of a second user interface in an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a third user interface in an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of a user interface in an embodiment of the disclosure.
  • FIG. 7 is a schematic diagram of a captured image in an embodiment of the disclosure.
  • FIG. 8 is a schematic diagram of another captured image in an embodiment of the disclosure.
  • FIG. 9 is a schematic diagram of a fourth user interface in an embodiment of the disclosure.
  • FIG. 10 is a flowchart of another image processing method in an embodiment of the disclosure.
  • FIG. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the disclosure.
  • FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
  • the term "including" and variations thereof are open-ended inclusions, i.e., "including but not limited to".
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • FIG. 1 is a flowchart of an image processing method in an embodiment of the present disclosure. This embodiment can be applied to the case where image processing is performed in a client.
  • the method can be executed by an image processing apparatus, and the apparatus can be implemented by software and/or hardware;
  • the apparatus can be configured in an electronic device, such as a terminal, specifically including but not limited to smart phones, PDAs, tablet computers, wearable devices with display screens, desktop computers, notebook computers, all-in-one computers, smart home devices, etc.
  • the method may specifically include:
  • Step 110 Acquire multiple images to be processed.
  • acquiring a plurality of images to be processed includes:
  • when a trigger operation for the preset identifier is detected, a user interface is displayed; the user interface includes a first area and a second area, the first area is used to display the first image to be processed, and the second area is used to display the second image to be processed, the second image to be processed including an image collected by the photographing device; the first image to be processed and the second image to be processed are then acquired.
  • the schematic diagram of the first user interface as shown in FIG. 3 is displayed; the user interface includes a first area 310 and a second area 320, the first area 310 is used to display the first image to be processed, and the second area 320 is used to display the second image to be processed. The second area 320 can be understood as the main screen of the user interface and has a larger display area, while the first area 310 is a part of the second area 320 and has a smaller display area.
  • Different preset identifiers 210 may represent image processing modes with different effects, that is, image objects participating in fusion are different, so the effects of target images obtained through image processing are different.
  • the background of the target image obtained by triggering the spring-related preset identifier 210 may be a spring scene,
  • and the background of the target image obtained by triggering the winter-related preset identifier 210 may be a winter scene, such as white snow.
  • Different preset identifiers 210 may also represent different props, and by providing a plurality of optional props with different special effects (ie, preset identifiers 210 ), the gameplay of the props is increased, thereby improving the user experience.
  • the user interface further includes one or more second preset images 410; before acquiring the first image to be processed and the second image to be processed, the image processing method of this embodiment further includes: displaying, in the first area 310, a first image to be processed selected from the one or more second preset images.
  • the first image to be processed displayed in the first area 310 may be an image selected by the user from the plurality of second preset images 410.
  • after a trigger operation for the preset identifier 210 is detected, the third user interface shown in FIG. 5 is displayed; in this case, the user interface only displays the first area 310 and the second area 320.
  • in this embodiment, the case where the image to be processed is a face image is taken as an example for description.
  • the second preset image may be a single-person photo or a multi-person photo. If the second preset image is a single-person photo, the second preset image is directly displayed in the first area, or the face in the second preset image is cropped out and the face image is displayed in the first area. If the second preset image is a photo of a group of people, the largest, and/or clearest, and/or most frontal face in the photo can be identified as the first image to be processed and displayed in the first area.
  • each face region in the second preset image is separately identified, and the user selects which face region image to display in the first region as the first image to be processed.
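The "largest, clearest, most frontal" selection rule described above can be sketched as a simple scoring function. The field names, the weighting, and the score formula below are illustrative assumptions for the sketch, not part of the disclosure:

```python
# Hypothetical sketch: pick which face in a group photo to show in the
# first area, per the "largest / clearest / most frontal" rule above.
def pick_face(faces):
    """faces: list of dicts with illustrative keys:
    'area' (pixels), 'sharpness' (0..1), 'yaw_deg' (0 = frontal)."""
    def score(f):
        # Larger, sharper, more frontal faces score higher; the
        # multiplicative weighting is an assumption for illustration.
        frontalness = max(0.0, 1.0 - abs(f["yaw_deg"]) / 90.0)
        return f["area"] * f["sharpness"] * frontalness
    return max(faces, key=score)

faces = [
    {"id": "A", "area": 4000, "sharpness": 0.9, "yaw_deg": 5},
    {"id": "B", "area": 9000, "sharpness": 0.4, "yaw_deg": 40},
]
best = pick_face(faces)  # the smaller but sharper, more frontal face wins here
```

In a real implementation the score inputs would come from a face detector; the disclosure also allows letting the user choose a face directly, which would bypass this scoring entirely.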
  • the positions of the first area and the second area on the user interface can also be referred to as shown in FIG. 6 , which includes a first area 601 and a second area 602 .
  • the first area 310 displays a third preset image; when a selection instruction of the user selecting the first image to be processed from the plurality of second preset images is detected, the first image to be processed selected by the user replaces the third preset image, that is, the first image to be processed is displayed in the first area 310; in other words, the third preset image in the first area 310 is replaced with the selected one of the one or more second preset images.
  • the third preset image may be a system default template image or a base image.
  • the second image to be processed displayed in the second area can be fused with the third preset image to obtain a fused image, that is, the second image to be processed is merged into the third preset image. Taking the case where the second image to be processed and the third preset image are both human images containing faces as an example, when the two are fused, the face image in the second image to be processed can be fused with the third preset image to obtain a fused image; combined with the background image in the third preset image, the image processing modes can be enriched, and the user experience and interest can be enhanced.
  • when the first image to be processed selected by the user is displayed in the first area, the second image to be processed displayed in the second area is fused with the first image to be processed displayed in the first area to obtain a fused image.
  • if the first image to be processed and the second image to be processed are both photos taken in real time by a photographing device (for example, the front camera or rear camera of the terminal), the image processing method further includes:
  • a captured image of the photographing device is acquired, and the captured image includes a first object to be processed and a second object to be processed; a first image to be processed corresponding to the first object to be processed and a second image to be processed corresponding to the second object to be processed are acquired from the captured image; the first image to be processed is displayed in the first area, and the second image to be processed is displayed in the second area.
  • the first object to be processed in the first image to be processed includes an object that satisfies a preset condition among the objects collected by the photographing device other than the second object to be processed; the second object to be processed includes the object to be processed in the second image to be processed. That is, the first object to be processed and the second object to be processed are two different objects. Taking face images as an example, the first object to be processed and the second object to be processed correspond to different faces; for example, the first object to be processed is the face area of user A, and the second object to be processed is the face area of user B.
  • for example, the captured image includes two face images; acquiring a first image to be processed corresponding to the first object to be processed and a second image to be processed corresponding to the second object to be processed from the captured image includes: recognizing each face in the captured image, and marking each recognized face with a rectangular frame.
  • in the schematic diagram of a captured image shown in FIG. 7, the image in the area occupied by the larger rectangular frame 610 can be determined as the second image to be processed, and the face 611 in it is the second object to be processed;
  • the image in the area occupied by the rectangular frame 620 is determined as the first image to be processed, and the face 621 therein is the first object to be processed.
  • in order to highlight the second image to be processed, the rectangular frame 610 can be enlarged; for example, the enlarged rectangular frame 710 is larger than the rectangular frame 620.
  • the face area with the highest definition in the captured image is taken as the second image to be processed, and the other face area is taken as the first image to be processed.
  • the face image that enters the lens first is used as the second image to be processed, and the face image that enters the lens later is used as the first image to be processed.
  • each face image is identified and the user defines which face image is to be used as the first image to be processed and which face image is to be used as the second image to be processed.
  • a user can drag a specific face image to the first area or the second area by dragging.
  • the face image dragged to the first area is the first image to be processed, and the face image dragged to the second area is the second image to be processed.
  • the dragged face image can be automatically resized according to the size of the area.
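The automatic resizing of a dragged face image to fit the target area can be sketched as a uniform, aspect-preserving scale. The fit-inside policy below is an assumed interpretation of "resized according to the size of the area":

```python
def fit_to_area(face_w, face_h, area_w, area_h):
    # Uniform scale so the dragged face image fits inside the target
    # area while keeping its aspect ratio (assumed resize policy).
    scale = min(area_w / face_w, area_h / face_h)
    return round(face_w * scale), round(face_h * scale)

# A 200x100 face dropped onto a 100x100 first area shrinks to 100x50.
w, h = fit_to_area(200, 100, 100, 100)
```

A real UI might instead crop to fill the area; the disclosure does not say which, so this is only one plausible behaviour.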
  • the first image to be processed corresponding to the first object to be processed and the second image to be processed corresponding to the second object to be processed are obtained from the captured image as follows: the face image of the first user entering the lens is taken as the second image to be processed, and the face image of the second user entering the lens is taken as the first image to be processed; or the face image with the highest definition is taken as the second image to be processed, and the clearest face image among the remaining face images is taken as the first image to be processed; or the largest face area is taken as the second image to be processed, and the second largest face area is taken as the first image to be processed.
  • each face area is identified, and the user autonomously selects which face image is to be used as the first image to be processed and which face image is to be used as the second image to be processed.
  • the objects to be processed that meet the preset conditions include at least one of the following: the object to be processed that first enters the acquisition range of the photographing device; the object to be processed with the largest size in the captured images of the photographing device; the photographing device The object to be processed in the acquired image with the highest definition; the object to be processed with the smallest difference between the angle and the preset angle (that is, taking a face image as an example, try to select a face image that is close to the frontal face).
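The role assignment described above (the second image to be processed goes to whichever face ranks first under the chosen criterion, the first image to the runner-up) can be sketched with a pluggable ranking key. The dictionary layout and the example criterion are illustrative assumptions:

```python
def assign_roles(faces, key):
    """Split detected faces into (second, first) images to be processed
    per one of the ordering rules above; 'key' selects the rule
    (size, sharpness, entry order, frontalness, ...)."""
    ranked = sorted(faces, key=key, reverse=True)
    # Top-ranked face becomes the second (main-area) image,
    # the runner-up becomes the first (small-area) image.
    return ranked[0], ranked[1]

detected = [{"id": "A", "area": 9000}, {"id": "B", "area": 4000}]
second_img, first_img = assign_roles(detected, key=lambda f: f["area"])
```

Using `key=lambda f: -f["entry_time"]` (earliest entry ranks first) or a sharpness field would implement the other listed criteria under the same sketch.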
  • displaying the first image to be processed in the first area includes: replacing the third preset image in the first area with the first image to be processed for display.
  • the interface includes a first area 310 and a second area 320, and the user interface is similar to a video chat interface; the first area 310 displays a first image to be processed (for example, a face image), and the second area 320 displays a second image to be processed (for example, a face image); both the first image to be processed and the second image to be processed are captured in real time by a camera. The symbol 810 represents a shooting button, and the shooting interface can be exited by triggering the shooting button 810.
  • if the first object to be processed moves out of the capture range of the photographing device, the third preset image will be restored and displayed in the first area 310, that is, the system default template image or base image is displayed in the first area 310.
  • the second object to be processed moves out of the capture range of the photographing device, the first image to be processed is displayed in the second area, and the third preset image is restored to display in the first area . That is, when the second object to be processed leaves the lens, the first object to be processed is moved to the second area for display, and the system default template image or base image is restored and displayed in the first area.
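The fallback described in this paragraph, promoting the first image to the second area and restoring the default template when the second object leaves the lens, can be sketched as a small state update. The state dictionary is an illustrative stand-in for the real UI state:

```python
def on_second_object_left(state):
    """Assumed UI state update for when the second object to be
    processed leaves the camera's capture range: the first image is
    promoted to the second (main) area, and the system default
    template/base image returns to the first area."""
    state["second_area"] = state["first_area"]   # promote first image
    state["first_area"] = state["template"]      # restore default template
    return state

ui = {"first_area": "face_A", "second_area": "face_B", "template": "default"}
ui = on_second_object_left(ui)
```

The assignment order matters: the first area's content must be copied into the second area before it is overwritten by the template.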
  • the user can be prompted, by voice or text, to face the camera so that a frontal image of the user can be captured; if a frontal image has not been captured, an image processing algorithm can be used to adjust the slightly tilted face image into a frontal face image.
  • the second image to be processed is an image captured in real time; in a scene where multiple users are within the shooting range of the camera when the second image to be processed is acquired, that is, where multiple objects to be processed enter the collection range of the photographing device (e.g., the shooting range of the camera), the second object to be processed in the second image to be processed is an object to be processed that satisfies the preset conditions.
  • the objects to be processed that meet the preset conditions include at least one of the following: the object to be processed that first enters the acquisition range of the photographing device; the object to be processed with the largest size in the acquired image of the photographing device; the sharpness in the acquired image of the photographing device The highest object to be processed; the object to be processed with the smallest difference between the angle and the preset angle (ie, an image that is as close to the frontal face as possible). That is, the user image that enters the lens first, or the largest user image, or the clearest user image, or the user image with the most positive face is displayed in the second area.
  • Step 120 In response to obtaining a fusion instruction, perform fusion processing on the plurality of images to be processed and one or more first preset images to obtain one or more fused target images, where the target images include the The objects to be processed corresponding to the multiple images to be processed respectively.
  • the fusion instruction may be an instruction generated when the user triggers a preset fusion icon, or may be an instruction triggered when a blank position of the display screen is touched.
  • if the image to be processed is an image including a person, the first preset image may also be an image including a person.
  • performing fusion processing on the image to be processed and the first preset image may include replacing the face position in the first preset image with the face in the image to be processed to obtain the target image; in this case, the face in the image to be processed and the face in the target image are the same face. It may also include replacing the face position in the first preset image with the face in the image to be processed to obtain the target image, and modifying and beautifying the face according to the special effect of the first preset image; in this case, the face in the target image is the beautified version of the face in the image to be processed. It may further include fusing the face in the image to be processed with the face in the first preset image; in this case, the face in the image to be processed and the face in the target image are different faces.
  • the special effect gameplay provided by the above image processing enriches the image processing mode and enhances the user experience and interest.
  • for example, the first image to be processed and the second image to be processed are both images each including one person, and the first preset image is an image including two persons; the fusion processing of the images to be processed and the first preset image is then specifically: replacing the position of one person's face in the first preset image with the face in the first image to be processed, and replacing the position of the other person's face in the first preset image with the face in the second image to be processed, to obtain a target image composed of the face in the first image to be processed, the face in the second image to be processed, and the bodies and background of the two characters in the first preset image. In this way, the clothes, dress and postures of the characters in the first preset image expand the poses the users can strike when taking pictures and the clothes and dress of the users in the pictures, improving the user's gameplay experience.
  • the faces in the to-be-processed images can also be sequentially replaced with the positions of the faces in each first preset image to obtain multiple corresponding target images, thereby improving the User's gameplay experience.
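The loop described above, producing one target image per first preset image, can be sketched as below. `swap_face` here is a trivial stand-in for the actual face-fusion algorithm, which the disclosure does not specify; the preset-image structure is also an illustrative assumption:

```python
def fuse_all(faces, presets, swap_face):
    """Sketch of Step 120: paste each face to be processed into the
    face slots of every first preset image, yielding one fused
    target image per preset."""
    targets = []
    for preset in presets:
        # Copy the preset so each target image is independent.
        img = {"background": preset["background"], "faces": list(preset["faces"])}
        for slot, face in enumerate(faces):
            img = swap_face(img, slot, face)
        targets.append(img)
    return targets

def swap_face(img, slot, face):
    # Illustrative stand-in for the real fusion algorithm: replace
    # the face occupying the given slot with the face to be processed.
    img["faces"][slot] = face
    return img

presets = [{"background": "spring", "faces": ["M", "N"]},
           {"background": "winter", "faces": ["M", "N"]}]
targets = fuse_all(["A", "B"], presets, swap_face)
```

With two presets and two faces, this yields two target images, each keeping its preset's background while carrying the users' faces.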
  • Step 130 Display the one or more fused target images.
  • a small video with effects can be generated based on the target image, for example, the shots of one or more fused target images are played in sequence from near to far (similar to the automatic playback effect of a slideshow), and Displaying dynamic effects such as shining little stars enhances the processing effect of image fusion and improves the user experience.
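The "near to far" slideshow-style playback can be sketched as a per-frame zoom-factor schedule; the start and end zoom values below are arbitrary illustrative choices, not values from the disclosure:

```python
def zoom_sequence(n_frames, start=1.5, end=1.0):
    # Assumed near-to-far playback: a linearly decreasing zoom factor
    # per frame, similar to a slideshow's automatic zoom-out effect.
    if n_frames == 1:
        return [end]
    step = (end - start) / (n_frames - 1)
    return [round(start + i * step, 3) for i in range(n_frames)]

frames = zoom_sequence(3)  # one factor per displayed frame
```

A renderer would apply each factor to the target image in turn; overlay effects such as the "shining little stars" would be composited per frame on top.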
  • the image processing method provided in this embodiment obtains one or more fused target images by performing fusion processing on a plurality of images to be processed and one or more first preset images, where the target image includes the objects to be processed corresponding to the plurality of images to be processed respectively; this realizes the processing of the images to be processed, enriches the image processing modes, and helps to improve the user experience and the interest in the user's use process.
  • FIG. 10 is a flowchart of another image processing method in an embodiment of the disclosure.
  • this embodiment provides an optional implementation of step 120, "in response to obtaining a fusion instruction, perform fusion processing on the plurality of images to be processed and one or more first preset images to obtain one or more fused target images, the target image including the objects to be processed corresponding to the plurality of images to be processed respectively".
  • the same or similar content as in the above-mentioned embodiment will not be explained in this embodiment, and reference may be made to the above-mentioned embodiment for related content.
  • the image processing method includes the following steps:
  • Step 1010 When a trigger operation for a preset identifier is detected, display a user interface, where the user interface includes a first area and a second area; the first area is used to display a first image to be processed, and the second area is used to display a second image to be processed, where the second image to be processed includes an image captured by the photographing device.
  • Step 1020 Acquire the first image to be processed and the second image to be processed.
  • Step 1030 Acquire a first object to be processed from the first image to be processed, and acquire a second object to be processed from the second image to be processed.
  • acquiring the first object to be processed from the first image to be processed includes: in the case where the first image to be processed includes multiple objects to be processed, acquiring the object that satisfies a preset condition in the first image to be processed as the first object to be processed.
  • for example, if the first image to be processed is a photo of a group of people, the largest, and/or clearest, and/or most frontal face in it can be identified as the first object to be processed.
  • acquiring the first object to be processed from the first image to be processed includes: in the case that the first image to be processed includes a plurality of objects to be processed, processing a plurality of objects in the first image to be processed The objects to be processed are identified respectively; the identification information of each object to be processed in the plurality of objects to be processed is displayed; in response to a selection instruction for acquiring the identification information of the first object to be processed in the plurality of objects to be processed, the first object to be processed is acquired The first object to be processed in the image.
  • the first image to be processed is a photo of a group of people, each of the face image regions can be circled separately, and the user can choose which face image region to use as the first object to be processed.
  • the first object to be processed and the second object to be processed can also be directly acquired, without first acquiring the first image to be processed and the second image to be processed and then acquiring the first object to be processed from the first image to be processed and the second object to be processed from the second image to be processed.
  • for example, an image is collected by the photographing device, and the image includes a first object to be processed and a second object to be processed. The first object to be processed and the second object to be processed can be identified directly from this full image; there is no need to first identify the rectangular image including the first object to be processed as the first image to be processed and the rectangular image including the second object to be processed as the second image to be processed, and then identify the first object to be processed based on the first image to be processed and the second object to be processed based on the second image to be processed.
• Step 1040: In response to obtaining a fusion instruction, fuse the first object to be processed and the second object to be processed into one or more first preset images to obtain one or more fused target images, where each target image includes the fused first object to be processed and the fused second object to be processed.
  • the fusion instruction can be triggered by a preset fusion icon, button or control, or can be triggered directly by touching a blank area of the screen.
• fusing the first object to be processed and the second object to be processed into one or more first preset images to obtain one or more fused target images includes: selecting a target preset image from a plurality of first preset images, the target preset image including a third object to be processed and a fourth object to be processed; replacing either one of the third object to be processed and the fourth object to be processed with the first object to be processed, and replacing the remaining one of the two with the second object to be processed, to obtain a fused target image.
• for example, the target preset image is an image of two people, M (who can be regarded as the third object to be processed) and N (who can be regarded as the fourth object to be processed), the first object to be processed is the face image of user A, and the second object to be processed is the face image of user B. The image fusion may then fuse N's face in the target preset image with user A's face and M's face with user B's face to obtain a target image, or fuse N's face with user B's face and M's face with user A's face to obtain a target image.
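One hedged way to express this either-or replacement in code (Python; the slot names "M"/"N" and the string face labels are placeholders for the actual face data, not from the disclosure):

```python
def fuse_into_preset(preset_faces, face_a, face_b, a_replaces):
    """Replace one of the preset's two face slots with face_a and the
    remaining slot with face_b, mirroring the replacement described above."""
    if a_replaces not in preset_faces:
        raise ValueError("unknown slot: " + a_replaces)
    # the slot not taken by user A's face receives user B's face
    other = next(k for k in preset_faces if k != a_replaces)
    return {a_replaces: face_a, other: face_b}

# N (the fourth object) takes user A's face, so M (the third) takes user B's
target = fuse_into_preset({"M": "face_M", "N": "face_N"},
                          face_a="face_userA", face_b="face_userB",
                          a_replaces="N")
```

Passing `a_replaces="M"` instead would yield the alternative pairing described in the text.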
• in this way, the clothing, dress and posture of the characters in the target preset image can be borrowed, which expands the user's poses, clothing and dress when taking pictures, improves the gameplay experience and the image processing effect, and makes the feature more fun to use.
  • the specific process of fusion processing is illustrated by taking the fusion processing of N's face and user A's face as an example: in one embodiment, the target image is obtained by directly replacing N's face with the face of user A.
• at this time, the fourth object to be processed (ie, the face of N) is the same as the first object to be processed (ie, the face of user A).
• alternatively, the face of N can be replaced with the face of user A, and the face of user A can then be modified and beautified based on the special effects of N to obtain the target image, for example by adding bright flash special effects such as small stars.
• alternatively, the face of N and the face of user A can be fused to obtain the target image; at this time, the fourth object to be processed (ie, the face of N) differs from the first object to be processed (ie, the face of user A).
  • the special effects gameplay provided by the above image processing enriches the image processing mode and improves the user's gameplay experience.
• the plurality of first preset images may cover many different styles.
  • the terminal selects a target preset image from a plurality of first preset images, and performs fusion processing based on the target preset image.
• each time, the selected target preset image is different from the target preset image selected in the previous fusion processing, so that the user can experience image effects of various styles.
• for example, when the terminal performs fusion processing for the first time it selects the first preset image, when it performs fusion processing for the second time it selects the second preset image, and so on, so that a different preset image is selected for each fusion and the target images obtained after each fusion differ, thereby improving the user experience.
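The round-robin behaviour described here might be sketched as (Python; `PresetCycler` is an illustrative name, not from the disclosure):

```python
class PresetCycler:
    """Choose a different first preset image for each fusion, wrapping
    around once every preset has been used."""
    def __init__(self, presets):
        self.presets = list(presets)
        self.index = 0

    def next_preset(self):
        # pick the next preset in order, then advance for the next fusion
        preset = self.presets[self.index % len(self.presets)]
        self.index += 1
        return preset

cycler = PresetCycler(["preset1", "preset2", "preset3"])
order = [cycler.next_preset() for _ in range(4)]  # wraps back to preset1
```

Persisting `index` across sessions, or randomising without repeats, would be alternative designs with the same "different from last time" property.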
• fusing the first object to be processed and the second object to be processed into one or more first preset images to obtain one or more fused target images includes: displaying a plurality of first preset images; and, in response to obtaining a selection instruction for a target preset image among the plurality of first preset images, replacing either one of the third object to be processed and the fourth object to be processed in the target preset image with the first object to be processed, and replacing the remaining one of the two with the second object to be processed, to obtain a fused target image.
• that is, the terminal displays a plurality of first preset images, the user independently selects which first preset image to use for image fusion, and the terminal performs image fusion based on the target preset image selected by the user.
• fusing the first object to be processed and the second object to be processed into one or more first preset images to obtain one or more fused target images includes: for each first preset image among a plurality of first preset images, replacing either one of the third object to be processed and the fourth object to be processed in that first preset image with the first object to be processed, and replacing the remaining one of the two with the second object to be processed, to obtain a plurality of fused target images; and displaying the plurality of fused target images. For example, when the multiple fused target images are displayed in the same user interface, the user can select one of them.
• alternatively, the terminal may automatically display the multiple fused target images in a loop, that is, the multiple target images are automatically displayed one by one in sequence on the terminal screen, so as to make the interactive interface friendlier and improve the user experience.
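The looped display order could be modelled as a bounded sequence (Python; a sketch of the cycling behaviour only, not the terminal's actual rendering code):

```python
from itertools import cycle, islice

def display_sequence(targets, steps):
    """Order in which the fused target images appear when the terminal
    loops through them automatically, truncated to `steps` frames."""
    return list(islice(cycle(targets), steps))

sequence = display_sequence(["target1", "target2"], steps=5)
```

In practice each step would be tied to a display duration or a swipe gesture rather than a fixed frame count.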
• the image processing method further includes: in response to acquiring an interchange instruction, swapping the first object to be processed and the second object to be processed in the fused target image.
• the obtained target image may not be satisfactory to the user; for example, the target image includes two people, in which the face of person M corresponds to the face of user B and the face of person N corresponds to the face of user A, and user A or user B is not satisfied with the current target image and wants to try other fusion results.
  • the user can trigger the exchange instruction.
• after the terminal receives the interchange instruction, it fuses the face of person M with the face of user A and fuses the face of person N with the face of user B, in order to enhance the interest of the use process and provide the user with a satisfactory result.
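The interchange step might look like this in code (Python; the mapping keys "M"/"N" are illustrative stand-ins for the two preset persons, and the string labels stand in for face data):

```python
def interchange(target):
    """Swap which user face is fused onto each preset person, as
    triggered by the interchange instruction."""
    swapped = dict(target)
    swapped["M"], swapped["N"] = target["N"], target["M"]
    return swapped

# before the swap: M carries user B's face, N carries user A's face
swapped = interchange({"M": "face_userB", "N": "face_userA"})
# after the swap:  M carries user A's face, N carries user B's face
```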
• the first image to be processed displayed in the first area can also be a default template image of the system, and the second image to be processed displayed in the second area can be the user's own image captured by the photographing device, in which case the user's image is fused with the template image; alternatively, both the first image to be processed displayed in the first area and the second image to be processed displayed in the second area can be the user's own images captured by the photographing device, in which case the user's own image may be fused with his or her own image.
  • Step 1050 Display one or more fused target images.
• in response to acquiring a sharing instruction, the target image can also be shared with friends or to other platforms such as forums, microblogs, or a circle of friends.
• the image processing method provided by the embodiment of the present disclosure provides an optional implementation manner for the fusion processing.
  • the essence of fusion processing is face-changing processing, that is, the face in the first preset image is fused with the face in the image to be processed.
• this processing borrows the clothing, dress and posture of the characters in the first preset image, expands the user's poses, clothing and dress when taking pictures, improves the gameplay experience and the image processing effect, and enhances the fun of using this image processing function and the user's enjoyment of using special effects.
  • FIG. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the disclosure.
  • the apparatuses provided by the embodiments of the present disclosure may be configured in a client, or may be configured in a server.
  • the image processing apparatus specifically includes: an acquisition module 1110 , a fusion module 1120 and a display module 1130 .
• the acquisition module 1110 is used to acquire multiple images to be processed; the fusion module 1120 is used to, in response to acquiring a fusion instruction, perform fusion processing on the multiple images to be processed and one or more first preset images to obtain one or more fused target images, the target images including the objects to be processed corresponding to the plurality of images to be processed respectively; the display module 1130 is configured to display the one or more fused target images.
  • the acquisition module 1110 includes: a display unit, configured to display a user interface when a trigger operation for the preset identifier is detected, the user interface includes a first area and a second area, and the first area is used for Displaying a first image to be processed, the second area is used to display a second image to be processed, and the second image to be processed includes an image captured by a photographing device; a first acquisition unit, configured to acquire the first image to be processed and the second image to be processed.
• the user interface further includes one or more second preset images; the display unit is further configured to, before the first image to be processed and the second image to be processed are acquired, display a first to-be-processed image among the one or more second preset images in the first area.
• the display unit is further configured to: replace the third preset image displayed in the first area with the first to-be-processed image among the one or more second preset images.
• the acquisition module 1110 further includes: a first acquisition unit, configured to acquire a captured image of the photographing device, where the captured image includes a first object to be processed and a second object to be processed, and to obtain from the captured image the first to-be-processed image corresponding to the first to-be-processed object and the second to-be-processed image corresponding to the second to-be-processed object; the display unit is further configured to display the first to-be-processed image in the first area and the second to-be-processed image in the second area.
• the second object to be processed in the second image to be processed includes an object to be processed that satisfies a preset condition among the multiple objects to be processed.
• the first acquisition unit is configured to, if the second object to be processed moves out of the acquisition range of the photographing device, use as the new second object to be processed the object that satisfies the preset condition among the remaining objects to be processed, other than the second object to be processed, collected by the photographing device.
• the first object to be processed in the first image to be processed includes, among the plurality of objects to be processed collected by the photographing device, the remaining object to be processed other than the second object to be processed that satisfies a preset condition, and the second object to be processed includes the object to be processed in the second image to be processed.
• the display unit is further configured to: if the first object to be processed moves out of the capture range of the photographing device, restore display of the third preset image in the first area.
• the display unit is further configured to: if the second object to be processed moves out of the capture range of the photographing device, display the first image to be processed in the second area and restore display of the third preset image in the first area.
• the objects to be processed that meet the preset condition include at least one of the following: the object to be processed that first enters the acquisition range of the photographing device; the object to be processed with the largest size in the captured image of the photographing device; the object to be processed with the highest definition in the captured image of the photographing device; the object to be processed whose angle differs least from a preset angle.
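One possible scoring over these preset conditions (Python; the field names and the lexicographic priority order are assumptions for illustration — the disclosure lists the conditions but not how to combine them):

```python
def pick_replacement(objects, preset_angle=0.0):
    """Pick a candidate object: larger size first, then higher
    definition, then the angle closest to the preset angle."""
    return max(objects, key=lambda o: (o["size"], o["sharpness"],
                                       -abs(o["angle"] - preset_angle)))

candidates = [
    {"size": 100, "sharpness": 0.8, "angle": 5.0},
    {"size": 120, "sharpness": 0.6, "angle": 30.0},
]
chosen = pick_replacement(candidates)  # the larger face wins here
```

The "first enters the acquisition range" condition would instead require timestamps, which this sketch omits.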
• the fusion module 1120 includes: a second acquisition unit, configured to acquire a first object to be processed from the first image to be processed and a second object to be processed from the second image to be processed; and a fusion unit, configured to fuse the first object to be processed and the second object to be processed into one or more first preset images to obtain one or more fused target images, the target images including the fused first object to be processed and the fused second object to be processed.
• the fusion unit includes: a selection subunit, configured to select a target preset image from a plurality of first preset images, where the target preset image includes a third object to be processed and a fourth object to be processed; and a replacement subunit, configured to replace either one of the third object to be processed and the fourth object to be processed with the first object to be processed, and replace the remaining one of the two with the second object to be processed, to obtain a fused target image.
• the fusion unit includes: a display subunit, configured to display a plurality of first preset images; the replacement subunit is further configured to: in response to acquiring a selection instruction for a target preset image among the plurality of first preset images, replace either one of the third object to be processed and the fourth object to be processed in the target preset image with the first object to be processed, and replace the remaining one of the two with the second object to be processed, to obtain a fused target image.
• the replacement subunit is further configured to: for each first preset image among the plurality of first preset images, replace either one of the third object to be processed and the fourth object to be processed in that first preset image with the first object to be processed, and replace the remaining one of the two with the second object to be processed, to obtain multiple fused target images.
• the fusion module 1120 further includes: an exchange unit, configured to, after the one or more fused target images are displayed and in response to acquiring an exchange instruction, swap the first object to be processed and the second object to be processed in the fused target image.
  • the apparatus provided by the embodiment of the present disclosure can execute the steps performed by the client in the method provided by the method embodiment of the present disclosure, and the execution steps and beneficial effects are not repeated here.
• FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure; specifically, it shows an electronic device 500 suitable for implementing an embodiment of the present disclosure.
• the electronic device 500 in the embodiment of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), in-vehicle terminals (such as in-vehicle navigation terminals), and wearable electronic devices, and stationary terminals such as digital TVs, desktop computers, and smart home devices.
  • the electronic device shown in FIG. 12 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
• an electronic device 500 may include a processing device (eg, a central processing unit, a graphics processor, etc.) 501, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503, so as to implement the methods of the embodiments described in the present disclosure. The RAM 503 also stores various programs and data required for the operation of the electronic device 500.
  • the processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504.
  • An input/output (I/O) interface 505 is also connected to bus 504 .
• the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a liquid crystal display (LCD), speakers, vibrators, etc.; storage devices 508 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 509.
  • Communication means 509 may allow electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. While Figure 12 shows electronic device 500 having various means, it should be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
• embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowchart, thereby implementing the method described above.
  • the computer program may be downloaded and installed from the network via the communication device 509, or from the storage device 508, or from the ROM 502.
• when the computer program is executed by the processing apparatus 501, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples of computer readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable Programmable read only memory (EPROM or flash memory), fiber optics, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
• clients and servers can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (eg, a communication network). Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (eg, the Internet), and peer-to-peer networks (eg, ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
• the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, they cause the electronic device to: acquire a plurality of images to be processed; in response to acquiring a fusion instruction, perform fusion processing on the plurality of images to be processed and one or more first preset images to obtain one or more fused target images, the target images including the objects to be processed corresponding to the plurality of images to be processed respectively; and display the one or more fused target images.
  • the electronic device may also perform other steps described in the above embodiments.
• computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
• the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (eg, through the Internet using an Internet service provider).
• each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
• each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
• the units involved in the embodiments of the present disclosure may be implemented in software or in hardware, and the name of a unit does not, in some cases, constitute a limitation of the unit itself.
  • exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logical Devices (CPLDs) and more.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
• machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), fiber optics, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
• the present disclosure provides an image processing method, comprising: acquiring a plurality of images to be processed; in response to acquiring a fusion instruction, performing fusion processing on the plurality of images to be processed and one or more first preset images to obtain one or more fused target images, the target images including the objects to be processed corresponding to the plurality of images to be processed respectively; and displaying the one or more fused target images.
• acquiring a plurality of images to be processed includes: when a trigger operation for a preset identifier is detected, displaying a user interface, where the user interface includes a first area and a second area, the first area is used to display a first image to be processed, the second area is used to display a second image to be processed, and the second image to be processed includes an image captured by the photographing device; and acquiring the first image to be processed and the second image to be processed.
• the user interface further includes one or more second preset images; before the first image to be processed and the second image to be processed are acquired, the method further includes: displaying a first to-be-processed image among the one or more second preset images in the first area.
• the first area displays a third preset image; displaying the first to-be-processed image among the one or more second preset images in the first area includes: replacing the third preset image displayed in the first area with the first to-be-processed image among the one or more second preset images.
• before the first to-be-processed image and the second to-be-processed image are acquired, the method further includes: acquiring a captured image of a photographing device, the captured image including a first object to be processed and a second object to be processed; obtaining, from the captured image, the first to-be-processed image corresponding to the first to-be-processed object and the second to-be-processed image corresponding to the second to-be-processed object; and displaying the first to-be-processed image in the first area and the second to-be-processed image in the second area.
  • the method includes: replacing the third preset image displayed in the first area with the first image to be processed.
• the second object to be processed in the second to-be-processed image includes a to-be-processed object that satisfies a preset condition among the plurality of to-be-processed objects.
  • the method further includes: if the second object to be processed in the second image to be processed moves out of the acquisition range of the photographing device , the to-be-processed object that meets the preset condition among the remaining to-be-processed objects except the second to-be-processed object collected by the photographing device is used as the second to-be-processed object.
• the first to-be-processed object in the first to-be-processed image includes, among the plurality of to-be-processed objects collected by the photographing device, a remaining to-be-processed object other than the second to-be-processed object that satisfies the preset condition, and the second to-be-processed object includes the to-be-processed object in the second to-be-processed image.
• the method further includes: if the first object to be processed moves out of the capture range of the photographing device, restoring display of the third preset image in the first area.
• the method further includes: if the second object to be processed moves out of the capture range of the photographing device, displaying the first to-be-processed image in the second area and restoring display of the third preset image in the first area.
• the objects to be processed that meet the preset condition include at least one of the following: the object to be processed that first enters the acquisition range of the photographing device; the object to be processed with the largest size in the captured image of the photographing device; the object to be processed with the highest definition in the captured image of the photographing device; the object to be processed whose angle differs least from a preset angle.
• performing fusion processing on the plurality of images to be processed and one or more first preset images to obtain one or more fused target images includes: acquiring a first object to be processed from the first image to be processed and a second object to be processed from the second image to be processed; and fusing the first object to be processed and the second object to be processed into the one or more first preset images to obtain one or more fused target images, the target images including the fused first object to be processed and the fused second object to be processed.
• fusing the first object to be processed and the second object to be processed into one or more first preset images to obtain one or more fused target images includes: selecting a target preset image from a plurality of first preset images, where the target preset image includes a third object to be processed and a fourth object to be processed; and replacing either one of the third object to be processed and the fourth object to be processed with the first object to be processed, and replacing the remaining one of the two with the second object to be processed, to obtain a fused target image.
  • the fusing of the first object to be processed and the second object to be processed into one or more first preset images to obtain one or more fused target images includes: displaying a plurality of first preset images; and in response to acquiring a selection instruction for a target preset image among the plurality of first preset images, replacing either one of the third object to be processed and the fourth object to be processed in the target preset image with the first object to be processed, and replacing the other one with the second object to be processed, to obtain a fused target image.
  • the fusing of the first object to be processed and the second object to be processed into one or more first preset images to obtain one or more fused target images includes: for each first preset image of the plurality of first preset images, replacing either one of the third object to be processed and the fourth object to be processed in the first preset image with the first object to be processed, and replacing the other one with the second object to be processed, to obtain a plurality of fused target images.
  • the method further includes: in response to acquiring an interchange instruction, interchanging the first object to be processed and the second object to be processed in the fused target image.
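Responding to the interchange instruction amounts to swapping the two fused objects. A minimal sketch, again assuming the fused target image is a mapping with two illustrative slots (`object_3`, `object_4`):

```python
def interchange(fused):
    # Swap the two fused objects in response to an interchange instruction,
    # returning a new fused target image without mutating the input.
    out = dict(fused)
    out["object_3"], out["object_4"] = out["object_4"], out["object_3"]
    return out
```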
  • the present disclosure provides an image processing apparatus, including: an acquisition module configured to acquire a plurality of images to be processed; a fusion module configured to, in response to acquiring a fusion instruction, perform fusion processing on the plurality of images to be processed and one or more first preset images to obtain one or more fused target images, where the target images include the objects to be processed respectively corresponding to the plurality of images to be processed; and a display module configured to display the one or more fused target images.
  • the present disclosure provides an electronic device, comprising:
  • one or more processors; and
  • a memory configured to store one or more programs,
  • where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method provided in any embodiment of the present disclosure.
  • the present disclosure provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the image processing method described in any embodiment of the present disclosure.
  • Embodiments of the present disclosure further provide a computer program product, where the computer program product includes a computer program or instructions which, when executed by a processor, implement the image processing method described above.


Abstract

Embodiments of the present disclosure disclose an image processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a plurality of images to be processed; in response to acquiring a fusion instruction, performing fusion processing on the plurality of images to be processed and one or more first preset images to obtain one or more fused target images, where the target images include the objects to be processed respectively corresponding to the plurality of images to be processed; and displaying the one or more fused target images. The technical solution of the present disclosure realizes fusion processing of images to be processed, enriches image processing modes, and helps improve the user experience.

Description

Image processing method and apparatus, electronic device, and storage medium
This application claims priority to Chinese Patent Application No. 202110379814.0, titled "Image processing method and apparatus, electronic device, and storage medium", filed with the Chinese Patent Office on April 8, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of information technology, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of image processing technology, a terminal or a server can process an existing image to obtain a processed image.
However, the special-effect and/or prop gameplay involved in image processing in the prior art is relatively limited, which degrades the user experience.
Summary
To solve the above technical problem, or at least partially solve it, embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium, which realize fusion processing of images to be processed, enrich image processing modes, and help improve the user experience.
An embodiment of the present disclosure provides an image processing method, including:
acquiring a plurality of images to be processed;
in response to acquiring a fusion instruction, performing fusion processing on the plurality of images to be processed and one or more first preset images to obtain one or more fused target images, where the target images include objects to be processed respectively corresponding to the plurality of images to be processed; and
displaying the one or more fused target images.
An embodiment of the present disclosure further provides an image processing apparatus, including:
an acquisition module configured to acquire a plurality of images to be processed;
a fusion module configured to, in response to acquiring a fusion instruction, perform fusion processing on the plurality of images to be processed and one or more first preset images to obtain one or more fused target images, where the target images include objects to be processed respectively corresponding to the plurality of images to be processed; and
a display module configured to display the one or more fused target images.
An embodiment of the present disclosure further provides an electronic device, including:
one or more processors; and
a storage apparatus configured to store one or more programs,
where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method described above.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the image processing method described above.
An embodiment of the present disclosure further provides a computer program product, where the computer program product includes a computer program or instructions which, when executed by a processor, implement the image processing method described above.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have at least the following advantages:
The image processing method provided by the embodiments of the present disclosure performs fusion processing on a plurality of images to be processed and one or more first preset images to obtain one or more fused target images, where the target images include the objects to be processed respectively corresponding to the plurality of images to be processed. This realizes processing of the images to be processed, enriches image processing modes, and helps improve the user experience and the enjoyment of use.
Brief Description of the Drawings
The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference signs denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
FIG. 1 is a flowchart of an image processing method in an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a display interface in an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a first user interface in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a second user interface in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a third user interface in an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a user interface in an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a captured image in an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of another captured image in an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a fourth user interface in an embodiment of the present disclosure;
FIG. 10 is a flowchart of another image processing method in an embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram of an image processing apparatus in an embodiment of the present disclosure;
FIG. 12 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
具体实施方式
下面将参照附图更详细地描述本公开的实施例。虽然附图中显示了本公开的某些实施例,然而应当理解的是,本公开可以通过各种形式来实现,而且不应该被解释为限于这里阐述的实施例,相反提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。
应当理解,本公开的方法实施方式中记载的各个步骤可以按照不同的顺序执行,和/或并行执行。此外,方法实施方式可以包括附加的步骤和/或省略执行示出的步骤。本公开的范围在此方面不受限制。
本文使用的术语“包括”及其变形是开放性包括,即“包括但不限于”。术语“基于”是“至少部分地基于”。术语“一个实施例”表示“至少一个实施例”;术语“另一实施例”表示“至少一个另外的实施例”;术语“一些实施例”表示“至少一些实施例”。其他术语的相关定义将在下文描述中给出。
需要注意,本公开中提及的“第一”、“第二”等概念仅用于对不同的装置、模块或单元进行区分,并非用于限定这些装置、模块或单元所执行的功能的顺序或者相互依存关系。
需要注意,本公开中提及的“一个”、“多个”的修饰是示意性而非限制性的,本领域技术人员应当理解,除非在上下文另有明确指出,否则应该理解为“一个或多个”。
本公开实施方式中的多个装置之间所交互的消息或者信息的名称仅用于说明性的目的,而并不是用于对这些消息或信息的范围进行限制。
图1为本公开实施例中的一种图像处理方法的流程图,本实施例可适用于客户端中进行图像处理的情况,该方法可以由图像处理装置执行,该装置可以采用软件和/或硬件的方式实现,该装置可配置于电子设备中,例如终端,具体包括但不限于智能手机、掌上电脑、平板电脑、带显示屏的可穿戴设备、台式机、笔记本电脑、一体机、智能家居设备等。
如图1所示,该方法具体可以包括:
步骤110、获取多个待处理图像。
其中,多个的含义通常表示至少两个。在一种实施方式中,获取多个待处理图像包括:
在检测到针对预设标识的触发操作时,显示用户界面,用户界面包括第一区域和第二区域,第一区域用于显示第一待处理图像,第二区域用于显示第二待处理图像,第二待处理图像包括拍摄装置采集的图像;获取第一待处理图像和第二待处理图像。
其中,参考如图2所示的一种显示界面示意图,其中,包括预设标识210,当用户触发某个具体的预设标识210时,显示如图3所示的第一种用户界面的示意图,用户界面包括第一区域310和第二区域320,第一区域310用于显示第一待处理图像,第二区域320用于显示第二待处理图像,第二区域320可以理解为是用户界面的主屏幕,显示区域较大,第一区域310为第二区域320的部分区域,显示区域较小。不同的预设标识210可以表示不同效果的图像处理模式,即参与融合的图像对象不同,因此通过图像处理后获得的目标图像的效果不同。例如通过触发与春天相关的预设标识210获得的目标图像的背景可能是春景,通过触发与冬天相关的预设标识210获得的目标图像的背景可能是冬景,例如白雪。不同的预设标识210还可以表示不同的道具,通过提供多个不同特效的可选道具(即预设标识210),增加了道具玩法,从而提升了用户的使用体验。
在一种实施方式中,参考如图4所示的第二种用户界面的示意图,用户界面还包括一个或多个第二预设图像410;获取第一待处理图像和第二待处理图像之前,本实施例的图像处理方法还包括:将一个或多个第二预设图像中的第一待处理图像显示在第一区域310中。如图4所示,将第一待处理图像410显示在第一区域310,第一待处理图像410可以是用户从多个第二预设图像中选取的一幅图像。当检测到用户的选取指令时,将用户选取的第一待处理图像显示于第一区域310中,同时面板420从用户界面消失,即用户界面不再显示其他第二预设图像410以及预设标识210,如图5所示的第三种用户界面的示意图,用户界面仅显示有第一区域310和第二区域320。
在一种可选实施方式中,以待处理图像为人脸图像为例进行描述,第二预设图像可以是单人照,也可以是多人照。如果第二预设图像是单人照,则将第二预设图像直接显示于第一区域中,或者将第二预设图像中的人脸裁剪出来,调整大小后将该人脸图像显示于第一区域中。如果第二预设图像是多人照,则可以识别照片中人脸最大,和/或最清晰,和/ 或位置最正的人脸作为第一待处理图像显示于第一区域中。或者分别将第二预设图像中的每个人脸区域标识出来,由用户自己选择将哪个人脸区域的图像作为第一待处理图像显示于第一区域中。第一区域和第二区域在用户界面的位置还可以参考如图6所示,其中包括第一区域601和第二区域602。
在一种实施方式中,第一区域310显示有第三预设图像,当检测到用户从多个第二预设图像中选取第一待处理图像的选取指令时,利用用户选取的第一待处理图像替换第三预设图像,即将第一待处理图像显示于第一区域310中,或者换句话说,将第一区域310中的第三预设图像替换显示为一个或多个第二预设图像中的第一待处理图像。其中,第三预设图像可以是***默认的模板图像或底图。若用户不从多个第二预设图像中选取第一待处理图像,则显示于第二区域的第二待处理图像可以与第三预设图像进行融合,获得融合后的图像,即将第二待处理图像融合至第三预设图像中。还以第二待处理图像和第三预设图像均为包含人脸的人物图像为例,当第二待处理图像与第三预设图像进行融合时,可以将第二待处理图像中的人脸图像与第三预设图像进行融合,从而获得融合后的图像,结合第三预设图像中的背景图像,可以丰富图像处理模式,增强用户的使用体验以及趣味性。若检测到用户从多个第二预设图像选取第一待处理图像的指令时,将用户选取的第一待处理图像显示于第一区域中,此时显示于第二区域的第二待处理图像与显示于第一区域的第一待处理图像进行融合,获得融合后的图像。
在一种实施方式中,第一待处理图像以及第二待处理图像都是通过拍摄装置(例如终端的前置摄像头或者后置摄像头)实时拍摄的照片,则获取第一待处理图像和第二待处理图像之前,所述图像处理方法还包括:
获取拍摄装置的采集图像,采集图像中包括第一待处理对象和第二待处理对象;从采集图像中获取第一待处理对象对应的第一待处理图像和第二待处理对象对应的第二待处理图像;将第一待处理图像显示在第一区域中,将第二待处理图像显示在第二区域中。
其中,第一待处理图像中的第一待处理对象包括拍摄装置采集到的多个待处理对象中除第二待处理对象之外的剩余待处理对象中满足预设条件的待处理对象,第二待处理对象包括第二待处理图像中的待处理对象。即第一待处理对象与第二待处理对象是两个不同的对象。以人脸图像为例,第一待处理对象与第二待处理对象对应不同的人脸图像,例如第一待处理对象为用户A的人脸区域,第二待处理对象为用户B的人脸区域。
在一种实施方式中,若采集图像中包括两个人脸图像时,从采集图像中获取第一待处理对象对应的第一待处理图像和第二待处理对象对应的第二待处理图像,包括:对采集图像中的每个人脸进行识别,并将识别出的人脸用矩形框标识出来。如图7所示的一种采集图像的示意图,可将较大的矩形框610所占区域的图像确定为第二待处理图像,其中的人脸611为第二待处理对象;将较小的矩形框620所占区域的图像确定为第一待处理图像,其中的人脸621为第一待处理对象。在图7所示的采集图像中,由于人脸611在矩形框610的区域占比较大,将第二待处理图像显示在第二区域中时会给人一种人脸占满整个屏幕的感觉,显示效果不太好,因此,可以将矩形框610放大一些,如图8所示的另一种采集图像的示意图,矩形框710较矩形框620放大了一些,例如将一些背景也圈进来。或者,将采集图像中清晰度最高的人脸区域作为第二待处理图像,另一个人脸区域作为第一待处理 图像。或者,将最先进入镜头的人脸图像作为第二待处理图像,将后进入镜头的人脸图像作为第一待处理图像。或者,将每个人脸图像标识出来由用户自定义将哪个人脸图像作为第一待处理图像,将哪个人脸图像作为第二个待处理图像。例如,用户可以通过拖动,将具体人脸图像拖到第一区域或者第二区域,拖到第一区域的人脸图像即为第一待处理图像,拖到第二区域的人脸图像即为第二待处理图像。且拖动后的人脸图像可以根据所在区域的大小进行尺寸自动调整。
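The rectangle-enlargement step described above (FIG. 7 to FIG. 8: growing the detected face box so that some background is included before displaying it in the second area) could be sketched as follows; the 10% margin and the clamp-to-image behavior are illustrative assumptions:

```python
def expand_box(x, y, w, h, img_w, img_h, margin=0.1):
    """Grow a face rectangle (x, y, w, h) by `margin` on each side,
    clamped to the bounds of an img_w x img_h captured image."""
    dx, dy = int(w * margin), int(h * margin)
    nx, ny = max(0, x - dx), max(0, y - dy)          # shift top-left corner out
    nw = min(img_w, x + w + dx) - nx                 # clamp right edge
    nh = min(img_h, y + h + dy) - ny                 # clamp bottom edge
    return nx, ny, nw, nh
```

The expanded box can then be cropped from the captured image and scaled to the display area, avoiding the "face fills the whole screen" effect noted above.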
在另一种实施方式中,若采集图像中包括三个甚至三个以上的人脸图像时,从采集图像中获取第一待处理对象对应的第一待处理图像和第二待处理对象对应的第二待处理图像,包括:将基于第一个进入镜头的用户获取到的人脸图像作为第二待处理图像,将基于第二个进入镜头的用户获取到的人脸图像作为第一待处理图像。或者将清晰度最高的人脸图像作为第二待处理图像,将剩下的人脸图像中清晰度最高的作为第一待处理图像。或者按照人脸区域的大小,将较大的人脸区域作为第二待处理图像,将第二大的人脸区域作为第一待处理图像。或者将每个人脸区域标识出来,由用户自主选择将哪个人脸图像作为第一待处理图像,将哪个人脸图像作为第二个待处理图像。
其中,满足预设条件的待处理对象包括如下至少一种:最先进入所述拍摄装置的采集范围的待处理对象;所述拍摄装置的采集图像中尺寸最大的待处理对象;所述拍摄装置的采集图像中清晰度最高的待处理对象;角度与预设角度之间的差异最小的待处理对象(即,以人脸图像为例,尽量选择接近正脸的人脸图像)。
在显示用户界面时,第一区域显示第三预设图像时,将第一待处理图像显示在第一区域中,包括:将第一区域中的第三预设图像替换为第一待处理图像显示。对应的,可以参考如图9所示的第四种用户界面的示意图,该界面包括第一区域310以及第二区域320,并且,用户界面类似于视频时的聊天界面,在第一区域310显示有第一待处理图像(例如,人脸图像),第二区域320显示有第二待处理图像(例如,人脸图像),第一待处理图像和第二待处理图像均为通过摄像头实时拍摄的图像,其中标号810表示拍摄按钮,通过触发拍摄按钮810可退出拍摄界面。若第一待处理对象(例如,人脸区域)移动到拍摄装置的采集范围之外,即第一待处理对象离开镜头,则在第一区域310中恢复显示第三预设图像,即将***默认的模板图像或者底图显示于第一区域310。若第二待处理对象(例如,人脸区域)移动到拍摄装置的采集范围之外,则将第一待处理图像显示在第二区域中,并在第一区域中恢复显示第三预设图像。即第二待处理对象离开镜头,则将第一待处理对象挪至第二区域进行显示,在第一区域中恢复显示***默认的模板图像或者底图。在通过拍摄装置采集图像的过程中,可通过语音的形式或者文字形式提示用户正对摄像头,以拍摄到用户的正脸图像,若一直未拍摄到正脸图像,则可以借助一定的图像处理算法将稍微倾斜的脸部图像调整为正脸图像。
在一种实施方式中,第二待处理图像是实时拍摄的图像,并且在获取第二待处理图像时,有多个用户入镜,即多个用户处于摄像头的拍摄范围之内的场景下,若多个待处理对象进入拍摄装置的采集范围(例如摄像头的拍摄范围),则第二待处理图像中的多个第二待处理对象包括满足预设条件的待处理对象。其中,满足预设条件的待处理对象包括如下至 少一种:最先进入拍摄装置的采集范围的待处理对象;拍摄装置的采集图像中尺寸最大的待处理对象;拍摄装置的采集图像中清晰度最高的待处理对象;角度与预设角度之间的差异最小的待处理对象(即尽量选择接近正脸的图像)。也就是将最先进入镜头的用户图像,或者最大的用户图像,或者最清晰的用户图像,或者人脸最正的用户图像在第二区域中进行显示。
在一种实施方式中,若第一区域显示的第一待处理图像包括预置图像,且有多个用户入镜的场景下,若多个待处理对象进入拍摄装置的采集范围,则第二待处理图像中的多个第二待处理对象包括满足预设条件的待处理对象。满足预设条件的待处理对象包括如下至少一种:最先进入拍摄装置的采集范围的待处理对象;拍摄装置的采集图像中尺寸最大的待处理对象;拍摄装置的采集图像中清晰度最高的待处理对象;角度与预设角度之间的差异最小的待处理对象(即尽量选择接近正脸的图像)。也就是将最先进入镜头的用户图像,或者最大的用户图像,或者最清晰的用户图像,或者人脸最正的用户图像在第二区域中进行显示。
步骤120、响应于获取融合指令,将所述多个待处理图像、以及一个或多个第一预设图像进行融合处理,得到一个或多个融合后的目标图像,所述目标图像包括所述多个待处理图像分别对应的待处理对象。
其中,融合指令可以是用户触发预设融合图标时产生的联动指令,也可以是触摸显示屏空白位置时触发的指令。具体的,若待处理图像为一幅包括一个人物的图像,第一预设图像也可以为一幅包括一个人物的一幅图像。将待处理图像以及第一预设图像进行融合处理可以包括将待处理图像中的人脸替换至第一预设图像中的人脸位置获得目标图像,此时待处理图像中的人脸与目标图像中人脸为相同的人脸;也可以是将待处理图像中的人脸替换至第一预设图像中的人脸位置获得目标图像,并按照第一预设图像的特效对该人脸进行修饰、美化,此时目标图像中的人脸为对待处理图像中的人脸进行美化之后的人脸;还可以包括将待处理图像中的人脸与第一预设图像中的人脸进行融合处理,此时待处理图像中的人脸和目标图像中的人脸为不同的人脸。通过上述图像处理提供的特效玩法,丰富了图像处理模式,增强了用户的使用体验以及趣味性。若待处理图像为两幅,分别为第一待处理图像和第二待处理图像,第一待处理图像和第二待处理图像均为包括一个人物的图像,第一预设图像为包括两个人物的一幅图像,则将待处理图像以及第一预设图像进行融合处理具体为:将第一待处理图像中的人脸替换至第一预设图像中一个人物的人脸位置,将第二待处理图像中的人脸替换至第一预设图像中另一个人物的人脸位置,获得由第一待处理图像中的人脸、第二待处理图像中的人脸以及第一预设图像中两个人物的身体以及背景组成的目标图像,如此可以延用第一预设图像中人物的衣着、打扮以及姿势等,扩展了用户拍照所摆姿势以及扩展用户在拍照时候的衣着、打扮,提升用户的玩法体验。同样的,若第一预设图像为多幅,则还可以将待处理图像中的人脸依次替换到每幅第一预设图像中人脸的位置,得到对应的多幅目标图像,从而提升用户的玩法体验。
步骤130、显示所述一个或多个融合后的目标图像。
可选的,在获得目标图像之后可以基于目标图像生成带动效的小视频,例如一个或多个融合后的目标图像的镜头由近及远依次播放(类似于幻灯片的自动播放效果),并显示诸 如闪亮小星星的动效,增强了图像融合的处理效果,提升了用户的使用体验。
本实施例提供的图像处理方法,通过将多个待处理图像,以及一个或多个第一预设图像进行融合处理,得到一个或多个融合后的目标图像,该目标图像包括所述多个待处理图像分别对应的待处理对象,实现了对待处理图像的处理,丰富了图像处理模式,有利于提升用户使用体验以及用户使用过程中的趣味性。
图10为本公开实施例中的另一种图像处理方法的流程图。本实施例在上述实施例的基础上,进一步对步骤120“响应于获取融合指令,将所述多个待处理图像、以及一个或多个第一预设图像进行融合处理,得到一个或多个融合后的目标图像,所述目标图像包括所述多个待处理图像分别对应的待处理对象”给出了可选实施方式。其中,与上述实施例中相同或相似的内容本实施例不再展开解释,相关内容可参考上述实施例。
如图10所示,所述图像处理方法包括如下步骤:
步骤1010、在检测到针对预设标识的触发操作时,显示用户界面,所述用户界面包括第一区域和第二区域,所述第一区域用于显示第一待处理图像,所述第二区域用于显示第二待处理图像,所述第二待处理图像包括拍摄装置采集的图像。
步骤1020、获取所述第一待处理图像和所述第二待处理图像。
步骤1030、从所述第一待处理图像中获取第一待处理对象,以及从所述第二待处理图像中获取第二待处理对象。
在一种实施方式中,从第一待处理图像中获取第一待处理对象,包括:在第一待处理图像包括多个待处理对象的情况下,获取第一待处理图像中满足预设条件的第一待处理对象。例如第一待处理图像为多人照,可以识别其中人脸图像区域最大,和/或最清晰,和/或最正的人脸作为第一待处理对象。
在另一种实施方式中,从第一待处理图像中获取第一待处理对象,包括:在第一待处理图像包括多个待处理对象的情况下,对第一待处理图像中的多个待处理对象分别进行标识;显示多个待处理对象中每个待处理对象的标识信息;响应于获取针对多个待处理对象中第一待处理对象的标识信息的选择指令,获取第一待处理图像中的第一待处理对象。例如第一待处理图像为多人照,可以分别将其中每个人脸图像区域圈出来,由用户自己选择将哪个人脸图像区域作为第一待处理对象。
在一种实施方式中,还可以直接获取第一待处理对象和第二待处理对象,而不需要先获取第一待处理图像以及第二待处理图像,再从第一待处理图像中获取第一待处理对象,从第二待处理图像中获取第二待处理对象。例如,如图7所示,通过拍摄装置采集到一张图像,其中包括第一待处理对象和第二待处理对象,此时可基于该大图直接识别其中的第一待处理对象和第二待处理对象,而不需要优先识别包括第一待处理对象的矩形图像作为第一待处理图像、包括第二待处理对象的矩形图像作为第二待处理图像、然后再基于第一待处理图像识别第一待处理对象、基于第二待处理图像识别第二待处理对象。
步骤1040、响应于获取融合指令,将所述第一待处理对象和所述第二待处理对象融合到一个或多个第一预设图像中,得到一个或多个融合后的目标图像,所述目标图像包括融合后的所述第一待处理对象和融合后的所述第二待处理对象。
具体的,融合指令可以通过预设融合图标、按钮或者控件触发,也可以直接触摸屏幕的空白区域触发。在一种实施方式中,将第一待处理对象和第二待处理对象融合到一个或多个第一预设图像中,得到一个或多个融合后的目标图像,包括:从多个第一预设图像中选取目标预设图像,目标预设图像包括第三待处理对象和第四待处理对象;将第三待处理对象和第四待处理对象中的任意一个替换为第一待处理对象,以及将第三待处理对象和第四待处理对象中除所述任意一个之外的另一个替换为第二待处理对象,得到融合后的目标图像。例如,目标预设图像为包括双人的图像,其中包括M(M可以看作是第三待处理对象)和N(N可以看作是第四待处理对象),第一待处理对象为用户A的人脸图像,第二待处理对象为用户B的人脸图像,则图像融合的过程具体可以是将目标预设图像中N的人脸与用户A的人脸进行融合处理,M的人脸可以与用户B的人脸进行融合处理,得到目标图像,也可以是N的人脸与用户B的人脸进行融合处理,M的人脸可以与用户A的人脸进行融合处理,得到目标图像。如此可以延用目标预设图像中人物的衣着、打扮以及姿势等,扩展了用户拍照所摆姿势以及扩展用户在拍照时候的衣着、打扮,提升用户的玩法体验,提高图像处理效果以及用户使用该图像处理功能时的趣味性。以将N的人脸与用户A的人脸进行融合处理为例说明融合处理的具体过程:在一种实施方式中,将N的人脸直接替换为用户A的人脸获得目标图像,此时第四待处理对象(即N的人脸)与第一待处理对象(即用户A的人脸)相同。在另一种实施方式中,可以将N的人脸替换为用户A的人脸,并基于N的特效对用户A的人脸进行修饰、美化从而获得目标图像,例如增加诸如小星星的亮闪特效等。在又一种实施方式中,可以将N的人脸与用户A的人脸进行融合,从而获得目标图像,此时第四待处理对象(即N的人脸)与第一待处理对象(即用户A的人脸)不同。通过上述图像处理提供的特效玩法,丰富了图像处理模式,提升了用户的玩法体验。
为了丰富图像处理后获得的目标图像的样式,进一步提升用户的玩法体验,多个第一预设图像的样式包括很多种,例如各种风格的图像等。在进行图像融合时,终端从多个第一预设图像中选取一个目标预设图像,基于该目标预设图像进行融合处理。当终端下次进行融合处理时,选取的目标预设图像和上次融合处理时选取的目标预设图像不同,以使用户可以体验到各种样式风格的图像效果。例如,终端第一次进行融合处理时,选择第一个预设图像,第二次融合处理时,选择第二个预设图像,以此类推,使得每次融合时选择的预设图像均不同,从而使得每次融合后得到的目标图像不同,从而提升用户的体验。
在一种实施方式中,将第一待处理对象和第二待处理对象融合到一个或多个第一预设图像中,得到一个或多个融合后的目标图像,包括:显示多个第一预设图像;响应于获取针对多个第一预设图像中目标预设图像的选择指令,将目标预设图像中第三待处理对象和第四待处理对象中的任意一个替换为第一待处理对象,以及将第三待处理对象和第四待处理对象中除所述任意一个之外的另一个替换为第二待处理对象,得到融合后的目标图像。具体的,终端将多个第一预设图像进行显示,由用户自主选择使用哪个第一预设图像进行图像融合,基于用户选择的目标预设图像进行图像融合。通过支持用户自主选择目标预设图像,可实现个性化的图像融合处理,有利于提升用户的使用体验。
在一种实施方式中,将第一待处理对象和第二待处理对象融合到一个或多个第一预设图像中,得到一个或多个融合后的目标图像,包括:针对多个第一预设图像中的每个第一 预设图像,将第一预设图像中第三待处理对象和第四待处理对象中的任意一个替换为第一待处理对象,以及将第三待处理对象和第四待处理对象中除所述任意一个之外的另一个替换为第二待处理对象,得到多个融合后的目标图像,并对多个融合后的目标图像进行显示。例如,在同一用户界面中显示多个融合后的目标图像,用户可以选择其中一个目标图像,用户选择具体目标图像后,对用户选择的目标图像进行放大显示,其他目标图像消失,进一步的,用户滑动屏幕,可以查看下一张放大后的目标图像,再滑动屏幕,可以查看再下一张放大后的目标图像。或者,终端可以自动循环显示多个融合后的目标图像,即多个目标图像依次在终端屏幕一张一张地自动显示,以提升交互界面的友好性,提升用户的使用体验。
在一种实施方式中,显示一个或多个融合后的目标图像之后,所述图像处理方法还包括:响应于获取互换指令,将融合后的目标图像中的第一待处理对象和第二待处理对象进行互换。例如,由于图像融合过程中是随机将第一待处理对象以及第二待处理对象替换第三待处理对象或者第四待处理对象的,因此获得的目标图像未必是用户满意的图像,例如目标图像为包括双人的图像,其中人物M的人脸对应用户B的人脸,人物N的人脸对应用户A的人脸,而用户A或者B对当前的目标图像不满意后者想尝试其他的融合效果,此时用户可以触发互换指令,终端接收到互换指令时,将人物M的人脸与用户B的人脸进行融合处理,将人物N的人脸与用户B的人脸进行融合处理,以增强使用过程中的趣味性并提供给用户满意的效果。
可以理解的是,若第一区域中显示的第一待处理图像还可以是***默认的模板图像,则还可以将第二区域中的显示的第二待处理图像与模板图像进行融合;第一区域中显示的第一待处理图像如果是用户自己的图像,第二区域中显示的第二待处理图像是通过拍摄装置采集的用户自己的图像,此时则可以是用户自己与自己的图像融合。
步骤1050、显示一个或多个融合后的目标图像。
可以理解的是,在显示目标图像之后,还可以响应于获取分享指令,将目标图像分享给好友、论坛、微博或朋友圈等其他平台。
本公开实施例提供的图像触发方法,针对融合处理给出了可选实施方式,融合处理的实质为换脸处理,即将第一预设图像中的人脸与待处理图像中的人脸进行融合处理,以延用第一预设图像中人物的衣着、打扮以及姿势等,扩展了用户拍照所摆姿势以及扩展用户在拍照时候的衣着、打扮,提升用户的玩法体验,提高图像处理效果以及用户使用该图像处理功能时的趣味性,提升用户使用特效的玩法乐趣。
图11为本公开实施例中的一种图像处理装置的结构示意图。本公开实施例所提供的装置可以配置于客户端中,或者可以配置于服务端中。如图11所示,该图像处理装置具体包括:获取模块1110、融合模块1120和显示模块1130。其中,获取模块1110,用于获取多个待处理图像;融合模块1120,用于响应于获取融合指令,将所述多个待处理图像、以及一个或多个第一预设图像进行融合处理,得到一个或多个融合后的目标图像,所述目标图像包括所述多个待处理图像分别对应的待处理对象;显示模块1130,用于显示所述一个或多个融合后的目标图像。
可选的,获取模块1110包括:显示单元,用于在检测到针对预设标识的触发操作时,显示用户界面,所述用户界面包括第一区域和第二区域,所述第一区域用于显示第一待处理图像,所述第二区域用于显示第二待处理图像,所述第二待处理图像包括拍摄装置采集的图像;第一获取单元,用于获取所述第一待处理图像和所述第二待处理图像。
可选的,所述用户界面还包括一个或多个第二预设图像;所述显示单元还用于在获取所述第一待处理图像和所述第二待处理图像之前,将所述一个或多个第二预设图像中的第一待处理图像显示在所述第一区域中。
可选的,在显示所述用户界面时,所述第一区域显示第三预设图像;对应的,所述显示单元还用于:将所述第一区域中显示的所述第三预设图像替换为所述一个或多个第二预设图像中的第一待处理图像。
可选的,获取模块1110还包括:第一获取单元,用于获取拍摄装置的采集图像,所述采集图像中包括第一待处理对象和第二待处理对象;从所述采集图像中获取所述第一待处理对象对应的第一待处理图像和所述第二待处理对象对应的第二待处理图像,对应的所述显示单元还用于将所述第一待处理图像显示在所述第一区域中,将所述第二待处理图像显示在所述第二区域中。
可选的,若多个待处理对象进入所述拍摄装置的采集范围,则所述第二待处理图像中的第二待处理对象包括所述多个待处理对象中满足预设条件的待处理对象。
可选的,若所述第二待处理图像中的第二待处理对象移动到所述拍摄装置的采集范围之外,所述第一获取单元用于将所述拍摄装置采集到的多个待处理对象中除所述第二待处理对象之外的剩余待处理对象中满足预设条件的待处理对象作为所述第二待处理对象。
可选的,所述第一待处理图像中的第一待处理对象包括所述拍摄装置采集到的多个待处理对象中除第二待处理对象之外的剩余待处理对象中满足预设条件的待处理对象,所述第二待处理对象包括所述第二待处理图像中的待处理对象。
可选的,若所述第一待处理对象移动到所述拍摄装置的采集范围之外,所述显示单元还用于:在所述第一区域中恢复显示所述第三预设图像。
可选的,若所述第二待处理对象移动到所述拍摄装置的采集范围之外,所述显示单元还用于:将所述第一待处理图像显示在所述第二区域中,在所述第一区域中恢复显示所述第三预设图像。
可选的,满足预设条件的待处理对象包括如下至少一种:
最先进入所述拍摄装置的采集范围的待处理对象;
所述拍摄装置的采集图像中尺寸最大的待处理对象;
所述拍摄装置的采集图像中清晰度最高的待处理对象;
角度与预设角度之间的差异最小的待处理对象。
可选的,融合模块1120包括:第二获取单元,用于从所述第一待处理图像中获取第一待处理对象,以及从所述第二待处理图像中获取第二待处理对象;融合单元,用于将所述第一待处理对象和所述第二待处理对象融合到一个或多个第一预设图像中,得到一个或多个融合后的目标图像,所述目标图像包括融合后的所述第一待处理对象和融合后的所述第二待处理对象。
可选的,所述融合单元包括:选取子单元,用于从多个第一预设图像中选取目标预设图像,所述目标预设图像包括第三待处理对象和第四待处理对象;替换子单元,用于将所述第三待处理对象和所述第四待处理对象中的任意一个替换为所述第一待处理对象,以及将所述第三待处理对象和所述第四待处理对象中除所述任意一个之外的另一个替换为所述第二待处理对象,得到融合后的目标图像。
可选的,所述融合单元包括:显示子单元,用于显示多个第一预设图像;所述替换子单元还用于:响应于获取针对所述多个第一预设图像中目标预设图像的选择指令,将所述目标预设图像中第三待处理对象和第四待处理对象中的任意一个替换为所述第一待处理对象,以及将所述第三待处理对象和所述第四待处理对象中除所述任意一个之外的另一个替换为所述第二待处理对象,得到融合后的目标图像。
可选的,所述替换子单元还用于:针对多个第一预设图像中的每个第一预设图像,将所述第一预设图像中第三待处理对象和第四待处理对象中的任意一个替换为所述第一待处理对象,以及将所述第三待处理对象和所述第四待处理对象中除所述任意一个之外的另一个替换为所述第二待处理对象,得到多个融合后的目标图像。
可选的,融合模块1120还包括:交换单元,用于在显示所述一个或多个融合后的目标图像之后,响应于获取互换指令,将所述融合后的目标图像中的所述第一待处理对象和所述第二待处理对象进行互换。
本公开实施例提供的装置,可执行本公开方法实施例所提供的方法中客户端所执行的步骤,具备执行步骤和有益效果此处不再赘述。
图12为本公开实施例中的一种电子设备的结构示意图。下面具体参考图12,其示出了适于用来实现本公开实施例中的电子设备500的结构示意图。本公开实施例中的电子设备500可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、车载终端(例如车载导航终端)、可穿戴电子设备等等的移动终端以及诸如数字TV、台式计算机、智能家居设备等等的固定终端。图12示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图12所示,电子设备500可以包括处理装置(例如中央处理器、图形处理器等)501,其可以根据存储在只读存储器(ROM)502中的程序或者从存储装置508加载到随机访问存储器(RAM)503中的程序而执行各种适当的动作和处理以实现如本公开所述的实施例的…..方法。在RAM 503中,还存储有电子设备500操作所需的各种程序和数据。处理装置501、ROM 502以及RAM 503通过总线504彼此相连。输入/输出(I/O)接口505也连接至总线504。
通常,以下装置可以连接至I/O接口505:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置506;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置507;包括例如磁带、硬盘等的存储装置508;以及通信装置509。通信装置509可以允许电子设备500与其他设备进行无线或有线通信以交换数据。虽然图12示出了具有各种装置的电子设备500,但是应理解的是,并不要求实施或具备所有示出 的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码,从而实现如上所述的方法。在这样的实施例中,该计算机程序可以通过通信装置509从网络上被下载和安装,或者从存储装置508被安装,或者从ROM 502被安装。在该计算机程序被处理装置501执行时,执行本公开实施例的方法中限定的上述功能。
需要说明的是,本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的***、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行***、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行***、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(射频)等等,或者上述的任意合适的组合。
在一些实施方式中，客户端、服务器可以利用诸如HTTP（HyperText Transfer Protocol，超文本传输协议）之类的任何当前已知或未来研发的网络协议进行通信，并且可以与任意形式或介质的数字数据通信（例如，通信网络）互连。通信网络的示例包括局域网（“LAN”），广域网（“WAN”），网际网（例如，互联网）以及端对端网络（例如，ad hoc端对端网络），以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:获取多个待处理图像;
响应于获取融合指令,将所述多个待处理图像、以及一个或多个第一预设图像进行融合处理,得到一个或多个融合后的目标图像,所述目标图像包括所述多个待处理图像分别对应的待处理对象;
显示所述一个或多个融合后的目标图像。
可选的,当上述一个或者多个程序被该电子设备执行时,该电子设备还可以执行上述实施例所述的其他步骤。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序 代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的***、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的***来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,单元的名称在某种情况下并不构成对该单元本身的限定。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、片上***(SOC)、复杂可编程逻辑设备(CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行***、装置或设备使用或与指令执行***、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体***、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
根据本公开的一个或多个实施例,本公开提供了一种图像处理方法,包括:获取多个待处理图像;响应于获取融合指令,将所述多个待处理图像、以及一个或多个第一预设图像进行融合处理,得到一个或多个融合后的目标图像,所述目标图像包括所述多个待处理图像分别对应的待处理对象;显示所述一个或多个融合后的目标图像。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,获取多个待处理图像,包括:在检测到针对预设标识的触发操作时,显示用户界面,所述用户界面包括第一区域和第二区域,所述第一区域用于显示第一待处理图像,所述第二区域用于显示第二 待处理图像,所述第二待处理图像包括拍摄装置采集的图像;获取所述第一待处理图像和所述第二待处理图像。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,所述用户界面还包括一个或多个第二预设图像;获取所述第一待处理图像和所述第二待处理图像之前,所述方法还包括:将所述一个或多个第二预设图像中的第一待处理图像显示在所述第一区域中。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,在显示所述用户界面时,所述第一区域显示第三预设图像;将所述一个或多个第二预设图像中的第一待处理图像显示在所述第一区域中,包括:将所述第一区域中显示的所述第三预设图像替换为所述一个或多个第二预设图像中的第一待处理图像。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,获取所述第一待处理图像和所述第二待处理图像之前,所述方法还包括:获取拍摄装置的采集图像,所述采集图像中包括第一待处理对象和第二待处理对象;从所述采集图像中获取所述第一待处理对象对应的第一待处理图像和所述第二待处理对象对应的第二待处理图像;将所述第一待处理图像显示在所述第一区域中,将所述第二待处理图像显示在所述第二区域中。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,在显示所述用户界面时,所述第一区域显示第三预设图像;将所述第一待处理图像显示在所述第一区域中,包括:将所述第一区域中显示的所述第三预设图像替换为所述第一待处理图像。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,若多个待处理对象进入所述拍摄装置的采集范围,则所述第二待处理图像中的第二待处理对象包括所述多个待处理对象中满足预设条件的待处理对象。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,还包括:若所述第二待处理图像中的第二待处理对象移动到所述拍摄装置的采集范围之外,则将所述拍摄装置采集到的多个待处理对象中除所述第二待处理对象之外的剩余待处理对象中满足预设条件的待处理对象作为所述第二待处理对象。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,所述第一待处理图像中的第一待处理对象包括所述拍摄装置采集到的多个待处理对象中除第二待处理对象之外的剩余待处理对象中满足预设条件的待处理对象,所述第二待处理对象包括所述第二待处理图像中的待处理对象。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,还包括:若所述第一待处理对象移动到所述拍摄装置的采集范围之外,则在所述第一区域中恢复显示所述第三预设图像。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,还包括:若所述第二待处理对象移动到所述拍摄装置的采集范围之外,则将所述第一待处理图像显示在所述第二区域中,在所述第一区域中恢复显示所述第三预设图像。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,满足预设条件的待处理对象包括如下至少一种:最先进入所述拍摄装置的采集范围的待处理对象;所述拍摄装置的采集图像中尺寸最大的待处理对象;所述拍摄装置的采集图像中清晰度最高的待 处理对象;角度与预设角度之间的差异最小的待处理对象。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,将所述多个待处理图像、以及一个或多个第一预设图像进行融合处理,得到一个或多个融合后的目标图像,包括:从所述第一待处理图像中获取第一待处理对象,以及从所述第二待处理图像中获取第二待处理对象;将所述第一待处理对象和所述第二待处理对象融合到一个或多个第一预设图像中,得到一个或多个融合后的目标图像,所述目标图像包括融合后的所述第一待处理对象和融合后的所述第二待处理对象。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,将所述第一待处理对象和所述第二待处理对象融合到一个或多个第一预设图像中,得到一个或多个融合后的目标图像,包括:从多个第一预设图像中选取目标预设图像,所述目标预设图像包括第三待处理对象和第四待处理对象;将所述第三待处理对象和所述第四待处理对象中的任意一个替换为所述第一待处理对象,以及将所述第三待处理对象和所述第四待处理对象中除所述任意一个之外的另一个替换为所述第二待处理对象,得到融合后的目标图像。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,将所述第一待处理对象和所述第二待处理对象融合到一个或多个第一预设图像中,得到一个或多个融合后的目标图像,包括:显示多个第一预设图像;响应于获取针对所述多个第一预设图像中目标预设图像的选择指令,将所述目标预设图像中第三待处理对象和第四待处理对象中的任意一个替换为所述第一待处理对象,以及将所述第三待处理对象和所述第四待处理对象中除所述任意一个之外的另一个替换为所述第二待处理对象,得到融合后的目标图像。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,将所述第一待处理对象和所述第二待处理对象融合到一个或多个第一预设图像中,得到一个或多个融合后的目标图像,包括:针对多个第一预设图像中的每个第一预设图像,将所述第一预设图像中第三待处理对象和第四待处理对象中的任意一个替换为所述第一待处理对象,以及将所述第三待处理对象和所述第四待处理对象中除所述任意一个之外的另一个替换为所述第二待处理对象,得到多个融合后的目标图像。
根据本公开的一个或多个实施例,在本公开提供的图像处理方法中,显示所述一个或多个融合后的目标图像之后,所述方法还包括:响应于获取互换指令,将所述融合后的目标图像中的所述第一待处理对象和所述第二待处理对象进行互换。
根据本公开的一个或多个实施例,本公开提供了一种图像处理装置,包括:获取模块,用于获取多个待处理图像;融合模块,用于响应于获取融合指令,将所述多个待处理图像、以及一个或多个第一预设图像进行融合处理,得到一个或多个融合后的目标图像,所述目标图像包括所述多个待处理图像分别对应的待处理对象;显示模块,用于显示所述一个或多个融合后的目标图像。
根据本公开的一个或多个实施例,本公开提供了一种电子设备,包括:
一个或多个处理器;
存储器,用于存储一个或多个程序;
当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如本公开提供的任一所述的图像处理方法。
根据本公开的一个或多个实施例,本公开提供了一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现如本公开提供的任一所述的图像处理方法。
本公开实施例还提供了一种计算机程序产品,该计算机程序产品包括计算机程序或指令,该计算机程序或指令被处理器执行时实现如上所述的图像处理方法。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。

Claims (20)

  1. An image processing method, comprising:
    acquiring a plurality of images to be processed;
    in response to acquiring a fusion instruction, performing fusion processing on the plurality of images to be processed and one or more first preset images to obtain one or more fused target images, wherein the target images comprise objects to be processed respectively corresponding to the plurality of images to be processed; and
    displaying the one or more fused target images.
  2. The method according to claim 1, wherein the plurality of images to be processed comprise a first image to be processed and a second image to be processed, and before the acquiring of the plurality of images to be processed, the method further comprises:
    upon detecting a trigger operation on a preset identifier, displaying a user interface, wherein the user interface comprises a first area and a second area, the first area is used for displaying the first image to be processed, the second area is used for displaying the second image to be processed, and the second image to be processed comprises an image captured by a photographing device.
  3. The method according to claim 2, wherein the user interface further displays one or more second preset images; and
    before acquiring the first image to be processed and the second image to be processed, the method further comprises:
    displaying a first image to be processed among the one or more second preset images in the first area.
  4. The method according to claim 3, wherein when the user interface is displayed, the first area displays a third preset image; and
    the displaying of the first image to be processed among the one or more second preset images in the first area comprises:
    replacing the third preset image displayed in the first area with the first image to be processed among the one or more second preset images.
  5. The method according to claim 2, wherein before acquiring the first image to be processed and the second image to be processed, the method further comprises:
    acquiring a captured image of the photographing device, wherein the captured image comprises a first object to be processed and a second object to be processed;
    acquiring, from the captured image, a first image to be processed corresponding to the first object to be processed and a second image to be processed corresponding to the second object to be processed; and
    displaying the first image to be processed in the first area, and displaying the second image to be processed in the second area.
  6. The method according to claim 5, wherein when the user interface is displayed, the first area displays a third preset image; and
    the displaying of the first image to be processed in the first area comprises:
    replacing the third preset image displayed in the first area with the first image to be processed.
  7. The method according to claim 2, wherein if a plurality of objects to be processed enter a capture range of the photographing device, a second object to be processed in the second image to be processed comprises an object, among the plurality of objects to be processed, that meets a preset condition.
  8. The method according to claim 2, further comprising:
    if a second object to be processed in the second image to be processed moves out of the capture range of the photographing device, taking, as the second object to be processed, an object that meets a preset condition among the remaining objects, other than the second object to be processed, of the plurality of objects to be processed captured by the photographing device.
  9. The method according to claim 5, wherein the first object to be processed in the first image to be processed comprises an object that meets a preset condition among the remaining objects, other than the second object to be processed, of the plurality of objects to be processed captured by the photographing device, and the second object to be processed comprises the object to be processed in the second image to be processed.
  10. The method according to claim 6, further comprising:
    if the first object to be processed moves out of the capture range of the photographing device, resuming display of the third preset image in the first area.
  11. The method according to claim 6, further comprising:
    if the second object to be processed moves out of the capture range of the photographing device, displaying the first image to be processed in the second area, and resuming display of the third preset image in the first area.
  12. The method according to any one of claims 7 to 9, wherein the object to be processed that meets the preset condition comprises at least one of the following:
    the object to be processed that first enters the capture range of the photographing device;
    the object to be processed with the largest size in the captured image of the photographing device;
    the object to be processed with the highest definition in the captured image of the photographing device; and
    the object to be processed whose angle differs least from a preset angle.
  13. The method according to any one of claims 2 to 11, wherein the performing of fusion processing on the plurality of images to be processed and the one or more first preset images to obtain one or more fused target images comprises:
    acquiring a first object to be processed from the first image to be processed, and acquiring a second object to be processed from the second image to be processed; and
    fusing the first object to be processed and the second object to be processed into the one or more first preset images to obtain the one or more fused target images, wherein the target images comprise the fused first object to be processed and the fused second object to be processed.
  14. The method according to claim 13, wherein the fusing of the first object to be processed and the second object to be processed into the one or more first preset images to obtain the one or more fused target images comprises:
    selecting a target preset image from a plurality of first preset images, wherein the target preset image comprises a third object to be processed and a fourth object to be processed; and
    replacing either one of the third object to be processed and the fourth object to be processed with the first object to be processed, and replacing the other one of the third object to be processed and the fourth object to be processed with the second object to be processed, to obtain a fused target image.
  15. The method according to claim 13, wherein the fusing of the first object to be processed and the second object to be processed into the one or more first preset images to obtain the one or more fused target images comprises:
    displaying a plurality of first preset images; and
    in response to acquiring a selection instruction for a target preset image among the plurality of first preset images, replacing either one of a third object to be processed and a fourth object to be processed in the target preset image with the first object to be processed, and replacing the other one of the third object to be processed and the fourth object to be processed with the second object to be processed, to obtain a fused target image.
  16. The method according to claim 13, wherein the fusing of the first object to be processed and the second object to be processed into one or more first preset images to obtain one or more fused target images comprises:
    for each first preset image of a plurality of first preset images, replacing either one of a third object to be processed and a fourth object to be processed in the first preset image with the first object to be processed, and replacing the other one of the third object to be processed and the fourth object to be processed with the second object to be processed, to obtain a plurality of fused target images.
  17. The method according to any one of claims 14 to 16, wherein after the displaying of the one or more fused target images, the method further comprises:
    in response to acquiring an interchange instruction, interchanging the first object to be processed and the second object to be processed in the fused target image.
  18. An image processing apparatus, comprising:
    an acquisition module configured to acquire a plurality of images to be processed;
    a fusion module configured to, in response to acquiring a fusion instruction, perform fusion processing on the plurality of images to be processed and one or more first preset images to obtain one or more fused target images, wherein the target images comprise objects to be processed respectively corresponding to the plurality of images to be processed; and
    a display module configured to display the one or more fused target images.
  19. An electronic device, comprising:
    one or more processors; and
    a storage apparatus configured to store one or more programs,
    wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 17.
  20. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 17.
PCT/CN2022/081938 2021-04-08 2022-03-21 图像处理方法、装置、电子设备和存储介质 WO2022213798A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/551,684 US20240177272A1 (en) 2021-04-08 2022-03-21 Image processing method and apparatus, and electronic device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110379814.0 2021-04-08
CN202110379814.0A CN115205169A (zh) 2021-04-08 2021-04-08 图像处理方法、装置、电子设备和存储介质

Publications (1)

Publication Number Publication Date
WO2022213798A1 true WO2022213798A1 (zh) 2022-10-13

Family

ID=83545146

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/081938 WO2022213798A1 (zh) 2021-04-08 2022-03-21 图像处理方法、装置、电子设备和存储介质

Country Status (3)

Country Link
US (1) US20240177272A1 (zh)
CN (1) CN115205169A (zh)
WO (1) WO2022213798A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004288222A (ja) * 2004-07-13 2004-10-14 Nec Corp 画像照合装置及びその画像照合方法並びにその制御プログラムを記録した記録媒体
CN108921795A (zh) * 2018-06-04 2018-11-30 腾讯科技(深圳)有限公司 一种图像融合方法、装置及存储介质
CN109801249A (zh) * 2018-12-27 2019-05-24 深圳豪客互联网有限公司 图像融合方法、装置、计算机设备和存储介质
CN110992256A (zh) * 2019-12-17 2020-04-10 腾讯科技(深圳)有限公司 一种图像处理方法、装置、设备及存储介质
CN112488085A (zh) * 2020-12-28 2021-03-12 深圳市慧鲤科技有限公司 人脸融合方法、装置、设备及存储介质


Also Published As

Publication number Publication date
US20240177272A1 (en) 2024-05-30
CN115205169A (zh) 2022-10-18

Similar Documents

Publication Publication Date Title
WO2023051185A1 (zh) 图像处理方法、装置、电子设备及存储介质
TWI706379B (zh) 圖像處理方法及裝置、電子設備和儲存介質
CN112199016B (zh) 图像处理方法、装置、电子设备及计算机可读存储介质
WO2021218325A1 (zh) 视频处理方法、装置、计算机可读介质和电子设备
WO2022100735A1 (zh) 视频处理方法、装置、电子设备及存储介质
WO2022105846A1 (zh) 虚拟对象显示方法及装置、电子设备、介质
CN111669502B (zh) 目标对象显示方法、装置及电子设备
WO2022171024A1 (zh) 图像显示方法、装置、设备及介质
CN111629151B (zh) 视频合拍方法、装置、电子设备及计算机可读介质
WO2022206335A1 (zh) 图像显示方法、装置、设备及介质
WO2022055420A2 (zh) 榜单信息显示方法、装置、电子设备及存储介质
US20220159197A1 (en) Image special effect processing method and apparatus, and electronic device and computer readable storage medium
WO2022037484A1 (zh) 图像处理方法、装置、设备及存储介质
EP4343580A1 (en) Media file processing method and apparatus, device, readable storage medium, and product
WO2023169305A1 (zh) 特效视频生成方法、装置、电子设备及存储介质
JP2023515607A (ja) 画像特殊効果の処理方法及び装置
WO2023138425A1 (zh) 虚拟资源的获取方法、装置、设备及存储介质
CN111970571A (zh) 视频制作方法、装置、设备及存储介质
WO2022242497A1 (zh) 视频拍摄方法、装置、电子设备和存储介质
CN115002359A (zh) 视频处理方法、装置、电子设备及存储介质
CN116934577A (zh) 一种风格图像生成方法、装置、设备及介质
WO2024027819A1 (zh) 图像处理方法、装置、设备及存储介质
US20220272283A1 (en) Image special effect processing method, apparatus, and electronic device, and computer-readable storage medium
CN117244249A (zh) 多媒体数据生成方法、装置、可读介质及电子设备
CN112906553A (zh) 图像处理方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22783865

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18551684

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19-02-2024)