WO2022171024A1 - Image display method, apparatus, device, and medium - Google Patents

Image display method, apparatus, device, and medium

Info

Publication number
WO2022171024A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2022/074918
Other languages
English (en)
French (fr)
Inventor
叶欣靖
吴俊生
梁雅涵
Original Assignee
北京字跳网络技术有限公司
Application filed by 北京字跳网络技术有限公司
Priority to EP22752185.3A (publication EP4276738A4)
Priority to JP2023548277A (publication JP2024506639A)
Publication of WO2022171024A1
Priority to US18/366,939 (publication US20230386001A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Definitions

  • the present disclosure relates to the field of multimedia technologies, and in particular, to an image display method, apparatus, device, and medium.
  • the present disclosure provides an image display method, apparatus, device and medium.
  • the present disclosure provides an image display method, including:
  • acquiring a plurality of original images, where the original images are images including original objects;
  • performing style transfer processing on the original objects in the plurality of original images respectively, to obtain a plurality of stylized images in a target style corresponding to the plurality of original images;
  • displaying a composite image, where the composite image is an image obtained by synthesizing the plurality of stylized images and a background image corresponding to the target style.
  • an image display device comprising:
  • an image acquisition unit configured to acquire a plurality of original images, where the original images are images including original objects
  • a first processing unit configured to perform style transfer processing on the original objects in the plurality of original images, respectively, to obtain a plurality of stylized images corresponding to the target style of the plurality of original images
  • the first display unit is configured to display a composite image, where the composite image is an image obtained by synthesizing a plurality of stylized images and a background image corresponding to the target style.
  • an image display device, comprising: a processor; and a memory configured to store executable instructions;
  • wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the image display method described in the first aspect.
  • the present disclosure provides a computer-readable storage medium, the storage medium storing a computer program which, when executed by a processor, causes the processor to implement the image display method described in the first aspect.
  • the image display method, apparatus, device, and medium of the embodiments of the present disclosure can, after acquiring multiple original images including original objects, perform style transfer processing on the original objects in the multiple original images respectively, so as to obtain multiple stylized images in the target style corresponding to the multiple original images, and display a composite image obtained by synthesizing the multiple stylized images with a background image corresponding to the target style.
  • the original images are thus automatically beautified and collaged in the target style: without the user manually performing image beautification or image editing operations, a composite image with the target style can be generated automatically from the original images, thereby reducing the time cost of producing the composite image, improving the quality of the composite image, and improving the user experience.
  • FIG. 1 is a schematic flowchart of an image display method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of another image display method according to an embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of another image display method provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic flowchart of an image processing process according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic flowchart of another image processing process provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an image display apparatus according to an embodiment of the present disclosure;
  • FIG. 7 is a schematic structural diagram of an image display device according to an embodiment of the present disclosure.
  • the term "including" and variations thereof are open-ended inclusions, i.e., "including but not limited to".
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • Embodiments of the present disclosure provide an image display method, apparatus, device and medium capable of automatically beautifying and collaging images.
  • the image display method may be performed by an electronic device.
  • the electronic device may include a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a wearable electronic device, an all-in-one computer, a smart home device, or another device with a communication function, and may also be a virtual machine or a device simulated by a simulator.
  • FIG. 1 shows a schematic flowchart of an image display method provided by an embodiment of the present disclosure.
  • the image display method may include the following steps.
  • a plurality of original images including the original object may be acquired through an electronic device.
  • the original object may be preset according to actual needs, which is not limited herein.
  • the original object may include at least one of a person, an animal, or a thing.
  • the original object may also include designated parts, such as designated body parts in people and animals.
  • the original object may include the heads of all people or the heads of animals.
  • the original image may be an image obtained by a user using an electronic device to capture in real time.
  • the user may control the electronic device to enter the image capturing mode, and during the process of the electronic device being in the image capturing mode, multiple original images are continuously captured by the capturing manner specified in the image capturing mode.
  • the original object can be the user's head.
  • the user can trigger the electronic device to capture an original image by blinking while the electronic device is in the image capturing mode, so that each time the electronic device detects that the user blinks, it takes one shot and obtains an original image including the user's head.
  • the original image may also be an image selected by the user from local images stored in the electronic device.
  • the user can control the electronic device to enter the image selection mode, and during the process of the electronic device being in the image selection mode, among the local images displayed by the electronic device, select a plurality of original images.
  • the number of original images may be preset according to actual needs, which is not limited herein.
  • the number of original images can be 3, 5, etc.
  • one original image may include one original object, and one original image may also include multiple original objects, which is not limited herein.
  • the original object may be the head of a user, one original object may include the head of one user, and one original object may also include the heads of multiple users, which is not limited herein.
  • after acquiring the multiple original images, the electronic device may perform style transfer processing on the original objects in each original image according to a preset stylization processing manner, so as to obtain, for each original image, a stylized image in the target style; the stylized image is an image obtained by beautifying the original image.
  • the preset stylization processing manner may be style transfer processing oriented to the target style.
  • the target style can be preset according to actual needs, which is not limited here.
  • the target style may be comic style.
  • the electronic device may directly perform style transfer processing on the original object in the original image to obtain a stylized image.
  • when the original image is an image including multiple original objects, S120 may specifically include: respectively performing style transfer processing on the original object with the largest size in each original image, to obtain multiple stylized images.
  • specifically, for each original image, the electronic device may perform style transfer processing only on the original object with the largest size in that original image, to obtain the stylized image corresponding to that original image, so that each original image generates one stylized image corresponding to its most prominent original object.
  • alternatively, S120 may further specifically include: performing style transfer processing on all original objects in each original image respectively, to obtain stylized images corresponding to all the original objects in each original image.
  • specifically, for each original image, the electronic device may perform style transfer processing on all the original objects in that original image, to obtain a stylized image corresponding to each original object in the original image.
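  • the following is a minimal Python sketch of this per-image stylization step; detect_objects and stylize are hypothetical stand-ins for the pre-trained object recognition model and the style transfer model, whose interfaces the present disclosure does not specify.

```python
# A minimal sketch of the per-image stylization step described above.
# `detect_objects` and `stylize` are hypothetical stand-ins for the patent's
# pre-trained object recognition model and style transfer model.
from typing import Callable, List, Tuple

import numpy as np

BBox = Tuple[int, int, int, int]  # (x, y, width, height) of a detected original object


def stylize_original_images(
    original_images: List[np.ndarray],
    detect_objects: Callable[[np.ndarray], List[BBox]],
    stylize: Callable[[np.ndarray], np.ndarray],
    largest_only: bool = True,
) -> List[np.ndarray]:
    """Return stylized images for the original objects in each original image."""
    stylized_images: List[np.ndarray] = []
    for image in original_images:
        boxes = detect_objects(image)
        if not boxes:
            continue  # no original object found in this image
        if largest_only:
            # keep only the original object with the largest bounding-box area
            boxes = [max(boxes, key=lambda b: b[2] * b[3])]
        for x, y, w, h in boxes:
            crop = image[y:y + h, x:x + w]         # original object image
            stylized_images.append(stylize(crop))  # style transfer to the target style
    return stylized_images
```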
  • the electronic device may obtain a background image corresponding to the target style, and display a composite image obtained by synthesizing the multiple stylized images and the background image corresponding to the target style.
  • a composite image is an image obtained by piecing together multiple stylized images.
  • multiple background images may be pre-stored in the electronic device; the multiple background images may have the same image style, such as the target style, or may have different image styles, which is not limited here.
  • the electronic device may obtain a background image corresponding to the target style for generating a composite image from among the plurality of pre-stored background images.
  • the background image corresponding to the target style used to generate the composite image may be an image randomly selected from a plurality of pre-stored background images corresponding to the target style.
  • a background image corresponding to the target style may be randomly selected from the pre-stored background images as the background image corresponding to the target style for generating the composite image.
  • the background image corresponding to the target style used for generating the composite image may be an image, among the pre-stored background images corresponding to the target style, that can accommodate a target number of stylized images, and the target number may be the total number of original images.
  • specifically, after the electronic device generates the multiple stylized images, the total number of acquired original images may be determined, and then a background image capable of accommodating the target number of stylized images may be selected from the pre-stored background images corresponding to the target style, as the background image corresponding to the target style used for generating the composite image.
  • alternatively, the background image corresponding to the target style used for generating the composite image may be an image, among the pre-stored background images corresponding to the target style, that can accommodate a target number of stylized images, where the target number may be the total number of stylized images generated by the electronic device, which will not be repeated here.
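  • the following is a minimal sketch of selecting a background image that can accommodate the target number of stylized images; it assumes each pre-stored background is tagged with the number of placement slots it provides, which is an illustrative data layout rather than something the present disclosure specifies.

```python
# A minimal sketch: pick, from the pre-stored backgrounds of the target style,
# one whose number of placement slots can hold the target number of stylized
# images. The (background_id, slot_count) pairing is an assumed data layout.
import random
from typing import List, Tuple


def pick_background_by_capacity(backgrounds: List[Tuple[str, int]],
                                target_count: int) -> str:
    # backgrounds: list of (background_id, number_of_placement_slots)
    candidates = [bg for bg, slots in backgrounds if slots >= target_count]
    if not candidates:
        raise ValueError("no pre-stored background can hold that many stylized images")
    return random.choice(candidates)
```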
  • one background image may belong to one scene type.
  • the scene types may include party scene types, shopping scene types, Spring Festival reunion scene types, and the like.
  • the background image corresponding to the target style used to generate the composite image may belong to a randomly selected scene type.
  • specifically, a background image belonging to any scene type may be randomly selected from the pre-stored background images corresponding to the target style, as the background image corresponding to the target style used for generating the composite image.
  • the background image corresponding to the target style used for generating the composite image may belong to the target scene type, and the target scene type may be determined according to the image background of the original image.
  • specifically, the electronic device can identify, based on a preset scene recognition algorithm, the scene type corresponding to the image background of each original image, take the scene type corresponding to the largest number of original images as the target scene type, and then select, from the pre-stored background images corresponding to the target style, a background image belonging to the target scene type as the background image corresponding to the target style used for generating the composite image.
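  • the following is a minimal sketch of this scene-based background selection, assuming a hypothetical recognize_scene callable in place of the preset scene recognition algorithm and pre-stored background images keyed by scene type.

```python
# A minimal sketch of choosing the background image by scene type: the target
# scene type is the scene type recognised for the largest number of original
# images. `recognize_scene` and the keyed background store are assumptions.
import random
from collections import Counter
from typing import Callable, Dict, List

import numpy as np


def pick_background_by_scene(
    original_images: List[np.ndarray],
    backgrounds_by_scene: Dict[str, List[np.ndarray]],
    recognize_scene: Callable[[np.ndarray], str],
) -> np.ndarray:
    # scene type that corresponds to the largest number of original images
    scene_counts = Counter(recognize_scene(img) for img in original_images)
    target_scene, _ = scene_counts.most_common(1)[0]
    # fall back to a random scene type if no background matches the target scene
    candidates = backgrounds_by_scene.get(target_scene) or random.choice(
        list(backgrounds_by_scene.values())
    )
    return random.choice(candidates)
```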
  • in the embodiments of the present disclosure, after multiple original images including original objects are acquired, the original objects in the multiple original images are respectively subjected to style transfer processing to obtain multiple stylized images in the target style corresponding to the multiple original images, and a composite image obtained by synthesizing the multiple stylized images with a background image corresponding to the target style is displayed; the original images are thus automatically beautified and collaged in the target style without the user manually performing image beautification or image editing operations.
  • in this way, a composite image with the target style is generated automatically from the original images, thereby reducing the time cost of producing the composite image, improving the quality of the composite image, and improving the user experience.
  • S120 may specifically include:
  • specifically, the electronic device can directly use a pre-trained object recognition model to perform object recognition processing on each original image to obtain the original object image corresponding to the original object in each original image, and then use a pre-trained style transfer model to perform style transfer processing on each of the obtained original object images, to obtain the stylized image corresponding to each original object image.
  • alternatively, the electronic device may use the pre-trained object recognition model to perform object recognition processing on each original image to obtain the original object image corresponding to the original object with the largest size in each original image, and then use the pre-trained style transfer model to perform style transfer processing on each of the obtained original object images, to obtain the stylized image corresponding to each original object image.
  • the electronic device may also use the pre-trained object recognition model to perform object recognition processing on each original image to obtain the original object image corresponding to each original object in each original image, and then use the pre-trained style transfer model to perform style transfer processing on each of the obtained original object images, to obtain the stylized image corresponding to each original object image.
  • a stylized image with a target style can be obtained quickly, and the interest and convenience of image beautification can be improved.
  • in some embodiments, in order to improve the aesthetics of the stylized images obtained by style transfer, S120 may specifically include:
  • S121. Perform style transfer processing on the original objects in the plurality of original images respectively, to obtain a plurality of style transfer images of the target style corresponding to the plurality of original images.
  • specifically, the electronic device can directly use the pre-trained object recognition model to perform object recognition processing on each original image to obtain the original object image corresponding to the original object in each original image and the region matrix of the original object image, then use the pre-trained style transfer model to perform style transfer processing on each original object image to obtain the style transfer image corresponding to each original object image, and take the region matrix of the original object image as the region matrix of the style transfer image.
  • alternatively, the electronic device may use the pre-trained object recognition model to perform object recognition processing on each original image to obtain the original object image corresponding to the original object with the largest size in each original image and the region matrix of that original object image, then perform style transfer processing on each obtained original object image using the pre-trained style transfer model to obtain the corresponding style transfer image, and take the region matrix of the original object image as the region matrix of the style transfer image.
  • the electronic device may also use the pre-trained object recognition model to perform object recognition processing on each original image to obtain the original object image corresponding to each original object in each original image and the region matrix of each original object image, then perform style transfer processing on each obtained original object image using the pre-trained style transfer model to obtain the style transfer image corresponding to each original object image, and take the region matrix of the original object image as the region matrix of the style transfer image.
  • S122 Perform object modification processing on the plurality of style transfer images respectively to obtain a plurality of stylized images corresponding to the plurality of style transfer images.
  • specifically, after obtaining the style transfer image and its region matrix for each original object in each original image, the electronic device can perform matting (subject cutout) processing on each original image to obtain the subject image corresponding to the subject to which each original object belongs, and perform the following object modification processing on each style transfer image: use the region matrix of the style transfer image to perform a forward matrix transformation and fuse the style transfer image into the corresponding subject image to obtain a fused image, perform object modification on the fused image to obtain a modified image, and then use the region matrix of the style transfer image to perform an inverse matrix transformation on the modified image to obtain the stylized image.
  • alternatively, after obtaining, for each original image, the style transfer image of the original object with the largest size and the region matrix of that style transfer image, the electronic device can perform matting processing on each original image to obtain the subject image corresponding to the subject to which the largest original object belongs, and perform the following object modification processing on each style transfer image: use the region matrix of the style transfer image to perform a forward matrix transformation and fuse the style transfer image into the corresponding subject image to obtain a fused image, perform object modification on the fused image to obtain a modified image, and then use the region matrix of the style transfer image to perform an inverse matrix transformation on the modified image to obtain the stylized image.
  • the object modification may include object enlargement and deformation, adding filters, etc., and may also include other modification methods, which are not limited herein.
  • a stylized image with a target style and a target modification effect can be obtained quickly, and the interest of image beautification can be improved.
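  • the following is a minimal sketch of the fuse, modify, and restore flow described above; it assumes the region matrix is a 2x3 affine transform that maps the style transfer image into the subject image's coordinate frame, which is an illustrative assumption rather than something the present disclosure specifies.

```python
# A minimal sketch of the fuse -> modify -> restore flow, assuming the
# "region matrix" is a 2x3 affine transform from the style transfer image's
# coordinates to the subject image's coordinates.
from typing import Callable

import cv2
import numpy as np


def modify_with_region_matrix(
    style_image: np.ndarray,      # style transfer image (H1 x W1 x 3)
    region_matrix: np.ndarray,    # 2x3 affine matrix: style image -> subject image coords
    subject_image: np.ndarray,    # matted subject image (H2 x W2 x 3)
    modify: Callable[[np.ndarray], np.ndarray],  # e.g. enlargement/deformation, filters
) -> np.ndarray:
    h2, w2 = subject_image.shape[:2]

    # forward matrix transformation: place the style image onto the subject image
    warped = cv2.warpAffine(style_image, region_matrix, (w2, h2))
    mask = cv2.warpAffine(np.ones(style_image.shape[:2], np.uint8), region_matrix, (w2, h2))
    fused = subject_image.copy()
    fused[mask > 0] = warped[mask > 0]          # fused image

    modified = modify(fused)                    # modified image

    # inverse matrix transformation: map the modified result back to the
    # style image's own canvas to obtain the stylized image
    inverse = cv2.invertAffineTransform(region_matrix)
    h1, w1 = style_image.shape[:2]
    return cv2.warpAffine(modified, inverse, (w1, h1))
```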
  • the embodiment of the present disclosure also provides another image display method, in which, before the composite image is displayed, the method further includes: displaying the plurality of stylized images according to a preset image display manner; and displaying the composite image may specifically include: switching from the plurality of stylized images to the composite image for display according to a preset image switching manner.
  • the image display method may be performed by an electronic device.
  • the electronic device may include a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a wearable electronic device, an all-in-one computer, a smart home device, or another device with a communication function, and may also be a virtual machine or a device simulated by a simulator.
  • FIG. 2 shows a schematic flowchart of another image display method provided by an embodiment of the present disclosure.
  • the image display method may include the following steps.
  • S220 Perform style transfer processing on the original objects in the multiple original images, respectively, to obtain multiple stylized images corresponding to the target style of the multiple original images.
  • S210-S220 are similar to S110-S120 in the embodiment shown in FIG. 1 , and details are not described here.
  • the electronic device may further display the plurality of stylized images according to the image display mode corresponding to the acquisition mode of the original image.
  • in some embodiments, the electronic device may perform style transfer processing on each original image right after it is shot, to obtain the stylized image corresponding to that original image, then display the stylized image in the middle of the shooting preview interface at a specified size, and rotate and move it to the top of the shooting preview interface, until all the original images have been collected and the stylized images corresponding to all the original images are displayed at the top of the shooting preview interface.
  • in other embodiments, the electronic device may instead perform style transfer processing on the original objects in each original image after all the original images have been shot, to obtain the stylized image corresponding to each original image, and then display the stylized images in the shooting preview interface one by one in the order in which the images were taken: display the stylized image corresponding to an original image in the middle of the shooting preview interface at a specified size, then rotate and move it to the top of the shooting preview interface, until the stylized images corresponding to all the original images are displayed at the top of the shooting preview interface.
  • in still other embodiments, the electronic device may perform style transfer processing on the original objects in each original image after all the original images have been selected, to obtain the stylized image corresponding to each original image, and then display the stylized images one by one in the order of image selection: display the original image in full screen, make the original image disappear with a flashing animation, then rotate and fly in the stylized image corresponding to that original image from a specified boundary of the interface, and place the flown-in stylized image at its designated position, until the stylized images corresponding to all the original images are displayed at their respective designated positions.
  • S240 Display a composite image, where the composite image is an image obtained by synthesizing multiple stylized images and a background image corresponding to the target style.
  • the electronic device may switch multiple stylized images into composite images for display according to a preset image switching manner.
  • the preset image switching mode may include a preset transition animation mode.
  • the electronic device may switch from the interface displaying the multiple stylized images to the interface displaying the composite image according to the preset transition animation mode, and then display the composite image.
  • the transition animation mode can be preset according to actual needs, which is not limited here.
  • the transition animation mode may be a crayon animation transition mode, a gradient animation transition mode, and the like.
  • multiple stylized images can be displayed based on a preset image display mode before displaying the composite image, which increases the interaction process with the user, thereby improving the interactive interest.
  • the embodiment of the present disclosure also provides another image display method.
  • in this method, before the composite image is displayed, the method further includes: acquiring image parameters of each stylized image; determining display parameters of each stylized image according to the image parameters; and splicing the multiple stylized images onto the background image according to the display parameters to obtain the composite image.
  • the image display method may be performed by an electronic device.
  • the electronic device may include a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a wearable electronic device, an all-in-one computer, a smart home device, or another device with a communication function, and may also be a virtual machine or a device simulated by a simulator.
  • FIG. 3 shows a schematic flowchart of still another image display method provided by an embodiment of the present disclosure.
  • the image display method may include the following steps.
  • S320 Perform style transfer processing on the original objects in the multiple original images respectively to obtain multiple stylized images corresponding to the target style of the multiple original images.
  • S310-S320 are similar to S110-S120 in the embodiment shown in FIG. 1 , and details are not described here.
  • the electronic device may acquire image parameters of each stylized image after acquiring the stylized image corresponding to each original image.
  • the image parameters may include any one of the following: the acquisition order of the original images corresponding to the stylized image, and the rotation angle of the object corresponding to the stylized image.
  • the electronic device may determine the original image corresponding to each stylized image, and then use the acquisition order of the original images corresponding to each stylized image as an image parameter of each stylized image.
  • the electronic device may perform object rotation angle detection on each stylized image based on a preset object pose detection algorithm to obtain an object rotation angle corresponding to each stylized image.
  • the object rotation angle may include at least one of an object yaw angle, an object pitch angle, and an object roll angle.
  • the object rotation angle of the stylized image may include at least one of a head yaw angle, a head pitch angle, and a head roll angle.
  • the electronic device may determine the display parameters of each stylized image in the background image according to the image parameters of each stylized image.
  • specifically, the electronic device may first acquire a background image for generating the composite image and acquire multiple sets of preset display parameters corresponding to the background image, where one set of preset display parameters is used to place one stylized image on the background image.
  • the electronic device may select the display parameters of each stylized image in the background image from among multiple sets of preset display parameters corresponding to the background image according to the image parameters of each stylized image.
  • in a case where the image parameter is the acquisition order of the original image corresponding to the stylized image, one set of preset display parameters may correspond to one acquisition order; therefore, the electronic device can directly select, from the multiple sets of preset display parameters corresponding to the background image, the set of preset display parameters corresponding to the acquisition order of the original image corresponding to the stylized image, as the display parameters of that stylized image in the background image.
  • in a case where the image parameter is the object rotation angle corresponding to the stylized image, one set of preset display parameters may correspond to one rotation angle range.
  • in this case, the electronic device can select, from the multiple sets of preset display parameters corresponding to the background image, the set of preset display parameters corresponding to the rotation angle range within which the object rotation angle of the stylized image falls, as the display parameters of that stylized image in the background image.
  • for example, when the stylized image is a head image, one set of preset display parameters can correspond to one yaw angle range; therefore, the electronic device can directly select, from the multiple sets of preset display parameters corresponding to the background image, the set of preset display parameters corresponding to the yaw angle range within which the head yaw angle of the stylized head image falls, as the display parameters of that stylized image in the background image.
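  • the following is a minimal sketch of this range-based selection; the yaw ranges and the size, position, and angle values in each preset slot are illustrative assumptions, since the present disclosure does not give concrete values.

```python
# A minimal sketch of selecting a preset display-parameter slot by head yaw
# angle. The ranges and slot values below are made up for illustration.
from typing import Tuple

# (yaw_min, yaw_max) -> (display_size, display_position, display_angle)
PRESET_SLOTS = {
    (-90.0, -15.0): ((180, 180), (40, 120), -10.0),   # head turned left: left side of the background
    (-15.0, 15.0):  ((220, 220), (300, 100), 0.0),    # roughly frontal: centre of the background
    (15.0, 90.0):   ((180, 180), (560, 120), 10.0),   # head turned right: right side of the background
}


def display_params_for_yaw(yaw: float) -> Tuple[Tuple[int, int], Tuple[int, int], float]:
    for (lo, hi), params in PRESET_SLOTS.items():
        if lo <= yaw < hi:
            return params
    # default slot when the yaw angle falls outside every preset range
    return PRESET_SLOTS[(-15.0, 15.0)]
```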
  • the display parameters may include at least one of the following: a display size, a display position, and a display angle.
  • the display size may refer to the display size of the stylized image in the background image, and is used to determine the size scaling ratio of the stylized image.
  • the display position may refer to the display position of the stylized image in the background image, and is used to determine the placement position of the stylized image in the background image.
  • the display angle may refer to the display angle of the stylized image within the background image, and is used to determine the rotation angle of the stylized image.
  • specifically, the electronic device may adjust each stylized image from its initial display parameters to the display parameters in the background image, and then splice the stylized images with the adjusted display parameters onto the background image to obtain the composite image.
  • for example, the electronic device may first determine the size scaling ratio of each stylized image according to the initial display size of the stylized image and its display size in the background image, and resize each stylized image according to the determined scaling ratio; then determine the rotation angle of each stylized image according to the initial display angle of the stylized image and its display angle in the background image, and rotate each stylized image according to the determined rotation angle; and then splice each stylized image onto the background image according to its display position in the background image, to obtain the composite image.
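  • the following is a minimal Pillow-based sketch of applying the display parameters and splicing the stylized images onto the background; it is an illustration under the assumption that each set of display parameters is a (size, position, angle) triple, not the implementation of the present disclosure.

```python
# A minimal sketch of splicing stylized images onto the background according
# to (size, position, angle) display parameters, using Pillow.
from typing import Iterable, Tuple

from PIL import Image

DisplayParams = Tuple[Tuple[int, int], Tuple[int, int], float]  # size, position, angle


def compose(background: Image.Image,
            stylized: Iterable[Tuple[Image.Image, DisplayParams]]) -> Image.Image:
    canvas = background.copy()
    for image, (size, position, angle) in stylized:
        item = image.convert("RGBA").resize(size)   # size adjustment
        item = item.rotate(angle, expand=True)      # angle adjustment
        # paste at the display position; the alpha channel of `item` is the mask
        canvas.paste(item, position, item)
    return canvas


# usage sketch (file names are placeholders):
# composite = compose(Image.open("background.png"),
#                     [(Image.open("head_0.png"), ((220, 220), (300, 100), 0.0))])
```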
  • S360 Display a composite image, where the composite image is an image obtained by synthesizing multiple stylized images and a background image corresponding to the target style.
  • in this way, the electronic device can automatically determine, for the stylized image corresponding to each original object, its display parameters in the background image, and then splice the stylized images onto the background image based on the display parameters to obtain the composite image, which adds interest to the collage effect.
  • FIG. 4 shows a schematic flowchart of an image processing process provided by an embodiment of the present disclosure.
  • the image processing process may specifically include the following steps.
  • the user can control the electronic device to enter the image selection mode, and during the process of the electronic device being in the image selection mode, among the local images displayed by the electronic device, select a plurality of original images.
  • the electronic device may input each original image into a pre-trained comic style transfer model to obtain a comic head image corresponding to the head of the largest size in each original image and a region matrix of each comic head image.
  • the cartoon head image may include at least one of a cartoon character head image and a cartoon animal head image.
  • the electronic device may perform matting (subject cutout) processing on each original image for the subject to which the largest head belongs, to obtain a subject image corresponding to each original image.
  • the subject may include at least one of a character and an animal.
  • the electronic device can perform a forward matrix transformation using the region matrix of each comic head image, and fuse each comic head image into the corresponding subject image to obtain a fused image corresponding to each comic head image.
  • S405 Perform object modification on each fused image to obtain a modified image.
  • the electronic device may perform modifications such as head enlargement and deformation and adding filters to each fused image to obtain a modified image.
  • the electronic device can perform an inverse matrix transformation on the corresponding modified image using the region matrix of each comic head image, to obtain, for each modified image, a canvas that contains the comic head image, and crop the canvas according to the region matrix to obtain the final comic head image.
  • the electronic device may select a comic background image of the Spring Festival reunion scene type, and splice the comic head images onto the comic background image respectively to obtain a composite image.
  • the electronic device can display the comic head image corresponding to each original image in sequence according to the image selection order: display the original image in full screen, make the original image disappear with a flashing animation, then rotate and fly in the comic head image (with a white-background border) corresponding to that original image, and place the flown-in comic head image at its designated position, until the comic head images corresponding to all the original images are displayed at their respective designated positions.
  • the electronic device may switch from displaying a cartoon head image to displaying a composite image by using a crayon animation transition method.
  • the user can upload multiple original images, and beautify and collage the multiple original images to form a cartoon group photo with a Spring Festival reunion effect.
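  • the following is an orchestration-level sketch of the FIG. 4 flow (comic style transfer, matting, fusion, modification, restoration, and collage); every helper it calls is a hypothetical stand-in for a step described above rather than an API defined in the present disclosure.

```python
# An orchestration-level sketch of the FIG. 4 flow. All helpers passed in
# (comic_model, matting, fuse, modify, restore, splice) are hypothetical
# stand-ins for the steps described in S401-S409.
def build_reunion_collage(original_images, comic_background, comic_model,
                          matting, fuse, modify, restore, splice):
    comic_heads = []
    for image in original_images:
        head, region_matrix = comic_model(image)     # comic head image + region matrix
        subject = matting(image)                     # subject to which the largest head belongs
        fused = fuse(head, region_matrix, subject)   # forward transform and fusion
        modified = modify(fused)                     # head enlargement, filters, etc.
        comic_heads.append(restore(modified, region_matrix))  # inverse transform and crop
    return splice(comic_background, comic_heads)     # composite image
```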
  • FIG. 5 shows a schematic flowchart of another image processing process provided by an embodiment of the present disclosure.
  • the image processing process may specifically include the following steps.
  • the user can control the electronic device to enter the image capturing mode, and during the process of the electronic device being in the image capturing mode, the original image is captured by the capturing manner specified by the image capturing mode.
  • the electronic device can input the original image into the comic style transfer model obtained by pre-training, and obtain the comic head image corresponding to the head of the largest size in the original image and the region matrix of the comic head image.
  • the cartoon head image may include at least one of a cartoon character head image and a cartoon animal head image.
  • the electronic device may perform matting (subject cutout) processing on the original image for the subject to which the largest head belongs, to obtain a subject image corresponding to the original image.
  • the subject may include at least one of a character and an animal.
  • the electronic device can perform a forward matrix transformation using the region matrix of the comic head image, and fuse the comic head image into the corresponding subject image to obtain a fused image corresponding to the comic head image.
  • the electronic device may perform modifications such as head enlargement and deformation and adding filters to the fused image, respectively, to obtain a modified image.
  • the electronic device can perform an inverse matrix transformation on the corresponding modified image using the region matrix of the comic head image, to obtain a canvas that contains the comic head image, and crop the canvas according to the region matrix to obtain the final comic head image.
  • the electronic device can display the comic head image in the middle of the shooting preview interface at a specified size, and rotate and move the comic head image to the top of the shooting preview interface.
  • S508 determine whether it is the last original image, if so, execute S509, and if not, return to S501.
  • in some embodiments, the electronic device may determine whether the number of original images that have been captured has reached a specified number; if so, the current original image is the last original image, and if not, it is not the last original image.
  • in other embodiments, the electronic device may determine whether an instruction to end shooting has been received from the user; if the instruction is received, the current original image is determined to be the last original image, and if not, it is determined not to be the last original image.
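  • the following is a minimal sketch of this capture-loop termination check; capture_frame and stop_requested are hypothetical stand-ins for the camera capture call and the user's end-shooting instruction.

```python
# A minimal sketch of the S508 check: the capture loop ends either when a
# specified number of original images has been captured or when the user
# issues an end-shooting instruction.
def capture_original_images(capture_frame, stop_requested, specified_count=5):
    originals = []
    while len(originals) < specified_count:      # last image reached by count
        if stop_requested():                     # or the user ends shooting early
            break
        originals.append(capture_frame())
    return originals
```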
  • the electronic device may select a comic background image of the Spring Festival reunion scene type, and splice the comic head images onto the comic background image respectively to obtain a composite image.
  • the electronic device may switch from displaying a cartoon head image to displaying a composite image by using a crayon animation transition method.
  • the user can take a plurality of original images, and beautify and collage the plurality of original images to form a cartoon group photo with a Spring Festival reunion effect.
  • the embodiment of the present disclosure further provides an image display device capable of implementing the above-mentioned image display method.
  • the following describes the image display device provided by the embodiment of the present disclosure with reference to FIG. 6 .
  • the image display apparatus may be an electronic device.
  • the electronic device may include a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a wearable electronic device, an all-in-one computer, a smart home device, or another device with a communication function, and may also be a virtual machine or a device simulated by a simulator.
  • FIG. 6 shows a schematic structural diagram of an image display device provided by an embodiment of the present disclosure.
  • the image display apparatus 600 may include an image acquisition unit 610 , a first processing unit 620 and a first display unit 630 .
  • the image acquisition unit 610 may be configured to acquire a plurality of original images, the original images being images including original objects.
  • the first processing unit 620 may be configured to perform style transfer processing on the original objects in the multiple original images respectively, so as to obtain multiple stylized images corresponding to the target style of the multiple original images.
  • the first display unit 630 may be configured to display a composite image, where the composite image is an image obtained by synthesizing a plurality of stylized images and a background image corresponding to the target style.
  • with the image display apparatus of the embodiments of the present disclosure, after multiple original images including original objects are acquired, the original objects in the multiple original images are respectively subjected to style transfer processing to obtain multiple stylized images in the target style corresponding to the multiple original images, and a composite image obtained by synthesizing the multiple stylized images with a background image corresponding to the target style is displayed; the original images are thus automatically beautified and collaged in the target style without the user manually performing image beautification or image editing operations.
  • in this way, a composite image with the target style is generated automatically from the original images, thereby reducing the time cost of producing the composite image and improving the quality of the composite image, so as to enhance the user experience.
  • the image display apparatus 600 may further include a second display unit, and the second display unit may be configured to display a plurality of stylized images according to a preset image display manner.
  • the first display unit 630 may be further configured to switch a plurality of stylized images into composite images for display according to a preset image switching manner.
  • the first processing unit 620 may include a first sub-processing unit and a second sub-processing unit.
  • the first sub-processing unit may be configured to perform style transfer processing on the original objects in the multiple original images, respectively, to obtain multiple style transfer images corresponding to the target style of the multiple original images.
  • the second sub-processing unit may be configured to perform object modification processing on the plurality of style transfer images respectively to obtain a plurality of stylized images corresponding to the plurality of style transfer images.
  • the original image may be an image including a plurality of original objects.
  • the first processing unit 620 may be further configured to perform style transfer processing on the original object with the largest size in each original image, respectively, to obtain multiple stylized images.
  • the image display apparatus 600 may further include a parameter acquisition unit, a second processing unit, and an image synthesis unit.
  • the parameter obtaining unit may be configured to obtain image parameters of each stylized image.
  • the second processing unit may be configured to determine display parameters of each stylized image according to the image parameters.
  • the image synthesizing unit may be configured to stitch a plurality of stylized images on the background image according to the display parameters to obtain a composite image.
  • the image parameters may include any one of the following: the acquisition order of the original image corresponding to the stylized image, and the object rotation angle corresponding to the stylized image.
  • the display parameters may include at least one of the following: a display size, a display position, and a display angle.
  • the background image may be used to place a target number of stylized images, which may be the total number of original images.
  • the background image may belong to a target scene type, and the target scene type may be determined according to the image background of the original image.
  • the image display apparatus 600 shown in FIG. 6 can perform the steps in the method embodiments shown in FIG. 1 to FIG. 5 and achieve the processes and effects of those method embodiments, which will not be repeated here.
  • Embodiments of the present disclosure also provide an image display device, the image display device may include a processor and a memory, and the memory may be used to store executable instructions.
  • the processor may be configured to read executable instructions from the memory and execute the executable instructions to implement the image display method in the above-mentioned embodiments.
  • FIG. 7 shows a schematic structural diagram of an image display device provided by an embodiment of the present disclosure. Referring specifically to FIG. 7 below, it shows a schematic structural diagram of an image display device 700 suitable for implementing an embodiment of the present disclosure.
  • the image display device 700 in the embodiment of the present disclosure may be an electronic device.
  • the electronic device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (such as in-vehicle navigation terminals), and wearable devices, as well as stationary terminals such as digital TVs, desktop computers, and smart home devices.
  • image display device 700 shown in FIG. 7 is only an example, and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • the image display apparatus 700 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 701, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703.
  • in the RAM 703, various programs and data necessary for the operation of the image display apparatus 700 are also stored.
  • the processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704.
  • An input/output (I/O) interface 705 is also connected to the bus 704.
  • the following devices can be connected to the I/O interface 705: an input device 706 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; an output device 707 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; a storage device 708 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 709.
  • the communication means 709 may allow the image display device 700 to communicate wirelessly or wiredly with other devices to exchange data.
  • although FIG. 7 shows the image display apparatus 700 having various means, it should be understood that not all of the illustrated means are required to be implemented or provided; more or fewer means may alternatively be implemented or provided.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored in the storage medium, and when the computer program is executed by a processor, it causes the processor to implement the image display method in the foregoing embodiments.
  • Embodiments of the present disclosure also provide a computer program product, where the computer program product may include a computer program, and when the computer program is executed by a processor, it causes the processor to implement the image display method in the above embodiments.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication device 709, or from the storage device 708, or from the ROM 702.
  • when the computer program is executed by the processing device 701, the above-mentioned functions defined in the image display method of the embodiments of the present disclosure are executed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to, an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium, other than a computer-readable storage medium, that can transmit, propagate, or transport a program for use by or in connection with the instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the client and the server can communicate using any currently known or future-developed network protocol, such as HTTP, and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned image display apparatus; or may exist alone without being incorporated into the image display apparatus.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the image display device, the image display device is caused to execute:
  • acquire a plurality of original images, where the original images are images including original objects; perform style transfer processing on the original objects in the multiple original images respectively, to obtain multiple stylized images in the target style corresponding to the multiple original images; and display a composite image, where the composite image is an image obtained by synthesizing the multiple stylized images and a background image corresponding to the target style.
  • computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowchart or block diagrams may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or can be implemented by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure may be implemented in software or in hardware, and the name of a unit does not, in some cases, constitute a limitation on the unit itself.
  • exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logical Devices (CPLDs) and more.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), fiber optics, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.


Abstract

The present disclosure relates to an image display method, apparatus, device, and medium. The image display method includes: acquiring multiple original images, the original images being images that include original objects; performing style transfer processing on the original objects in the multiple original images respectively, to obtain multiple stylized images in a target style corresponding to the multiple original images; and displaying a composite image, the composite image being an image obtained by synthesizing the multiple stylized images and a background image corresponding to the target style. According to the embodiments of the present disclosure, original images can be automatically beautified and collaged, the time cost of producing the composite image is reduced, and the quality of the composite image is improved, thereby enhancing the user experience.

Description

Image display method, apparatus, device, and medium
This application claims priority to Chinese Patent Application No. 202110178213.3, entitled "Image display method, apparatus, device, and medium" and filed with the China National Intellectual Property Administration on February 9, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of multimedia technologies, and in particular, to an image display method, apparatus, device, and medium.
Background
With the rapid development of computer technology and mobile communication technology, various image beautification platforms based on electronic devices have been widely used, greatly enriching people's daily lives. More and more users enjoy beautifying images on image beautification platforms, for example by adding filter effects or collaging several images.
In the process of collaging images, the user first needs to perform some complex image beautification operations on the images, such as face beautification and background beautification, and then perform image editing operations on the beautified images, such as image cropping and image splicing, to finally generate a collage image. If the user is not good at image beautification or image editing, not only is the time cost of producing the collage image high, but the quality of the collage image cannot be guaranteed, which degrades the user experience.
Summary
In order to solve the above technical problem, or at least partially solve the above technical problem, the present disclosure provides an image display method, apparatus, device, and medium.
In a first aspect, the present disclosure provides an image display method, including:
acquiring multiple original images, the original images being images that include original objects;
performing style transfer processing on the original objects in the multiple original images respectively, to obtain multiple stylized images in a target style corresponding to the multiple original images; and
displaying a composite image, the composite image being an image obtained by synthesizing the multiple stylized images and a background image corresponding to the target style.
In a second aspect, the present disclosure provides an image display apparatus, including:
an image acquisition unit configured to acquire multiple original images, the original images being images that include original objects;
a first processing unit configured to perform style transfer processing on the original objects in the multiple original images respectively, to obtain multiple stylized images in a target style corresponding to the multiple original images; and
a first display unit configured to display a composite image, the composite image being an image obtained by synthesizing the multiple stylized images and a background image corresponding to the target style.
In a third aspect, the present disclosure provides an image display device, including:
a processor; and
a memory configured to store executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the image display method described in the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium, where the storage medium stores a computer program, and when the computer program is executed by a processor, the computer program causes the processor to implement the image display method described in the first aspect.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have the following advantages:
With the image display method, apparatus, device, and medium of the embodiments of the present disclosure, after multiple original images including original objects are acquired, style transfer processing can be performed on the original objects in the multiple original images respectively to obtain multiple stylized images in a target style corresponding to the multiple original images, and a composite image obtained by synthesizing the multiple stylized images with a background image corresponding to the target style is displayed. The original images are thus automatically beautified and collaged in the target style: without the user manually performing image beautification or image editing operations, a composite image with the target style can be generated automatically from the original images, which reduces the time cost of producing the composite image, improves the quality of the composite image, and improves the user experience.
Brief Description of the Drawings
The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
FIG. 1 is a schematic flowchart of an image display method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of another image display method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of yet another image display method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of an image processing procedure provided by an embodiment of the present disclosure;
FIG. 5 is a schematic flowchart of another image processing procedure provided by an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an image display apparatus provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an image display device provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. In addition, the method embodiments may include additional steps and/or omit steps shown. The scope of the present disclosure is not limited in this respect.
As used herein, the term "include" and its variants are open-ended, i.e., "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms are given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order of, or interdependence between, the functions performed by these apparatuses, modules, or units.
It should be noted that the modifiers "a/an" and "a plurality of" mentioned in the present disclosure are illustrative rather than restrictive; unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of messages or information exchanged between apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Embodiments of the present disclosure provide an image display method, apparatus, device, and medium capable of automatically beautifying and collaging images.
An image display method provided by an embodiment of the present disclosure is first described with reference to FIG. 1.
In some embodiments of the present disclosure, the image display method may be executed by an electronic device. The electronic device may include a device with a communication function, such as a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a wearable electronic device, an all-in-one machine, or a smart home device, or may be a device simulated by a virtual machine or an emulator.
FIG. 1 is a schematic flowchart of an image display method provided by an embodiment of the present disclosure.
As shown in FIG. 1, the image display method may include the following steps.
S110: acquire a plurality of original images, where an original image may be an image that includes an original object.
In the embodiments of the present disclosure, when a user wants to beautify and collage a specified original object, a plurality of original images including the original object may be acquired by the electronic device.
The original object may be preset according to actual needs and is not limited here.
In some embodiments, the original object may include at least one of a person, an animal, or a thing.
In other embodiments, the original object may also be a specified part, such as a specified body part of a person or an animal.
For example, the original object may include the heads of all persons or the head of an animal.
In some embodiments of the present disclosure, an original image may be an image captured in real time by the user with the electronic device.
Specifically, the user may put the electronic device into an image capture mode and, while the electronic device is in the image capture mode, continuously capture a plurality of original images in the capture manner specified by that mode.
For example, the original object may be the user's head. In this case, while the electronic device is in the image capture mode, the user may trigger capture by blinking, so that each time the electronic device detects a blink it captures one original image that includes the user's head.
In other embodiments, an original image may also be an image selected by the user from local images stored on the electronic device.
Specifically, the user may put the electronic device into an image selection mode and, while the electronic device is in the image selection mode, select a plurality of original images from the local images displayed by the electronic device.
It should be noted that during real-time capture, algorithm resources such as an expression detection algorithm, a pose detection algorithm, and an object modification algorithm may be loaded, so that the electronic device can trigger capture through a specified pose or expression and can apply preliminary modification, such as makeup or filters, to the original objects in the captured original images. However, since these algorithms require considerable resources, when the user selects original images from the local images stored on the electronic device, the resources required by these algorithms may be released to reduce the resource occupancy of the electronic device.
The number of original images may be preset according to actual needs and is not limited here; for example, it may be 3 or 5.
One original image may contain one original object or a plurality of original objects, which is not limited here.
For example, when the original object is a user's head, one original image may contain one user's head or the heads of multiple users, which is not limited here.
S120: perform style transfer processing on the original objects in the plurality of original images, respectively, to obtain a plurality of stylized images of the plurality of original images corresponding to a target style.
In the embodiments of the present disclosure, after acquiring the plurality of original images, the electronic device may perform style transfer processing on the original object in each original image according to a preset stylization method, to obtain, for each original image, a stylized image having the target style; the stylized image is the image obtained by beautifying that original image. The preset stylization method may be style transfer processing for the target style.
The target style may be preset according to actual needs and is not limited here; for example, the target style may be a comic style.
In some embodiments of the present disclosure, when an original image includes a single original object, the electronic device may directly perform style transfer processing on that original object to obtain the stylized image.
In other embodiments of the present disclosure, when an original image includes a plurality of original objects, S120 may specifically include: performing style transfer processing on the largest original object in each original image, respectively, to obtain the plurality of stylized images.
Specifically, for each original image, the electronic device may perform style transfer processing only on the largest original object in that image to obtain the stylized image corresponding to that image, so that each original image yields one stylized image corresponding to its most prominent original object.
In still other embodiments of the present disclosure, when an original image includes a plurality of original objects, S120 may also specifically include: performing style transfer processing on all original objects in each original image, respectively, to obtain stylized images corresponding to all the original objects in each original image.
Specifically, for each original image, the electronic device may perform style transfer processing on every original object in that image to obtain the stylized image corresponding to each of those objects.
S130: display a composite image, the composite image being an image obtained by synthesizing the plurality of stylized images with a background image corresponding to the target style.
In the embodiments of the present disclosure, after obtaining the plurality of stylized images, the electronic device may acquire a background image corresponding to the target style and display a composite image obtained by synthesizing the plurality of stylized images with that background image; the composite image is the image obtained by collaging the plurality of stylized images.
The electronic device may store a plurality of background images in advance. The background images may all have the same image style, such as the target style, or may have different image styles, which is not limited here.
Specifically, after obtaining the plurality of stylized images, the electronic device may acquire, from the pre-stored background images, a background image corresponding to the target style for generating the composite image.
In some embodiments, the background image used for generating the composite image may be randomly selected from a plurality of pre-stored background images corresponding to the target style.
Specifically, after generating the plurality of stylized images, the electronic device may randomly select one background image corresponding to the target style from the pre-stored background images as the background image used for generating the composite image.
In other embodiments, the background image used for generating the composite image may be a pre-stored background image corresponding to the target style that can hold a target number of stylized images, where the target number may be the total number of original images.
Specifically, after generating the plurality of stylized images, the electronic device may determine the total number of acquired original images and then select, from the pre-stored background images corresponding to the target style, a background image that can hold the target number of stylized images as the background image used for generating the composite image.
In still other embodiments, the target number may instead be the total number of stylized images generated by the electronic device, which is not described in detail here.
In the embodiments of the present disclosure, each background image may belong to a scene type; for example, scene types may include a party scene type, a shopping scene type, a Spring Festival reunion scene type, and so on.
In some embodiments, the background image used for generating the composite image may belong to a randomly selected scene type.
Specifically, after generating the plurality of stylized images, the electronic device may randomly select, from the pre-stored background images corresponding to the target style, a background image belonging to any scene type as the background image used for generating the composite image.
In other embodiments, the background image used for generating the composite image may belong to a target scene type, and the target scene type may be determined according to the image backgrounds of the original images.
Specifically, after generating the plurality of stylized images, the electronic device may use a preset scene recognition algorithm to identify the scene type of the image background of each original image, take the scene type corresponding to the largest number of original images as the target scene type, and then select, from the pre-stored background images corresponding to the target style, a background image belonging to the target scene type as the background image used for generating the composite image.
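As an illustration only, and not part of the disclosure, the following Python sketch shows one way such a background selection could be implemented. The `classify_scene()` helper is a hypothetical stand-in for the preset scene recognition algorithm, and candidate backgrounds are assumed to be simple records carrying their style, scene type, and number of placement slots.

```python
from collections import Counter
from dataclasses import dataclass
import random

@dataclass
class Background:
    path: str
    style: str       # e.g. "comic"
    scene_type: str  # e.g. "reunion", "party", "shopping"
    slots: int       # number of stylized images it can hold

def classify_scene(image_path: str) -> str:
    """Hypothetical stand-in for the preset scene recognition algorithm."""
    return "reunion"

def pick_background(original_paths, backgrounds, target_style):
    # Majority vote over the scene types of the original image backgrounds.
    votes = Counter(classify_scene(p) for p in original_paths)
    target_scene = votes.most_common(1)[0][0]
    # Keep only backgrounds in the target style, matching the target scene,
    # with enough slots to hold every stylized image.
    candidates = [b for b in backgrounds
                  if b.style == target_style
                  and b.scene_type == target_scene
                  and b.slots >= len(original_paths)]
    return random.choice(candidates) if candidates else None
```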
In the embodiments of the present disclosure, after a plurality of original images including original objects are acquired, style transfer processing is performed on the original objects in the plurality of original images to obtain a plurality of stylized images corresponding to the target style, and a composite image obtained by synthesizing the plurality of stylized images with a background image corresponding to the target style is displayed. The original images are thus automatically beautified and collaged in the target style without any manual beautification or editing operations by the user, which reduces the time cost of producing the composite image, improves its quality, and improves the user experience.
In one implementation of the present disclosure, S120 may specifically include:
performing object recognition processing on each original image to obtain an original-object image corresponding to the original object in each original image;
performing style transfer processing on each original-object image to obtain the stylized image corresponding to each original-object image.
In some embodiments, when an original image includes a single original object, the electronic device may directly use a pre-trained object recognition model to perform object recognition on each original image to obtain the original-object image corresponding to the original object in that image, and then use a pre-trained style transfer model to perform style transfer on each original-object image to obtain the corresponding stylized image.
When an original image includes a plurality of original objects, the electronic device may likewise use the pre-trained object recognition model to obtain either the original-object image corresponding to the largest original object in each original image, or the original-object images corresponding to all original objects in each original image, and then use the pre-trained style transfer model to perform style transfer on each of the resulting original-object images to obtain the corresponding stylized images.
Thus, in the embodiments of the present disclosure, stylized images having the target style can be obtained quickly, which makes image beautification more interesting and convenient.
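Purely as an illustrative sketch, and not a description of the models used by the disclosure, the snippet below uses an OpenCV Haar-cascade face detector as a stand-in for the object recognition model and OpenCV's built-in `cv2.stylization` edge-preserving filter as a stand-in for a trained comic-style transfer model; it crops the largest detected face and stylizes only that region, mirroring the "largest original object" embodiment.

```python
import cv2

def largest_face_stylized(image_path: str):
    """Detect the largest face in an image and return a stylized crop of it.

    The Haar cascade and cv2.stylization are placeholders for the pre-trained
    object recognition and style transfer models mentioned in the text.
    """
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None

    # Keep only the largest detected object (by bounding-box area).
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    crop = image[y:y + h, x:x + w]

    # Placeholder "style transfer": an edge-preserving, painting-like filter.
    return cv2.stylization(crop, sigma_s=60, sigma_r=0.45)
```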
In another implementation of the present disclosure, in order to improve the visual quality of the stylized images obtained by style transfer, S120 may specifically include:
S121: perform style transfer processing on the original-object images in the plurality of original images, respectively, to obtain a plurality of style-transferred images of the plurality of original images corresponding to the target style.
In some embodiments, the electronic device may use a pre-trained object recognition model to perform object recognition on each original image to obtain, depending on the embodiment, the original-object image corresponding to the single original object, to the largest original object, or to every original object in that image, together with the region matrix of each original-object image.
The electronic device may then use a pre-trained style transfer model to perform style transfer on each original-object image to obtain the corresponding style-transferred image, and take the region matrix of the original-object image as the region matrix of the style-transferred image.
S122: perform object modification processing on the plurality of style-transferred images, respectively, to obtain the plurality of stylized images corresponding to the plurality of style-transferred images.
After obtaining the style-transferred images and their region matrices, the electronic device may perform background-removal (matting) processing on each original image to obtain a subject image of the subject to which the original object (or, in the largest-object embodiment, the largest original object) belongs, and then perform the following object modification processing on each style-transferred image: perform a forward matrix transformation using the region matrix of the style-transferred image and fuse the style-transferred image into the corresponding subject image to obtain a fused image; apply object modification to the fused image to obtain a modified image; and then perform an inverse matrix transformation on the modified image using the region matrix of the style-transferred image to obtain the stylized image.
The object modification may include enlarging and deforming the object, adding a filter, or other modification methods, which are not limited here.
Thus, in the embodiments of the present disclosure, stylized images having both the target style and a target modification effect can be obtained quickly, which makes image beautification more interesting.
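The forward/inverse transformation described above can be pictured with affine warps. The following is only a minimal sketch under assumed inputs: it treats the "region matrix" as a 2x3 affine matrix mapping the stylized head crop into the coordinate frame of the subject image (the actual matrix format in the disclosure may differ), blends the warped crop in with a mask, applies a simple filter as a stand-in for the object modification, and maps the result back with the inverted matrix.

```python
import cv2
import numpy as np

def fuse_modify_restore(styled_head, subject_img, region_matrix):
    """Fuse a stylized head into a subject image, modify it, and map it back.

    `region_matrix` is assumed to be a 2x3 affine matrix (crop -> subject
    coordinates); this is an illustrative assumption.
    """
    h, w = subject_img.shape[:2]

    # Forward transform: place the stylized head into the subject image frame.
    warped = cv2.warpAffine(styled_head, region_matrix, (w, h))
    mask = cv2.warpAffine(np.full(styled_head.shape[:2], 255, np.uint8),
                          region_matrix, (w, h))
    fused = subject_img.copy()
    fused[mask > 0] = warped[mask > 0]

    # Object modification: light sharpening stands in for enlargement/filters.
    blurred = cv2.GaussianBlur(fused, (0, 0), sigmaX=1.0)
    modified = cv2.addWeighted(fused, 1.5, blurred, -0.5, 0)

    # Inverse transform: map the modified region back to the crop's frame.
    inv = cv2.invertAffineTransform(region_matrix)
    head_h, head_w = styled_head.shape[:2]
    return cv2.warpAffine(modified, inv, (head_w, head_h))
```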
In order to make the interaction more interesting, an embodiment of the present disclosure further provides another image display method. In this method, before the composite image is displayed, the method further includes: displaying the plurality of stylized images according to a preset image display manner; accordingly, displaying the composite image may specifically include: switching from the plurality of stylized images to the composite image for display according to a preset image switching manner. This is described below with reference to FIG. 2.
In some embodiments of the present disclosure, the image display method may be executed by an electronic device. The electronic device may include a device with a communication function, such as a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a wearable electronic device, an all-in-one machine, or a smart home device, or may be a device simulated by a virtual machine or an emulator.
FIG. 2 is a schematic flowchart of another image display method provided by an embodiment of the present disclosure.
As shown in FIG. 2, the image display method may include the following steps.
S210: acquire a plurality of original images, where an original image may be an image that includes an original object.
S220: perform style transfer processing on the original objects in the plurality of original images, respectively, to obtain a plurality of stylized images of the plurality of original images corresponding to the target style.
S210-S220 are similar to S110-S120 in the embodiment shown in FIG. 1 and are not described again here.
S230: display the plurality of stylized images according to a preset image display manner.
In the embodiments of the present disclosure, after obtaining the plurality of stylized images, the electronic device may further display them according to an image display manner corresponding to the way the original images were acquired.
In some embodiments, when the original images are captured in real time by the electronic device, the electronic device may, after each original image is captured, perform style transfer processing on it to obtain the corresponding stylized image, display that stylized image at a specified size in the middle of the capture preview interface, and then rotate and move it to the top of the capture preview interface, until, after all original images have been captured, the stylized images corresponding to all of the original images are displayed at the top of the capture preview interface.
In other embodiments, when the original images are captured in real time by the electronic device, the electronic device may instead, after all original images have been captured, perform style transfer processing on the original object in each original image to obtain the corresponding stylized images, and then present them one by one in the capture preview interface in the order in which the images were captured: each stylized image is displayed at a specified size in the middle of the capture preview interface and then rotated and moved to the top of the interface, until the stylized images corresponding to all of the original images are displayed at the top of the capture preview interface.
In still other embodiments, when the original images are selected locally on the electronic device, the electronic device may, after all original images have been selected, perform style transfer processing on the original object in each original image to obtain the corresponding stylized images, and then present them one by one in the order in which the images were selected: the original image is displayed full-screen and then made to disappear with a flash-white animation, after which the corresponding stylized image rotates and flies in from a specified edge of the interface and is placed at a specified position, until the stylized images corresponding to all of the original images are displayed at their respective specified positions.
S240: display a composite image, the composite image being an image obtained by synthesizing the plurality of stylized images with a background image corresponding to the target style.
Specifically, the electronic device may switch from the plurality of stylized images to the composite image for display according to a preset image switching manner.
Optionally, the preset image switching manner may include a preset transition animation manner.
After displaying the plurality of stylized images, the electronic device may switch, according to the preset transition animation manner, from the interface displaying the plurality of stylized images to the interface displaying the composite image, thereby displaying the composite image.
The transition animation manner may be preset according to actual needs and is not limited here; for example, it may be a crayon-animation transition or a gradient-animation transition.
Thus, in the embodiments of the present disclosure, the plurality of stylized images can be presented according to the preset image display manner before the composite image is displayed, which adds an interaction step with the user and makes the interaction more interesting.
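As a simple illustration of a gradient (crossfade) style of transition, and not of any specific animation used by the disclosure, the following sketch generates intermediate frames between the stylized-image view and the composite view with Pillow; a UI layer would display the yielded frames in sequence.

```python
from PIL import Image

def crossfade_frames(stylized_view: Image.Image,
                     composite_view: Image.Image,
                     num_frames: int = 15):
    """Yield frames that fade from one rendered view to the other.

    Both inputs are assumed to be full-screen renderings of the same size.
    """
    a = stylized_view.convert("RGB")
    b = composite_view.convert("RGB").resize(a.size)
    for i in range(num_frames + 1):
        alpha = i / num_frames  # 0.0 -> show view a, 1.0 -> show view b
        yield Image.blend(a, b, alpha)
```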
In order to make the collage effect more interesting, an embodiment of the present disclosure further provides yet another image display method. In this method, before the composite image is displayed, the method further includes: acquiring image parameters of each stylized image; determining display parameters of each stylized image according to the image parameters; and stitching the plurality of stylized images onto the background image according to the display parameters to obtain the composite image. This is described below with reference to FIG. 3.
In some embodiments of the present disclosure, the image display method may be executed by an electronic device. The electronic device may include a device with a communication function, such as a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a wearable electronic device, an all-in-one machine, or a smart home device, or may be a device simulated by a virtual machine or an emulator.
FIG. 3 is a schematic flowchart of yet another image display method provided by an embodiment of the present disclosure.
As shown in FIG. 3, the image display method may include the following steps.
S310: acquire a plurality of original images, where an original image may be an image that includes an original object.
S320: perform style transfer processing on the original objects in the plurality of original images, respectively, to obtain a plurality of stylized images of the plurality of original images corresponding to the target style.
S310-S320 are similar to S110-S120 in the embodiment shown in FIG. 1 and are not described again here.
S330: acquire image parameters of each stylized image.
In the embodiments of the present disclosure, after obtaining the stylized image corresponding to each original image, the electronic device may acquire the image parameters of each stylized image.
Optionally, the image parameters may include either of the following: the acquisition order of the original image corresponding to the stylized image, or the object rotation angle corresponding to the stylized image.
In some embodiments, the electronic device may determine the original image corresponding to each stylized image and use the acquisition order of those original images as the image parameters of the stylized images.
In other embodiments, the electronic device may use a preset object pose detection algorithm to detect the object rotation angle of each stylized image.
The object rotation angle may include at least one of an object yaw angle, an object pitch angle, and an object roll angle.
Taking an original object that includes a head as an example, the object rotation angle of a stylized image may include at least one of a head yaw angle, a head pitch angle, and a head roll angle.
S340: determine display parameters of each stylized image according to the image parameters.
In the embodiments of the present disclosure, after acquiring the image parameters of each stylized image, the electronic device may determine, according to those image parameters, the display parameters of each stylized image within the background image.
Specifically, the electronic device may first acquire the background image used for generating the composite image and acquire a plurality of preset groups of display parameters corresponding to that background image, where one group of preset display parameters is used to place one stylized image on the background image. According to the image parameters of each stylized image, the electronic device may select, from the preset groups of display parameters corresponding to the background image, the display parameters of that stylized image within the background image.
When the image parameter is the acquisition order of the original image corresponding to the stylized image, each group of preset display parameters may correspond to one acquisition order. The electronic device may therefore directly select, from the preset groups of display parameters corresponding to the background image, the group corresponding to the acquisition order of the original image corresponding to the stylized image as the display parameters of that stylized image within the preset stylized background image.
When the image parameter is the object rotation angle corresponding to the stylized image, each group of preset display parameters may correspond to one rotation angle range. The electronic device may therefore directly select, from the preset groups of display parameters corresponding to the background image, the group corresponding to the rotation angle range within which the object rotation angle of the stylized image falls as the display parameters of that stylized image within the background image.
Taking as an example a stylized image that is a head image having the target style, whose object rotation angle includes a head yaw angle, each group of preset display parameters may correspond to one yaw angle range. The electronic device may therefore directly select, from the preset groups of display parameters corresponding to the background image, the group corresponding to the yaw angle range within which the head yaw angle of the target-style head image falls as the display parameters of that stylized image within the background image.
In the embodiments of the present disclosure, optionally, the display parameters may include at least one of the following: a display size, a display position, and a display angle.
The display size may refer to the size at which the stylized image is displayed within the background image and is used to determine the scaling ratio of the stylized image.
The display position may refer to the position at which the stylized image is displayed within the background image and is used to determine where the stylized image is placed within the background image.
The display angle may refer to the angle at which the stylized image is displayed within the background image and is used to determine the rotation angle of the stylized image.
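For illustration only, and assuming a hypothetical list of preset parameter groups attached to one background image, the following sketch shows one way of matching a stylized head image's detected yaw angle to the preset display parameters (size, position, angle) whose yaw range contains it.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DisplayParams:
    yaw_range: Tuple[float, float]  # (min_deg, max_deg), min inclusive
    size: Tuple[int, int]           # display size on the background (w, h)
    position: Tuple[int, int]       # top-left placement position (x, y)
    angle: float                    # display rotation angle in degrees

# Hypothetical preset groups for one background image.
PRESETS = [
    DisplayParams((-90.0, -15.0), (220, 220), (60, 340), 12.0),
    DisplayParams((-15.0, 15.0), (260, 260), (380, 320), 0.0),
    DisplayParams((15.0, 90.0), (220, 220), (700, 340), -12.0),
]

def pick_display_params(head_yaw_deg: float) -> Optional[DisplayParams]:
    """Return the preset group whose yaw range contains the detected yaw."""
    for params in PRESETS:
        lo, hi = params.yaw_range
        if lo <= head_yaw_deg < hi:
            return params
    return None
```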
S350: stitch the plurality of stylized images onto the background image according to the display parameters to obtain the composite image.
In the embodiments of the present disclosure, the electronic device may adjust each stylized image from its initial display parameters to its display parameters within the background image, and then stitch the adjusted stylized images onto the background image to obtain the composite image.
Specifically, the electronic device may first determine the scaling ratio of each stylized image from its initial display size and its display size within the background image and resize the stylized image accordingly; then determine the rotation angle of each stylized image from its initial display angle and its display angle within the background image and rotate the stylized image accordingly; and finally stitch each stylized image onto the background image at its display position within the background image to obtain the composite image.
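A minimal compositing sketch with Pillow, under the assumption that each stylized image is an RGBA cut-out and that its display parameters have already been chosen (for example by the `DisplayParams` helper sketched above):

```python
from PIL import Image

def compose(background_path, stylized_paths, params_list, out_path="composite.png"):
    """Scale, rotate, and paste each stylized RGBA image onto the background.

    `params_list[i]` is assumed to carry .size, .angle, and .position for
    stylized_paths[i], matching the DisplayParams sketch above.
    """
    canvas = Image.open(background_path).convert("RGBA")
    for path, params in zip(stylized_paths, params_list):
        sticker = Image.open(path).convert("RGBA")
        sticker = sticker.resize(params.size)
        # expand=True keeps the rotated corners instead of cropping them.
        sticker = sticker.rotate(params.angle, expand=True)
        # Use the alpha channel as the paste mask so only the subject is drawn.
        canvas.paste(sticker, params.position, sticker)
    canvas.save(out_path)
    return canvas
```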
S360: display the composite image, the composite image being an image obtained by synthesizing the plurality of stylized images with the background image corresponding to the target style.
Thus, in the embodiments of the present disclosure, the electronic device can automatically determine, for the stylized image corresponding to each original object, its display parameters within the background image, and collage the stylized images onto the background image based on those display parameters to obtain the composite image, thereby making the collage effect more interesting.
为了便于理解,下面参考图4和图5对本公开实施例中所涉及的图像处理过程进行详细说明。
图4示出了本公开实施例提供的一种图像处理过程的流程示意图。
如图4所示,该图像处理过程可以具体包括如下步骤。
S401、用户导入多个原始图像。
具体地,用户可以控制电子设备进入图像选择模式,并且在电子设备处于图像选择模式的过程中,在电子设备所显示的本地图像中,选择多个原始图像。
S402、生成各个原始图像对应的漫画头部图像。
具体地,电子设备可以将每个原始图像分别输入预先训练得到的漫画风格迁移模型,得到每个原始图像中最大尺寸的头部对应的漫画头部图像和每个漫画头部图像的区域矩阵。
其中,漫画头部图像可以包括漫画人物头部图像和漫画动物头部图像中的至少一种。
S403、对各个原始图像进行扣背处理,得到最大尺寸的头部所属主体的主体图像。
具体地,电子设备可以对各个原始图像分别做针对最大尺寸的头部所属主体的扣背处理,得到各个原始图像对应的主体图像。
其中,主体可以包括人物和动物中的至少一种。
S404、生成各个漫画头部图像对应的融合图像。
具体地,电子设备可以利用各个漫画头部图像的区域矩阵进行正矩阵变换,将各个漫画头部图像融合至对应主体图像内,得到各个漫画头部图像对应的融合图像。
S405、对各个融合图像进行对象修饰,得到修饰图像。
具体地,电子设备可以对各个融合图像分别进行头部放大变形和添加滤镜等修饰,得到修饰图像。
S406、生成各个修饰图像对应的漫画大头图像。
具体地,电子设备可以利用各个漫画头部图像的区域矩阵对对应修饰图像进行逆矩阵变换,得到各个修饰图像对应的包括漫画大头图像的画布,并且根据区域矩阵对画布进行图像截取,得到漫画大头图像。
S407、生成合成图像。
具体地,电子设备可以选取春节团圆场景类型的漫画背景图像,并且将漫画大头图像分别拼接至漫画背景图像上,得到合成图像。
S408、显示漫画大头图像。
具体地,电子设备可以按照图像选择顺序依次展示每个原始图像对应的漫画大头图像:全屏显示原始图像,然后利用闪白动画使原始图像消失,接着从闪白底部边界旋转飞入该原始图像对应的漫画大头图像,并将飞入的漫画大头图像摆放在指定位置,直至全部的原始图像对应的漫画大头图像显示在各自的指定位置上。
S409、显示合成图像。
具体地,电子设备可以利用蜡笔动画转场方式,由显示漫画大头图像切换至显示合成图像。
由此,在本公开实施例中,用户可以上传多张原始图像,并且对多张原始图像进行美化和拼图,形成春节团圆效果的漫画合照。
图5示出了本公开实施例提供的另一种图像处理过程的流程示意图。
如图5所示,该图像处理过程可以具体包括如下步骤。
S501、用户拍摄原始图像。
用户可以控制电子设备进入图像拍摄模式,并且在电子设备处于图像拍摄模式的过程中,通过图像拍摄模式指定的拍摄方式拍摄原始图像。
S502、生成原始图像对应的漫画头部图像。
具体地,电子设备可以将原始图像输入预先训练得到的漫画风格迁移模型,得到原始图像中最大尺寸的头部对应的漫画头部图像和漫画头部图像的区域矩阵。
其中,漫画头部图像可以包括漫画人物头部图像和漫画动物头部图像中的至少一种。
S503、对原始图像进行扣背处理,得到最大尺寸的头部所属主体的主体图像。
具体地,电子设备可以对原始图像分别做针对最大尺寸的头部所属主体的扣背处理,得到原始图像对应的主体图像。
其中,主体可以包括人物和动物中的至少一种。
S504、生成漫画头部图像对应的融合图像。
具体地,电子设备可以利用漫画头部图像的区域矩阵进行正矩阵变换,将漫画头部图像融合至对应主体图像内,得到漫画头部图像对应的融合图像。
S505、对融合图像进行对象修饰,得到修饰图像。
具体地,电子设备可以对融合图像分别进行头部放大变形和添加滤镜等修饰,得到修饰图像。
S506、生成修饰图像对应的漫画大头图像。
具体地,电子设备可以利用漫画头部图像的区域矩阵对对应修饰图像进行逆矩阵变换,得到修饰图像对应的包括漫画大头图像的画布,并且根据区域矩阵对画布进行图像截取,得到漫画大头图像。
S507、显示漫画大头图像。
具体地,电子设备可以在拍摄预览界面的中间按照指定尺寸显示漫画大头图像,并将 漫画大头图像旋转移动至在拍摄预览界面的顶部。
S508、判断是否为最后一张原始图像,如果是,则执行S509,如果不是,则返回S501。
在一个示例中,电子设备可以确定已经拍摄的原始图像的数量是否达到指定数量,如果是,则确定是最后一张原始图像,如果不是,则确定不是最后一张原始图像。
在另一个示例中,电子设备可以判断是否接收到用户发出的结束拍摄指令,如果接收到,则确定是最后一张原始图像,如果未接收到,则确定不是最后一张原始图像。
S509、生成合成图像。
具体地,电子设备可以选取春节团圆场景类型的漫画背景图像,并且将漫画大头图像分别拼接至漫画背景图像上,得到合成图像。
S510、显示合成图像。
具体地,电子设备可以利用蜡笔动画转场方式,由显示漫画大头图像切换至显示合成图像。
由此,在本公开实施例中,用户可以拍摄多张原始图像,并且对多张原始图像进行美化和拼图,形成春节团圆效果的漫画合照。
需要说明的是,上述漫画大头图像仅仅是示例说明。
本公开实施例还提供了一种能够实现上述的图像显示方法的图像显示装置,下面参考图6对本公开实施例提供的图像显示装置进行说明。
在本公开一些实施例中,该图像显示装置可以为电子设备。其中,电子设备可以包括移动电话、平板电脑、台式计算机、笔记本电脑、车载终端、可穿戴电子设备、一体机、智能家居设备等具有通信功能的设备,也可以是虚拟机或者模拟器模拟的设备。
图6示出了本公开实施例提供的一种图像显示装置的结构示意图。
如图6所示,该图像显示装置600可以包括图像获取单元610、第一处理单元620和第一显示单元630。
该图像获取单元610可以配置为获取多个原始图像,原始图像为包括原始对象的图像。
该第一处理单元620可以配置为分别对多个原始图像中的原始对象进行风格迁移处理,得到多个原始图像对应目标风格的多个风格化图像。
该第一显示单元630可以配置为显示合成图像,合成图像为将多个风格化图像和对应于目标风格的背景图像合成得到的图像。
在本公开实施例中,能够在获取到多个包括原始对象的原始图像之后,分别对多个原始图像中的原始对象进行风格迁移处理,得到多个原始图像对应目标风格的多个风格化图像,并且显示将多个风格化图像和对应于目标风格的背景图像合成得到的合成图像,进而自动对原始图像进行目标风格的美化和拼图,无需用户手动进行图像美化或者图像编辑,便可以利用原始图像自动生成具有目标风格的合成图像,从而减少制作合成图像的时间成本,提高合成图像的质量,以提升用户的体验。
在本公开一些实施例中,该图像显示装置600还可以包括第二显示单元,该第二显示单元可以配置为按照预设的图像显示方式,显示多个风格化图像。
相应地,该第一显示单元630可以进一步配置为按照预设的图像切换方式,将多个风 格化图像切换为合成图像进行显示。
在本公开一些实施例中,该第一处理单元620可以包括第一子处理单元和第二子处理单元。
该第一子处理单元可以配置为分别对多个原始图像中的原始对象进行风格迁移处理,得到多个原始图像对应目标风格的多个风格迁移图像。
该第二子处理单元可以配置为分别对多个风格迁移图像进行对象修饰处理,得到多个风格迁移图像对应的多个风格化图像。
在本公开一些实施例中,原始图像可以为包括多个原始对象的图像。
相应地,该第一处理单元620可以进一步配置为分别对各个原始图像中的尺寸最大的原始对象进行风格迁移处理,得到多个风格化图像。
在本公开一些实施例中,该图像显示装置600还可以包括参数获取单元、第二处理单元和图像合成单元。
该参数获取单元可以配置为获取各个风格化图像的图像参数。
该第二处理单元可以配置为根据图像参数,确定各个风格化图像的显示参数。
该图像合成单元可以配置为按照显示参数,将多个风格化图像拼接在背景图像上,得到合成图像。
在本公开一些实施例中,图像参数可以包括下列中的任一项:
风格化图像对应的原始图像的获取顺序;
风格化图像对应的对象旋转角度。
在本公开一些实施例中,显示参数可以包括下列中的至少一种:
显示尺寸、显示位置和显示角度。
在本公开一些实施例中,背景图像可以用于放置目标数量的风格化图像,目标数量可以为原始图像的总数。
在本公开一些实施例中,背景图像可以属于目标场景类型,目标场景类型可以根据原始图像的图像背景确定。
需要说明的是,图6所示的图像显示装置600可以执行图1至图5所示的方法实施例中的各个步骤,并且实现图1至图5所示的方法实施例中的各个过程和效果,在此不做赘述。
本公开实施例还提供了一种图像显示设备,该图像显示设备可以包括处理器和存储器,存储器可以用于存储可执行指令。其中,处理器可以用于从存储器中读取可执行指令,并执行可执行指令以实现上述实施例中的图像显示方法。
图7示出了本公开实施例提供的一种图像显示设备的结构示意图。下面具体参考图7,其示出了适于用来实现本公开实施例中的图像显示设备700的结构示意图。
本公开实施例中的图像显示设备700可以为电子设备。其中,电子设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、车载终端(例如车载导航终端)、可穿戴设备、等等的 移动终端以及诸如数字TV、台式计算机、智能家居设备等等的固定终端。
需要说明的是,图7示出的图像显示设备700仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图7所示,该图像显示设备700可以包括处理装置(例如中央处理器、图形处理器等)701,其可以根据存储在只读存储器(ROM)702中的程序或者从存储装置708加载到随机访问存储器(RAM)703中的程序而执行各种适当的动作和处理。在RAM 703中,还存储有图像显示设备700操作所需的各种程序和数据。处理装置701、ROM 702以及RAM 703通过总线704彼此相连。输入/输出(I/O)接口705也连接至总线704。
通常,以下装置可以连接至I/O接口705:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置706;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置707;包括例如磁带、硬盘等的存储装置708;以及通信装置709。通信装置709可以允许图像显示设备700与其他设备进行无线或有线通信以交换数据。虽然图7示出了具有各种装置的图像显示设备700,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
本公开实施例还提供了一种计算机可读存储介质,该存储介质存储有计算机程序,当计算机程序被处理器执行时,使得处理器实现上述实施例中的图像显示方法。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。
本公开实施例还提供了一种计算机程序产品,该计算机程序产品可以包括计算机程序,当计算机程序被处理器执行时,使得处理器实现上述实施例中的图像显示方法。
例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置709从网络上被下载和安装,或者从存储装置708被安装,或者从ROM 702被安装。在该计算机程序被处理装置701执行时,执行本公开实施例的图像显示方法中限定的上述功能。
需要说明的是,本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的***、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行***、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、 传播或者传输用于由指令执行***、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(射频)等等,或者上述的任意合适的组合。
在一些实施方式中,客户端、服务器可以利用诸如HTTP之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(“LAN”),广域网(“WAN”),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述图像显示设备中所包含的;也可以是单独存在,而未装配入该图像显示设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该图像显示设备执行时,使得该图像显示设备执行:
获取多个原始图像,原始图像为包括原始对象的图像;分别对多个原始图像中的原始对象进行风格迁移处理,得到多个原始图像对应目标风格的多个风格化图像;显示合成图像,合成图像为将多个风格化图像和对应于目标风格的背景图像合成得到的图像。
在本公开实施例中,可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的***、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的***来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,单元的名称在某种情况下并不构成对该单元本身的限定。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、片上***(SOC)、复杂可编程逻辑设备(CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行***、装置或设备使用或与指令执行***、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体***、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。

Claims (12)

  1. An image display method, characterized by comprising:
    acquiring a plurality of original images, wherein the original images are images comprising original objects;
    performing style transfer processing on the original objects in the plurality of original images, respectively, to obtain a plurality of stylized images of the plurality of original images corresponding to a target style;
    displaying a composite image, wherein the composite image is an image obtained by synthesizing the plurality of stylized images with a background image corresponding to the target style.
  2. The method according to claim 1, characterized in that, before the displaying of the composite image, the method further comprises:
    displaying the plurality of stylized images according to a preset image display manner;
    wherein the displaying of the composite image comprises:
    switching from the plurality of stylized images to the composite image for display according to a preset image switching manner.
  3. The method according to claim 1, characterized in that the performing style transfer processing on the original objects in the plurality of original images, respectively, to obtain a plurality of stylized images of the plurality of original images corresponding to a target style comprises:
    performing style transfer processing on the original objects in the plurality of original images, respectively, to obtain a plurality of style-transferred images of the plurality of original images corresponding to the target style;
    performing object modification processing on the plurality of style-transferred images, respectively, to obtain the plurality of stylized images corresponding to the plurality of style-transferred images.
  4. The method according to claim 1, characterized in that the original images are images comprising a plurality of original objects;
    wherein the performing style transfer processing on the original objects in the plurality of original images, respectively, to obtain a plurality of stylized images of the plurality of original images corresponding to a target style comprises:
    performing style transfer processing on the largest original object in each of the original images, respectively, to obtain the plurality of stylized images.
  5. The method according to claim 1, characterized in that, before the displaying of the composite image, the method further comprises:
    acquiring image parameters of each of the stylized images; and determining display parameters of each of the stylized images according to the image parameters; wherein the composite image is an image obtained by stitching the plurality of stylized images onto the background image according to the display parameters.
  6. The method according to claim 5, characterized in that the image parameters comprise any one of the following:
    an acquisition order of the original image corresponding to the stylized image;
    an object rotation angle corresponding to the stylized image.
  7. The method according to claim 5, characterized in that the display parameters comprise at least one of the following:
    a display size, a display position, and a display angle.
  8. The method according to claim 1, characterized in that the background image is used for placing a target number of stylized images, the target number being the total number of the original images.
  9. The method according to claim 1, characterized in that the background image belongs to a target scene type, the target scene type being determined according to image backgrounds of the original images.
  10. An image display apparatus, characterized by comprising:
    an image acquisition unit configured to acquire a plurality of original images, wherein the original images are images comprising original objects;
    a first processing unit configured to perform style transfer processing on the original objects in the plurality of original images, respectively, to obtain a plurality of stylized images of the plurality of original images corresponding to a target style;
    a first display unit configured to display a composite image, wherein the composite image is an image obtained by synthesizing the plurality of stylized images with a background image corresponding to the target style.
  11. An image display device, characterized by comprising:
    a processor;
    a memory configured to store executable instructions;
    wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the image display method according to any one of claims 1-9.
  12. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, causes the processor to implement the image display method according to any one of claims 1-9.
PCT/CN2022/074918 2021-02-09 2022-01-29 图像显示方法、装置、设备及介质 WO2022171024A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP22752185.3A EP4276738A4 (en) 2021-02-09 2022-01-29 IMAGE DISPLAY METHOD AND APPARATUS, DEVICE AND MEDIUM
JP2023548277A JP2024506639A (ja) 2021-02-09 2022-01-29 画像表示方法、装置、機器及び媒体
US18/366,939 US20230386001A1 (en) 2021-02-09 2023-08-08 Image display method and apparatus, and device and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110178213.3A CN113012082A (zh) 2021-02-09 2021-02-09 图像显示方法、装置、设备及介质
CN202110178213.3 2021-02-09

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/366,939 Continuation US20230386001A1 (en) 2021-02-09 2023-08-08 Image display method and apparatus, and device and medium

Publications (1)

Publication Number Publication Date
WO2022171024A1 true WO2022171024A1 (zh) 2022-08-18

Family

ID=76383932

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/074918 WO2022171024A1 (zh) 2021-02-09 2022-01-29 图像显示方法、装置、设备及介质

Country Status (5)

Country Link
US (1) US20230386001A1 (zh)
EP (1) EP4276738A4 (zh)
JP (1) JP2024506639A (zh)
CN (1) CN113012082A (zh)
WO (1) WO2022171024A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115588070A (zh) * 2022-12-12 2023-01-10 南方科技大学 一种三维图像风格化迁移方法及终端

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012082A (zh) * 2021-02-09 2021-06-22 北京字跳网络技术有限公司 图像显示方法、装置、设备及介质
CN113780068A (zh) * 2021-07-30 2021-12-10 武汉中海庭数据技术有限公司 一种基于对抗网络的道路箭头图片生成方法及***
CN118071577A (zh) * 2022-11-18 2024-05-24 北京字跳网络技术有限公司 图像生成方法、装置、电子设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170148222A1 (en) * 2014-10-31 2017-05-25 Fyusion, Inc. Real-time mobile device capture and generation of art-styled ar/vr content
US20180082715A1 (en) * 2016-09-22 2018-03-22 Apple Inc. Artistic style transfer for videos
CN109618222A (zh) * 2018-12-27 2019-04-12 北京字节跳动网络技术有限公司 一种拼接视频生成方法、装置、终端设备及存储介质
CN110956654A (zh) * 2019-12-02 2020-04-03 Oppo广东移动通信有限公司 图像处理方法、装置、设备及存储介质
CN111986076A (zh) * 2020-08-21 2020-11-24 深圳市慧鲤科技有限公司 图像处理方法及装置、互动式展示装置和电子设备
CN113012082A (zh) * 2021-02-09 2021-06-22 北京字跳网络技术有限公司 图像显示方法、装置、设备及介质

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100619975B1 (ko) * 2004-12-22 2006-09-08 엘지전자 주식회사 이동통신 단말기의 합성 사진 생성 방법
SG152952A1 (en) * 2007-12-05 2009-06-29 Gemini Info Pte Ltd Method for automatically producing video cartoon with superimposed faces from cartoon template
CN106920212A (zh) * 2015-12-24 2017-07-04 掌赢信息科技(上海)有限公司 一种发送风格化视频的方法及电子设备
EP3526770B1 (en) * 2016-10-21 2020-04-15 Google LLC Stylizing input images
CN109993716B (zh) * 2017-12-29 2023-04-14 微软技术许可有限责任公司 图像融合变换
CN109729269B (zh) * 2018-12-28 2020-10-30 维沃移动通信有限公司 一种图像处理方法、终端设备及计算机可读存储介质
EP3970112A4 (en) * 2019-05-30 2022-08-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. SINGLEMODAL OR MULTIMODAL STYLE TRANSFER SYSTEM AND METHOD AND RANDOM STYLING SYSTEM USING THE SAME
CN110392211B (zh) * 2019-07-22 2021-04-23 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质
CN111931566B (zh) * 2020-07-01 2022-10-21 南京审计大学 一种基于图像处理的人脸卡通形象设计方法
CN111951353A (zh) * 2020-07-29 2020-11-17 广州华多网络科技有限公司 电子相册的合成方法、装置、设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170148222A1 (en) * 2014-10-31 2017-05-25 Fyusion, Inc. Real-time mobile device capture and generation of art-styled ar/vr content
US20180082715A1 (en) * 2016-09-22 2018-03-22 Apple Inc. Artistic style transfer for videos
CN109618222A (zh) * 2018-12-27 2019-04-12 北京字节跳动网络技术有限公司 一种拼接视频生成方法、装置、终端设备及存储介质
CN110956654A (zh) * 2019-12-02 2020-04-03 Oppo广东移动通信有限公司 图像处理方法、装置、设备及存储介质
CN111986076A (zh) * 2020-08-21 2020-11-24 深圳市慧鲤科技有限公司 图像处理方法及装置、互动式展示装置和电子设备
CN113012082A (zh) * 2021-02-09 2021-06-22 北京字跳网络技术有限公司 图像显示方法、装置、设备及介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4276738A4

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115588070A (zh) * 2022-12-12 2023-01-10 南方科技大学 一种三维图像风格化迁移方法及终端

Also Published As

Publication number Publication date
CN113012082A (zh) 2021-06-22
EP4276738A4 (en) 2023-11-29
JP2024506639A (ja) 2024-02-14
EP4276738A1 (en) 2023-11-15
US20230386001A1 (en) 2023-11-30

Similar Documents

Publication Publication Date Title
WO2022171024A1 (zh) 图像显示方法、装置、设备及介质
WO2022083383A1 (zh) 图像处理方法、装置、电子设备及计算机可读存储介质
WO2021218325A1 (zh) 视频处理方法、装置、计算机可读介质和电子设备
WO2023051185A1 (zh) 图像处理方法、装置、电子设备及存储介质
CN112199016B (zh) 图像处理方法、装置、电子设备及计算机可读存储介质
WO2022007627A1 (zh) 一种图像特效的实现方法、装置、电子设备及存储介质
WO2021254502A1 (zh) 目标对象显示方法、装置及电子设备
WO2023125374A1 (zh) 图像处理方法、装置、电子设备及存储介质
WO2022105846A1 (zh) 虚拟对象显示方法及装置、电子设备、介质
US11776209B2 (en) Image processing method and apparatus, electronic device, and storage medium
WO2022042290A1 (zh) 一种虚拟模型处理方法、装置、电子设备和存储介质
CN113806306B (zh) 媒体文件处理方法、装置、设备、可读存储介质及产品
WO2023040749A1 (zh) 图像处理方法、装置、电子设备及存储介质
WO2022037484A1 (zh) 图像处理方法、装置、设备及存储介质
WO2023185671A1 (zh) 风格图像生成方法、装置、设备及介质
WO2022132032A1 (zh) 人像图像处理方法及装置
WO2023169305A1 (zh) 特效视频生成方法、装置、电子设备及存储介质
WO2023273697A1 (zh) 图像处理方法、模型训练方法、装置、电子设备及介质
WO2022171114A1 (zh) 图像处理方法、装置、设备及介质
CN114401443B (zh) 特效视频处理方法、装置、电子设备及存储介质
WO2024051540A1 (zh) 特效处理方法、装置、电子设备及存储介质
WO2023221941A1 (zh) 图像处理方法、装置、设备及存储介质
WO2022170975A1 (zh) 视频生成方法、装置、设备及介质
WO2022213798A1 (zh) 图像处理方法、装置、电子设备和存储介质
US12019669B2 (en) Method, apparatus, device, readable storage medium and product for media content processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22752185

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023548277

Country of ref document: JP

ENP Entry into the national phase

Ref document number: 2022752185

Country of ref document: EP

Effective date: 20230809

NENP Non-entry into the national phase

Ref country code: DE