CN112529765A - Image processing method, apparatus and storage medium


Info

Publication number
CN112529765A
Authority
CN
China
Prior art keywords
image
target
size
background
area
Prior art date
Legal status
Pending
Application number
CN201910846895.3A
Other languages
Chinese (zh)
Inventor
侯文迪
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201910846895.3A
Publication of CN112529765A
Legal status: Pending

Classifications

    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06N 3/08 Computing arrangements based on biological models; neural networks; learning methods
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 3/04 Geometric image transformations in the plane of the image; context-preserving transformations, e.g. by using an importance map
    • G06T 5/40 Image enhancement or restoration using histogram techniques
    • G06T 7/11 Image analysis; segmentation; region-based segmentation
    • G06T 7/13 Image analysis; segmentation; edge detection
    • G06T 7/40 Image analysis; analysis of texture
    • G06T 7/90 Image analysis; determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application provide an image processing method, an image processing device, and a storage medium. In the image processing method, when a first image does not meet a target size, a second image adapted to the background of the first image may be acquired based on the background features of the first image, and the size of the first image may be corrected according to the second image. This avoids directly stretching the first image, which effectively reduces distortion of the first image and helps improve its display effect.

Description

Image processing method, apparatus and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, and a storage medium.
Background
In some application scenarios, when a display device such as a computer or a mobile phone displays a picture, the size of the displayed picture is limited to a specific range in order to achieve a certain visual design effect. However, pictures from different data sources often deviate from the target display size.
In the prior art, a common way to address this size deviation is to stretch the picture so that its size matches the target size. However, stretching visibly distorts the displayed picture and fails to achieve the desired display effect. A new solution is therefore needed.
Disclosure of Invention
Aspects of the present application provide an image processing method, an apparatus, and a storage medium, to effectively reduce distortion of a first image while flexibly modifying its size.
An embodiment of the present application provides an image processing method, including: acquiring background features of a first image; acquiring a second image matched with the background of the first image according to the background feature; and correcting the size of the first image according to the second image so as to enable the size of the corrected first image to accord with the target size.
An embodiment of the present application further provides an image processing method, including: responding to a request for generating a target image according to a first image, and acquiring energy distribution characteristics on the first image; determining an image area with energy meeting a set energy condition on the first image as a target area according to the energy distribution characteristics; adding specified additional information on the target area to generate the target image.
An embodiment of the present application further provides an image processing method, including: acquiring a second image matched with the background of the first image according to the background feature of the first image; correcting the size of the first image according to the second image so as to enable the size of the corrected first image to accord with a target size; responding to a request for generating a target image according to the first image, and acquiring energy distribution characteristics on the first image; according to the energy distribution characteristics, adding specified additional information on a target area with energy meeting set energy conditions on the first image to generate the target image.
An embodiment of the present application further provides an image processing method, including: identifying a subject object on a first image in response to a request to generate a target image from the first image; acquiring a second image used for generating the target image and energy distribution characteristics of the second image; according to the energy distribution characteristics of the second image, identifying an image area on the second image, of which the energy distribution meets set conditions, as a target area; adding a subject object on the first image on the target area to generate the target image.
An embodiment of the present application further provides an image processing method, including: identifying a subject object on a first image in response to a request to generate a target image from the first image; identifying at least one object from the acquired second image; determining a target object adapted to the subject object from the at least one object; adding the subject object on an image area associated with the target object to generate the target image.
An embodiment of the present application further provides an image processing apparatus, including a memory and a processor. The memory is configured to store one or more computer instructions; the processor is configured to execute the one or more computer instructions to perform the steps of the image processing methods provided by the embodiments of the present application.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed, implements the steps of the image processing methods provided by the embodiments of the present application.
In the image processing methods provided by the embodiments of the present application, when the first image does not meet the target size, a second image adapted to the background of the first image can be acquired based on the background features of the first image, and the size of the first image can be corrected according to the second image. This avoids directly stretching the first image, which effectively reduces distortion of the first image and helps improve its display effect.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of an image processing method according to an exemplary embodiment of the present application;
FIG. 2a is a schematic flowchart of an image processing method according to another exemplary embodiment of the present application;
FIG. 2b is a flowchart illustrating an image completion operation according to an exemplary embodiment of the present disclosure;
FIG. 2c is a schematic flow chart illustrating an image completion operation according to another exemplary embodiment of the present application;
FIG. 3 is a schematic flow chart diagram illustrating an image processing method according to another exemplary embodiment of the present application;
FIG. 4a is a schematic flowchart of an image processing method according to another exemplary embodiment of the present application;
FIG. 4b is a schematic diagram of constraint clipping provided by an exemplary embodiment of the present application;
FIG. 4c is a schematic diagram of an energy profile provided by an exemplary embodiment of the present application;
FIG. 4d is an illustration of a gray projection value curve in the horizontal direction;
FIG. 4e is an illustration of a gray projection value curve in the vertical direction;
FIG. 4f is a schematic illustration of a target image provided by an exemplary embodiment of the present application;
FIG. 5a is a schematic flow chart of an image processing method according to another exemplary embodiment of the present application;
FIG. 5b is a flowchart illustrating an image processing method according to another exemplary embodiment of the present application;
FIG. 5c is a schematic diagram of generating a target image according to an exemplary embodiment of the present application;
FIG. 5d is a schematic flowchart of an image processing method according to another exemplary embodiment of the present application;
FIG. 5e is a schematic diagram of generating a target image according to an exemplary embodiment of the present application;
FIG. 5f is a schematic diagram of generating a target image according to an exemplary embodiment of the present application;
FIG. 6 is a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present application;
fig. 7 is a schematic structural diagram of an image processing apparatus according to another exemplary embodiment of the present application;
fig. 8 is a schematic structural diagram of an image processing apparatus according to yet another embodiment of the present application;
fig. 9 is a schematic structural diagram of an image processing apparatus according to yet another embodiment of the present application;
fig. 10 is a schematic structural diagram of an image processing apparatus according to yet another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Some embodiments of the present application provide solutions to the above problem; the technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of an image processing method according to an exemplary embodiment of the present application, and as shown in fig. 1, the method includes:
step 101, obtaining background features of the first image.
Step 102, acquiring a second image adapted to the background of the first image according to the background features.
Step 103, correcting the size of the first image according to the second image, so that the size of the corrected first image meets the target size.
In this embodiment, the first image may be an image input by a user, or an image transmitted by another device or application. The background features of the first image refer to the image features of the regions of the first image other than the subject object, which serve to set off the subject object. In this embodiment, the background features of the first image include, but are not limited to, at least one of the style features, color features and texture features of the background of the first image.
The second image is adapted to the background of the first image in the sense that its image features match at least one of the background features of the first image. On this basis, the second image can be considered highly similar to, or visually harmonious with, the background of the first image, and can therefore be used to correct the size of the first image.
The target size may be a size designated by a user, or a display size defined by the image display area in which the first image is to be displayed; it depends on the specific scenario, and this embodiment is not limited thereto. In this embodiment, the size of the first image does not meet the target size and requires correction. For example, the length of the first image is less than the target length, or the width of the first image is less than the target width.
In this embodiment, correcting the size of the first image according to the second image may include: filling the missing length and/or missing width of the first image using all or part of the image area of the second image. Because the backgrounds of the first image and the second image match, the filled image area sets off the subject object on the first image well and meets visual requirements.
In this embodiment, when the first image does not meet the target size, a second image adapted to the background of the first image may be acquired based on the background features of the first image, and the size of the first image may be corrected according to the second image. This avoids directly stretching the first image, which effectively reduces distortion of the first image and helps improve its display effect.
The above embodiment describes the technical solution of correcting the size of the first image based on a second image adapted to the background of the first image; this is further described below with reference to the accompanying drawings.
Fig. 2a is a schematic flowchart of an image processing method according to another exemplary embodiment of the present application, and as shown in fig. 2a, the method includes:
Step 201, in response to a request to modify the size of the first image, identifying the subject object on the first image.
Step 202, removing the subject object from the first image, and performing an image completion operation on the image area of the first image corresponding to the subject object, to obtain a background image of the first image.
Step 203, extracting style features, color features and texture features from the background image of the first image.
Step 204, for each of a plurality of candidate images, acquiring the style feature, color feature and texture feature of the candidate image.
Step 205, calculating the similarity between the background image and the candidate image on the style feature, the color feature and the texture feature, respectively.
Step 206, if the similarity meets a set similarity condition, determining the candidate image as a second image adapted to the background of the first image.
Step 207, correcting the size of the first image according to the second image, so that the size of the corrected first image meets the target size.
In step 201, optionally, the subject object on the first image may be identified using a neural-network-based subject object recognition method, or using a method that detects the subject based on image texture features; this embodiment includes but is not limited to these. Examples are given below.
In some exemplary embodiments, when the neural network-based subject object recognition method is used, the subject object recognition model may be trained in advance based on a large number of training samples containing subject objects.
In such embodiments, training samples are collected to cover, as far as possible, the different subject objects that may be encountered across multiple subject object recognition scenarios, to improve sample coverage. The subject object in each training sample can then be labeled, to obtain the actual distribution of the background area and the subject object in the training sample.
The labeled training samples may then be input into a neural network model. In the neural network model, the training samples can be subjected to operations such as feature extraction, calculation and the like according to the model parameters, and the output layer of the neural network model outputs the recognition result of the subject object. Then, the loss layer of the neural network model can calculate a loss function according to the difference between the recognition result of the subject object output by the output layer and the actual distribution situation of the subject object on the training sample. If the loss function does not meet the set requirement, the model parameters can be adjusted, and iterative training is continued. And when the loss function of the neural network model meets the set requirement, obtaining the trained subject object recognition model.
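For illustration only, the training loop described above can be sketched as follows in PyTorch, under the assumption that the recognition model predicts a per-pixel subject mask; the loss, optimizer, and all names are illustrative rather than prescribed by the application:

```python
# Minimal sketch of the described training loop, assuming (image, subject_mask)
# training pairs and a model that predicts a per-pixel subject mask.
import torch
import torch.nn as nn

def train_subject_recognizer(model: nn.Module, loader, epochs: int = 10):
    criterion = nn.BCEWithLogitsLoss()  # difference between prediction and labels
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        for images, masks in loader:     # labeled training samples
            logits = model(images)       # feature extraction + prediction
            loss = criterion(logits, masks)
            optimizer.zero_grad()
            loss.backward()              # adjust model parameters ...
            optimizer.step()             # ... and continue iterative training
    return model
```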
In such an embodiment, upon receiving a request to modify the size of the first image, the first image may be input into the trained subject object recognition model, which identifies the subject object on the first image.
In other exemplary embodiments, subject object recognition may be performed using methods based on image texture feature detection. It should be understood that, on the first image, the region where the subject object is located has more complex texture features and more pronounced gradient changes. Accordingly, the subject object on the first image can be identified by finding the regions of the first image with complex texture features.
Alternatively, in this embodiment, an edge detection operator may be used to identify edges on the first image, and the region where the edge distribution is most concentrated may then be taken as the region where the subject object is located. Of course, besides edge detection operators, other texture feature detection methods may be used to identify regions of the first image where texture is concentrated; details are not repeated here.
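As an illustration of this texture-based approach, the following sketch finds the window with the densest Canny edge response; OpenCV is assumed, and the window size, stride, and thresholds are illustrative values, not taken from the application:

```python
# Rough subject localization: slide a window over the Canny edge map and
# keep the position with the most edge pixels (densest texture).
import cv2

def densest_edge_region(image_bgr, win=128, stride=32):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)          # edge detection operator
    h, w = edges.shape
    best_score, best_xy = -1, (0, 0)
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            score = int(edges[y:y + win, x:x + win].sum())  # edge concentration
            if score > best_score:
                best_score, best_xy = score, (x, y)
    x, y = best_xy
    return x, y, win, win                      # subject bounding box (x, y, w, h)
```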
After the subject object on the first image is identified, in the next step 202, the subject object may be removed from the first image, and an image completion (image inpainting) operation may be performed on the image area of the first image corresponding to the subject object. That is, the hole region left after removing the subject object is filled with new pixels.
Alternatively, in this embodiment, a GAN (Generative Adversarial Network) image generation technique may be used to generate the pixels for filling the hole region. A GAN comprises a G (Generator) network and a D (Discriminator) network. The G network generates new pixels from random noise and the image area of the first image outside the hole region; the D network judges the probability that the pixels generated by the G network are real pixels, based on the actual image features of the image area outside the hole region. The G network repeats the pixel-generation process until the D network's judgment of the generated pixels meets a set probability requirement.
Optionally, besides GAN image generation, this embodiment may also use algorithms such as patch-match and fast-patch to complete the hole region on the first image, which is not described again. After the hole region left by removing the subject object is completely filled, the resulting complete new image can be used as the background image of the first image, and features can be extracted from it.
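The GAN pipeline above does not reduce to a short snippet, but the hole-filling step itself can be illustrated with OpenCV's classical inpainting as a simple stand-in for the learned methods named here:

```python
# Stand-in for the completion step: fill the hole left by the removed subject
# with classical (non-GAN) inpainting. subject_mask is 255 inside the hole.
import cv2

def background_image(image_bgr, subject_mask):
    return cv2.inpaint(image_bgr, subject_mask, inpaintRadius=5,
                       flags=cv2.INPAINT_TELEA)
```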
Wherein the features of the background image include, but are not limited to, at least one of style features, color features, and texture features, which will be exemplified in step 203.
Optionally, in some embodiments, the style, color, and texture features may be extracted from the background image of the first image.
Alternatively, optional ways of extracting the color features of the background image include extracting at least one of: a color histogram of the background image, color moments of the background image, a color set of the background image, and a color correlogram of the background image; this embodiment is not limited thereto.
Optionally, optional ways of extracting the texture features of the background image include at least one of: texture feature extraction based on a gray-level co-occurrence matrix, texture feature extraction based on Gabor filters, and texture feature extraction based on an autoregressive model; this embodiment is not limited thereto.
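As a sketch of two of the extractors just listed, the following assumes OpenCV and scikit-image are available; the bin counts and GLCM parameters are illustrative:

```python
# Color feature: normalized HSV color histogram.
# Texture feature: gray-level co-occurrence matrix (GLCM) statistics.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def color_histogram(image_bgr, bins=(8, 8, 8)):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def glcm_texture(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy")
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])
```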
Optionally, the style features of the background image may be extracted using a neural-network-based style classification method, as exemplified below.
Optionally, in this approach, a neural network model may be trained in advance to classify images into different style categories. That is, an image style classifier is trained in advance.
When training the image style classifier, sample images of different styles from different application scenarios can first be obtained and labeled by style. For example, sample images may be classified into geometric, soft, technological, fresh, natural-scenery, ink-wash, and other style categories. The classified sample images are then input into a neural network model, which can learn the characteristic features of the images in each style category from the classified samples.
On this basis, when the background image of the first image is acquired, it may be input into the trained neural network model. The model extracts the image features of the background image, identifies the style category to which the background image belongs based on the extracted features and the previously learned features of each style category, and outputs that category. From the style category of the background image, a style description label can be determined as the style feature of the background image.
Next, regarding step 204, candidate images may be collected in advance according to the different application scenarios of the first image. For example, when the first image is a commodity poster, multiple poster background images may be collected in advance as candidate images. For another example, when the first image is a model display image of footwear, images of different wearing environments of the footwear may be collected in advance as candidate images.
For optional ways of acquiring the style, color, and texture features of the candidate images, reference may be made to the description above, which is not repeated here. Optionally, the features of the candidate images should be acquired in the same way as the corresponding features of the background image, so that the acquired features have the same representation for subsequent computation. For example, if a color histogram is extracted as the color feature of the background image, a color histogram should also be extracted as the color feature of each candidate image, which helps simplify the subsequent similarity calculation.
Next, in step 205, based on the image features obtained in the previous steps, the similarity between the background image and each candidate image on the style features, the color features, and the texture features can be calculated, which is not repeated here.
In step 206, optionally, the similarity condition may be set according to different application scenarios, which is not limited in this embodiment.
In some exemplary embodiments, the similarity satisfying a set similarity condition may include: the similarity on the style feature, the similarity on the color feature and the similarity on the texture feature respectively meet corresponding similarity thresholds. For example, the similarity of the style features satisfies a first similarity threshold, the similarity of the color features satisfies a second similarity threshold, and the similarity of the texture features satisfies a third similarity threshold.
In other exemplary embodiments, the similarity satisfying the set similarity condition may include: the weighted cumulative value of the similarity on the style feature, the similarity on the color feature, and the similarity on the texture feature satisfies a fourth similarity threshold. The weight corresponding to each similarity can be set according to actual requirements. For example, the weight corresponding to the style feature may be set to 0.5, the weight corresponding to the color feature may be set to 0.3, and the weight corresponding to the texture feature may be set to 0.2.
The first similarity threshold, the second similarity threshold, the third similarity threshold, and the fourth similarity threshold are empirical values, which is not limited in this embodiment.
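As a small sketch of the weighted condition, using the example weights above; the threshold of 0.8 is an assumed empirical value, not taken from the application:

```python
# Weighted similarity check; 0.5/0.3/0.2 are the example weights from the
# text, and 0.8 stands in for the empirical fourth similarity threshold.
def matches_background(style_sim, color_sim, texture_sim, threshold=0.8):
    score = 0.5 * style_sim + 0.3 * color_sim + 0.2 * texture_sim
    return score >= threshold
```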
After determining the second image that fits the background of the first image, step 207 may continue with resizing the first image based on the second image.
In some exemplary embodiments, the size correction may be performed by image compositing, where the size of the second image meets the target size. In this case, the first image may be composited onto the second image as a foreground image, as shown in fig. 2b. Optionally, during compositing, the centers of the first image and the second image may be aligned so that the first image sits in the middle of the second image. After compositing, smoothing may be applied at the boundary between the first image and the second image to optimize the visual effect of the composite image.
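A minimal sketch of this compositing step, assuming the first image fits inside the target-size second image; the seam-blur parameters are illustrative:

```python
# Center-aligned compositing with simple seam smoothing.
import cv2
import numpy as np

def composite_centered(first_bgr, second_bgr):
    H, W = second_bgr.shape[:2]
    h, w = first_bgr.shape[:2]
    y, x = (H - h) // 2, (W - w) // 2                  # align the centers
    out = second_bgr.copy()
    out[y:y + h, x:x + w] = first_bgr
    seam = np.zeros((H, W), np.uint8)                  # band along the boundary
    cv2.rectangle(seam, (x, y), (x + w - 1, y + h - 1), 255, thickness=9)
    blurred = cv2.GaussianBlur(out, (15, 15), 0)
    out[seam > 0] = blurred[seam > 0]                  # smooth only the seam
    return out
```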
In other exemplary embodiments, the size correction may be performed by image stitching. Alternatively, in such an embodiment, the size deviation of the first image may be calculated from the target size. The size deviation includes deviations in both length and width. Next, as shown in fig. 2c, image blocks matching the size deviation may be cropped from the second image. For example, if the first image is 2cm × 2cm and the target size is 3cm × 3cm, the deviation of the first image from the target size is 1cm in length and 1cm in width. In this case, two 1cm × 2cm image blocks and one 1cm × 1cm image block may be cropped from the second image. The image blocks can then be stitched onto the first image so that the stitched result meets the target size.
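A sketch of the stitching variant, following the 2cm × 2cm to 3cm × 3cm example (expressed in pixels) and assuming the second image is at least as large as the deviations being filled:

```python
# Stitch blocks cropped from the second image onto the first image's right
# and bottom edges until the target size is reached.
import numpy as np

def stitch_to_target(first, second, target_h, target_w):
    h, w = first.shape[:2]
    dh, dw = target_h - h, target_w - w     # size deviation in width/length
    right = second[:h, :dw]                 # e.g. a 1cm x 2cm block
    bottom = second[:dh, :w]                # e.g. the other 1cm x 2cm block
    corner = second[:dh, :dw]               # e.g. the 1cm x 1cm block
    top_row = np.hstack([first, right])
    bottom_row = np.hstack([bottom, corner])
    return np.vstack([top_row, bottom_row])  # target_h x target_w result
```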
In this embodiment, when the first image does not meet the target size, a second image adapted to the background of the first image may be acquired based on the background features of the first image, and the size of the first image may be corrected according to the second image. This avoids directly stretching the first image, which effectively reduces distortion of the first image and helps improve its display effect.
The above embodiments of the present application can be applied to a variety of different application scenarios, which will be exemplified below.
In a typical application scenario, a user (e.g., a vendor on an e-commerce platform) may upload a commodity poster to the e-commerce platform for display. To ensure a good visual display on the e-commerce platform, the size of the booth used to display commodity posters may be limited, yet the commodity pictures uploaded by some users do not meet the booth's size requirement. In such a scenario, based on the technical solution provided by this embodiment, the background features of the commodity poster can be extracted; then, according to those background features, a poster background image that matches the background of the commodity poster and meets the booth's size requirement can be selected, and the commodity poster composited with the selected poster background image to obtain the commodity poster to be displayed. Alternatively, according to the background features of the commodity poster, a poster background image matching its background may be selected, and the commodity poster stitched with image blocks cropped from that poster background image to obtain the commodity poster to be displayed.
In another typical application scenario, users of a social platform may upload a custom avatar to the social platform. Typically, the size of the avatar display position provided by the social platform is fixed. When the avatar uploaded by a user is smaller than the avatar display position, based on the technical solution provided by this embodiment, the background features of the avatar can be extracted and a background image adapted to the avatar's background selected according to those features. If the size of the background image matches the avatar display position, the avatar and the background image can be composited directly. Alternatively, image blocks may be cropped from the background image and used to fill the missing portion of the avatar.
Fig. 3 is a schematic flowchart of an image processing method according to another exemplary embodiment of the present application, and as shown in fig. 3, the method includes:
step 301, in response to a request for generating a target image according to a first image, obtaining an energy distribution characteristic on the first image.
Step 302, determining an image area on the first image, where the energy meets a set energy condition, as a target area according to the energy distribution characteristics.
Step 303, add the specified additional information on the target area to generate the target image.
The first image may be an image input by a user or an image transmitted by another device or application. Designated additional information may be added to the first image to generate a target image.
The target image refers to an image with a certain display effect, and is used for publicity, advertisement or meeting other visual requirements of the user. In some embodiments, the target image may be implemented as a poster image, an advertisement image, a pictorial representation, an emoticon, a postcard, a greeting card, or other image with a certain design feel, including but not limited to.
The additional information may be at least one of text information and picture information; this embodiment is not limited. The additional information may be provided by a user. For example, when the first image is a commodity image, product copy provided by the user can be added to the commodity image to generate a commodity poster; for another example, when the first image is a landscape image, a foreground person image provided by the user may be added to the landscape image to generate a portrait photograph; and so on. Alternatively, the additional information may be automatically generated by the system. For example, when the first image is a photographic work, watermark information may be added to the work to protect the rights and interests of the author.
It will be appreciated that the first image carries image information to be conveyed, which gives different areas of the first image different levels of complexity. The richer the image information, the higher the complexity of the corresponding region. On this basis, to ensure that the added additional information has little impact on the image information, the image area of the first image used for adding the additional information may be determined according to the regional complexity of the first image.
In this embodiment, the complexity of the regions of the first image may be obtained via the energy distribution characteristics of the first image. Generally, the higher the complexity of a region, the more energy is distributed in that region.
After the energy distribution characteristics of the first image are acquired, an image area whose energy meets the set energy condition can be determined as the target area. The energy condition may be set according to the actual requirements of different application scenarios; this embodiment is not limited.
In this embodiment, before the additional information is added to the first image, the energy distribution characteristics of the first image are computed in advance, and the target region for adding the additional information is chosen reasonably in light of those characteristics. The target image can thus be generated automatically from the first image and the additional information while effectively reducing the loss of the image information carried by the first image and improving the visual effect of the target image.
In the image processing method provided by this embodiment, in addition to generating the target image from the first image and the specified additional information, the size of the target image can be further controlled while retaining important image information, as described in detail below.
Fig. 4a is a schematic flowchart of an image processing method according to another exemplary embodiment of the present application, and as shown in fig. 4a, the method includes:
step 400, a request to generate a target image from a first image is received.
Step 401, judging whether the size of the first image meets the target size; if yes, go to step 406; if not, go to step 402.
Step 402, judging whether the size of the first image is smaller than or larger than the target size; if smaller, executing step 403; if larger, executing step 405.
Step 403, acquiring the background features of the first image, and acquiring a second image adapted to the background of the first image according to the background features.
Step 404, correcting the size of the first image according to the second image so that the size of the corrected first image meets the target size, then executing step 406.
Step 405, identifying the subject object on the first image, cropping the first image with the image area corresponding to the subject object and the target size as cropping constraints, then executing step 406.
Step 406, acquiring an energy distribution characteristic on the first image.
Step 407, determining a target area on the first image, where the energy satisfies a set energy condition, according to the energy distribution feature.
Step 408, add the specified additional information on the target area to generate the target image.
In step 401, the target size corresponding to the target image may be a size customized by the user, or the size of the display position for the target image; this embodiment is not limited. If the size of the first image meets the target size, no size correction is needed; if not, the following steps may be performed to correct the size of the first image to the target size.
In step 402, the size of the first image being smaller than the target size means that the length of the first image is smaller than the length indicated by the target size, and/or the width of the first image is smaller than the width indicated by the target size, as the case may be. If the size of the first image is smaller than the target size, steps 403 and 404 may be performed to expand the size of the first image. For optional implementations of step 403 and step 404, reference may be made to the foregoing embodiments, which are not repeated here.
The size of the first image being larger than the target size means that the length of the first image is larger than the length indicated by the target size, and/or the width of the first image is larger than the width indicated by the target size. If the size of the first image is larger than the target size, step 405 may be executed to perform constrained cropping on the first image.
In step 405, optionally, for ways of identifying the subject object on the first image, reference may be made to the description of step 201 in the embodiment corresponding to fig. 2a, which is not repeated here.
Alternatively, when cropping with the image area corresponding to the subject object and the target size as constraints, the center of the image area corresponding to the subject object may be determined and used as the center of the crop box, with the target size as the size of the crop box. The image area within the crop box is then cropped out as the size-corrected first image, as shown in fig. 4b.
In some scenarios, the image region corresponding to the subject object is too large to fall entirely within the crop box; in this case, a reduction operation may be performed on the first image until the image region corresponding to the subject object falls completely within the crop box. This constrained cropping mode prevents the loss of subject information while correcting the size of the first image, as sketched below.
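A minimal sketch of this constrained cropping, assuming the (possibly reduced) image is at least the target size; the names are illustrative:

```python
# Crop box centered on the subject; reduce the image first if the subject
# does not fit entirely inside a target-size box.
import cv2

def constrained_crop(image, subject_box, target_w, target_h):
    x, y, w, h = subject_box
    if w > target_w or h > target_h:               # subject exceeds crop box
        scale = min(target_w / w, target_h / h)    # reduction operation
        image = cv2.resize(image, None, fx=scale, fy=scale)
        x, y, w, h = (int(v * scale) for v in subject_box)
    H, W = image.shape[:2]
    cx, cy = x + w // 2, y + h // 2                # center the crop box
    left = min(max(cx - target_w // 2, 0), W - target_w)
    top = min(max(cy - target_h // 2, 0), H - target_h)
    return image[top:top + target_h, left:left + target_w]
```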
After the size correction of the first image is completed based on the above steps, the following steps may be performed to add the specified additional information to the first image, yielding the target image. The additional information may be text information or picture information; this embodiment is not limited.
In step 406, optionally, an algorithm based on gradient analysis, an algorithm based on image spectrum analysis, or another energy analysis algorithm may be used to obtain the energy distribution characteristics of the first image; this embodiment includes but is not limited to these. Some alternative embodiments are exemplified below.
In some exemplary embodiments, an algorithm based on gradient analysis may be used to analyze the energy distribution characteristics of the first image. In such an embodiment, the subject object on the first image may be identified in advance. The method for identifying the subject object can refer to the descriptions of the foregoing embodiments, and is not repeated herein.
Next, edge recognition may be performed on the first image to obtain a binarized image containing edge information. Alternatively, a first or second order edge detection operator may be employed to identify edge information on the first image. The first-order edge detection operator can comprise at least one of a Roberts Cross operator, a Prewitt operator, a Sobel operator, a Kirsch operator and a compass operator; the second-order edge detection operator may include at least one of a Marr-Hildreth, a Canny operator, and a Laplacian operator, which will not be described in detail.
It should be understood that the richer the edges in an area, the more energy is distributed there. Therefore, the gray value of edge pixels can be set to a larger value, and the gray value of non-edge pixels to a smaller value, to represent the positive contribution of edge pixels to the energy distribution characteristics. Alternatively, in some embodiments, for ease of calculation, the gray value of pixels on identified edges may be set to 1 and the gray value of non-edge pixels to 0. Of course, the gray value of edge pixels may also be set to 255; this embodiment is not limited.
In this embodiment, the gray scale value of the pixel corresponding to the subject object in the binarized image may be set to be the same as the gray scale value of the pixel located on the edge, so as to obtain the energy distribution map corresponding to the first image. That is, if the gray scale value of the pixel on the edge is set to 1, the gray scale values of the pixels of the main subject can be set to 1; if the gray scale value of the pixels on the edge is 255, the gray scale values of the pixels of the main body object can be set to be 255. Based on this, the contribution of the pixels of the subject object in calculating the image energy can be improved.
Fig. 4c illustrates a typical energy distribution image. As shown on the left of fig. 4c, when the region of the binarized image where the human-shaped model is located is taken as the subject, and the gray value of each pixel in the subject region is set to 1, the energy distribution image shown on the right of fig. 4c is obtained. It should be understood that in the energy distribution image, the gray value of each pixel can be regarded as the energy that pixel has.
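A sketch of building such a binarized energy map, assuming a Canny edge map and a subject mask from the earlier recognition step; the thresholds are illustrative:

```python
# Energy map: 1 for edge pixels and subject pixels, 0 elsewhere.
import cv2
import numpy as np

def energy_map(image_bgr, subject_mask):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    energy = (edges > 0).astype(np.uint8)   # edge pixels contribute energy
    energy[subject_mask > 0] = 1            # subject pixels contribute too
    return energy
```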
After the energy distribution map corresponding to the first image is obtained by the gradient-analysis-based algorithm, optionally, in step 407, the target region whose energy meets the set energy condition may be determined on the first image according to the energy distribution characteristics, as in the following implementation:
in one implementation, a coordinate system corresponding to the energy distribution map may be established in units of pixels. The origin of the coordinate system may be placed at a corner pixel of the map, with the horizontal axis of the coordinate system along the length direction of the energy distribution map and the vertical axis along its width direction, as shown in fig. 4d and 4e.
Then, the gray values of the pixels in the energy distribution map are projected horizontally to obtain gray projection values in the horizontal direction, as shown in fig. 4d, and projected vertically to obtain gray projection values in the vertical direction, as shown in fig. 4e. Horizontal projection can be understood as accumulating the gray values of pixels with the same abscissa to obtain a gray value for each abscissa; vertical projection accumulates the gray values of pixels with the same ordinate to obtain a gray value for each ordinate.
Then, from the gray projection values obtained by horizontal projection, an abscissa range is determined in which the accumulated gray projection value within the specified horizontal size is smaller than a first energy threshold; and from the gray projection values obtained by vertical projection, an ordinate range is determined in which the accumulated gray projection value within the specified vertical size is smaller than a second energy threshold. After the abscissa range and the ordinate range are determined, the target region can be determined on the first image from them.
Alternatively, when the abscissa range in which the integrated value of the grayscale projection values within the specified horizontal size is smaller than the first energy threshold is determined based on the grayscale projection values obtained by horizontal projection, a one-dimensional horizontal sliding window may be created, the window length corresponding to the specified horizontal size. Then, the horizontal sliding window is used to slide on the gray projection value obtained by horizontal projection. The window interval of two adjacent sliding may be one pixel unit or a plurality of pixel units. An integrated value of the gray projection values within the window is calculated every time sliding occurs. Therefore, after a plurality of sliding, an integrated value of a plurality of gray projection values can be obtained. Then, the sliding position of the sliding window corresponding to the minimum accumulated value can be determined, and the abscissa range can be determined according to the sliding position. Alternatively, the sliding position of the sliding window corresponding to the integrated value smaller than the set threshold value may be determined, and the abscissa range may be determined based on the sliding position.
Similarly, a one-dimensional vertically sliding window may be created, with the window length being consistent with the specified vertical dimension. Then, the vertical sliding window is used to slide on the gray projection value obtained by vertical projection. The window interval of two adjacent sliding may be one pixel unit or a plurality of pixel units. An integrated value of the gray projection values within the window is calculated every time sliding occurs. Therefore, after a plurality of sliding, an integrated value of a plurality of gray projection values can be obtained. Then, the sliding position of the sliding window corresponding to the minimum accumulated value can be determined, and the vertical coordinate range can be determined according to the sliding position. Alternatively, the sliding position of the sliding window corresponding to the integrated value smaller than the set threshold value may be determined, and the vertical coordinate range may be determined based on the sliding position.
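The projection-plus-sliding-window search just described can be sketched as follows; using a one-pixel stride via a full window-sum is an illustrative simplification:

```python
# Project the energy map onto each axis, then find the window with the
# smallest accumulated gray projection value on each axis.
import numpy as np

def min_energy_window(profile, win):
    # profile: 1-D gray projection values; win: specified size in pixels
    sums = np.convolve(profile, np.ones(win), mode="valid")  # window sums
    start = int(np.argmin(sums))         # sliding position of the minimum
    return start, start + win            # coordinate range

def find_target_region(energy, win_w, win_h):
    col_proj = energy.sum(axis=0)        # horizontal projection (per abscissa)
    row_proj = energy.sum(axis=1)        # vertical projection (per ordinate)
    x0, x1 = min_energy_window(col_proj, win_w)   # abscissa range
    y0, y1 = min_energy_window(row_proj, win_h)   # ordinate range
    return x0, y0, x1, y1
```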
Alternatively, the specified horizontal size and the specified vertical size may be directly specified by the user, or determined according to additional information to be added.
In some embodiments, optionally, the sizes of the target area in the horizontal and vertical directions may be determined according to the display requirements of the additional information, and then used as the specified horizontal size and the specified vertical size, respectively. When the additional information is text information, its display requirements may include font size, character spacing, the number of characters in the text, and the like. When the additional information is picture information, its display requirements may include the resolution of the picture information, and the like.
In other exemplary embodiments, the energy distribution characteristics of the first image may be analyzed using an algorithm based on image spectrum analysis. In such an embodiment, the first image may optionally be Fourier transformed to obtain a spectrogram of the first image. In the spectrogram, the brightness of each pixel reflects the degree of difference between that pixel and its neighborhood, i.e., the gradient information of the image. The brighter a pixel, the greater the difference between the pixel and its neighborhood, and the greater the pixel's energy.
After the spectrogram corresponding to the first image is obtained by the image-spectrum-analysis algorithm, optionally, in step 407, the region of the spectrogram with the lowest accumulated luminance, or a region whose accumulated luminance is smaller than a set luminance threshold, may be selected as the target region for displaying the additional information.
In this embodiment, a two-dimensional sliding window may be determined according to the sizes of the target area for adding the additional information in the horizontal and vertical directions, and may be controlled to slide on the spectrogram. At each sliding, an integrated value of the luminance of the pixels within the sliding window is calculated. After obtaining a plurality of luminance integrated values, determining the position of the sliding window corresponding to the minimum luminance integrated value as the position corresponding to the target area; alternatively, the position of the sliding window corresponding to the luminance integrated value smaller than the set luminance threshold is determined as the position corresponding to the target region.
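A sketch of this spectrum-based variant; the log scaling of the magnitude spectrum and the stride are assumptions made for readability and speed:

```python
# Slide a 2-D window over the magnitude spectrum and keep the position with
# the lowest accumulated brightness.
import numpy as np

def min_brightness_window(image_gray, win_w, win_h, stride=8):
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image_gray))))
    H, W = spectrum.shape
    best, best_xy = np.inf, (0, 0)
    for y in range(0, H - win_h + 1, stride):
        for x in range(0, W - win_w + 1, stride):
            s = spectrum[y:y + win_h, x:x + win_w].sum()
            if s < best:                 # minimum luminance accumulated value
                best, best_xy = s, (x, y)
    return best_xy
```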
In step 408, optionally, after the target area is determined, the specified additional information may be added to it to generate the target image. The additional information may be picture information or text information. Fig. 4f shows an example in which the additional information is text; as shown in fig. 4f, an advertisement image can conveniently be generated for the user automatically from the image input by the user and the advertising copy.
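Finally, the drawing step itself can be sketched with Pillow; the font path and size are assumptions:

```python
# Draw the copy text at the top-left corner of the chosen target region.
from PIL import Image, ImageDraw, ImageFont

def add_copy(image_path, text, region, out_path):
    x0, y0, _, _ = region                 # region from the energy analysis
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype("DejaVuSans.ttf", 24)   # assumed font file
    draw.text((x0, y0), text, fill=(255, 255, 255), font=font)
    img.save(out_path)
```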
In this embodiment, when the target image is generated from the first image, the first image may be size-expanded according to its background features, or size-cropped according to the distribution of the subject on it, which helps reduce the probability of losing image information while correcting the size of the first image. In addition, before the additional information is added to the first image, the energy distribution characteristics of the first image may be computed in advance, based on gradient information or on the image spectrum, and the target region for adding the additional information chosen reasonably in light of those characteristics. The target image can thus be generated automatically from the first image and the additional information while effectively reducing the loss of the image information carried by the first image and improving the visual effect of the target image.
The above embodiments of the present application can be applied to a variety of different application scenarios, which will be exemplified below.
In a typical application scenario, a user (e.g., a vendor on an e-commerce platform) may upload a commodity image to an advertisement design platform, and the platform automatically generates a corresponding advertisement image. Based on the technical solution provided by this embodiment, the advertisement design platform can acquire the set advertisement image size in advance. If the commodity image is larger than the advertisement image size, the subject recognition method described in this embodiment may be used to identify the commodity subject on the commodity image, and the commodity image can then be constraint-cropped according to the position of the commodity subject and the advertisement image size. If the commodity image is smaller than the advertisement image size, a background image matching the background of the commodity image and conforming to the advertisement image size can be selected according to the commodity image's background features, and the commodity image composited with it. Once the commodity image conforms to the advertisement image size, its energy distribution characteristics can be computed, the image area with the lowest energy selected, and the advertising copy added to that area, yielding the advertisement image to be output. This realizes an automatic design process for advertisement images and improves the efficiency of advertisement image generation.
Fig. 5a is a schematic flowchart of an image processing method according to another exemplary embodiment of the present application, and as shown in fig. 5a, the method includes:
step 501a, acquiring a second image adapted to the background of the first image according to the background feature of the first image.
Step 502a, performing size correction on the first image according to the second image, so that the size of the corrected first image is in accordance with a target size.
Step 503a, in response to a request for generating a target image according to the first image, acquiring an energy distribution characteristic on the first image.
Step 504a, according to the energy distribution characteristics, adding specified additional information to the target area with energy meeting set energy conditions on the first image to generate the target image.
In this embodiment, when the target image is generated from the first image, the size of the first image may be expanded according to the background features of the first image, so that directly stretching the first image is avoided, the distortion of the first image is effectively reduced, and the display effect of the first image is improved. In addition, before additional information is added to the first image, the energy distribution features of the first image may be computed, and a suitable target region for adding the additional information selected in combination with those features. The target image can thus be generated automatically from the first image and the additional information while the loss of the image information carried by the first image is effectively reduced and the visual effect of the target image is effectively improved.
Fig. 5b is a schematic flowchart of an image processing method according to another exemplary embodiment of the present application, and as shown in fig. 5b, the method includes:
Step 501b, in response to a request for generating a target image according to a first image, identifying a subject object on the first image.
Step 502b, acquiring a second image used for generating the target image and the energy distribution features of the second image.
Step 503b, identifying, according to the energy distribution features of the second image, an image area on the second image whose energy distribution meets a set condition as a target area.
Step 504b, adding the subject object of the first image to the target area to generate the target image.
In this embodiment, the first image and the second image are used to generate the target image. The target image is an image with a certain display effect intended for promotion, advertising, or other visual requirements of the user. In some embodiments, the target image may be implemented as, but is not limited to, a poster image, an advertisement image, a pictorial illustration, an emoticon, a postcard, a greeting card, or another image with a certain sense of design.
The first image may be an image provided by a user; the object mainly represented by the first image may be referred to as a subject object on the first image. For example, in a commercial poster, a commercial article is a subject object; in the person photograph, a person is a subject object; in the architectural landscape, a building is a subject object.
In this embodiment, after the first image is acquired, the main object on the first image may be identified. Optionally, when the subject object on the first image is identified, a subject object identification method based on a neural network may be adopted, or a method for performing subject detection based on image texture features may be adopted, which may specifically refer to the descriptions of the foregoing embodiments and is not described herein again.
The second image is used together with the first image to generate the target image. In general, the second image may form the background portion of the target image to set off the subject object on the first image, although this embodiment is not limited thereto.
In some embodiments, the second image may be an image actively provided by the user. Optionally, in this embodiment, an operation entry icon for uploading pictures may be displayed. When the user triggers the operation entry icon, an image uploading interface can be displayed so that the user can upload the image.
In other embodiments, the second image may be provided by the apparatus that performs the image processing method. In such an embodiment, the apparatus may pre-store a plurality of images for selection by the user, or may download a plurality of images from the network according to the user's requirements for the user to select from.
After the second image is acquired, a region suitable for synthesizing the subject object of the first image may be further determined from it. An image is a carrier of information, and the second image carries image information of its own. In this embodiment, when other images are synthesized onto the second image, an image area may be determined on the second image in light of the distribution of the image information it originally carries, so that the target image synthesized from that area and the subject object on the first image has a better visual display effect.
As can be seen from the description of the foregoing embodiments, image areas containing different image information have different complexity and carry different amounts of energy. Therefore, in this embodiment, the energy distribution features of the second image can be computed, and a suitable image area determined from the second image based on them.
Optionally, in some optional embodiments, when the energy distribution features of the second image are acquired, the subject object on the second image and the edges on the second image may be identified first; then, the gray value of edge pixels is set to 1 and that of non-edge pixels to 0 to obtain a binarized image; next, on the binarized image, the gray values of the pixels corresponding to the subject object of the second image are also set to 1 to obtain the energy distribution map corresponding to the second image. It should be understood that, in the energy distribution map, the gray value of each pixel can be regarded as the energy that pixel carries.
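For illustration only, a minimal Python/OpenCV sketch of this construction is given below. It assumes the subject mask comes from a separate subject-detection step (not shown) and uses the Canny detector as one possible edge detector; the foregoing embodiments equally allow first-order or second-order edge detection operators.

```python
import cv2
import numpy as np

def energy_map(image: np.ndarray, subject_mask: np.ndarray) -> np.ndarray:
    """Binary energy map: 1 on edges and on the subject, 0 elsewhere."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)         # edge pixels -> 255, others -> 0
    energy = (edges > 0).astype(np.uint8)     # edge pixels -> 1, non-edge -> 0
    energy[subject_mask > 0] = 1              # subject pixels also contribute energy
    return energy
```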
For an optional way of identifying the subject object on the second image, reference may be made to the descriptions of the foregoing embodiments, which are not repeated herein. In this embodiment, setting the gray-scale value of the pixel corresponding to the subject object on the second image on the binarized image to 1 is beneficial to improving the forward contribution of the subject object on the second image to the energy distribution characteristics when calculating the energy distribution map of the second image.
After the energy distribution features on the second image are acquired, which part of the area on the second image is suitable for adding the subject object on the first image can be determined by combining the energy distribution features of the second image. In the present embodiment, an image region on the second image suitable for adding the subject object on the first image is described as a target region whose energy satisfies the set energy condition. The set energy condition may be set according to actual requirements in different application scenarios, and this embodiment is not limited.
In some optional embodiments, in order to ensure that the generation of the target image has little influence on the image information originally contained in the second image, the set energy condition that the target area should satisfy may be: in the target area, the accumulated energy of the pixels in the horizontal direction is smaller than a third energy threshold, and the accumulated energy of the pixels in the vertical direction is smaller than a fourth energy threshold. Alternatively, the set energy condition may be: in the target area, the sum of the accumulated energy of the pixels in the horizontal and vertical directions is smaller than a fifth energy threshold. For an alternative implementation of calculating the accumulated energy values, reference may be made to the descriptions in the foregoing embodiments, which are not repeated here.
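A minimal sketch of checking the first condition is given below, under one possible reading in which every per-row and per-column accumulated energy value inside the candidate area must stay below its threshold; the exact accumulation scheme follows the foregoing embodiments and is not limited here.

```python
import numpy as np

def satisfies_energy_condition(energy: np.ndarray, top: int, left: int,
                               h: int, w: int, t3: float, t4: float) -> bool:
    """Check whether a candidate target area meets the set energy condition."""
    patch = energy[top:top + h, left:left + w]
    row_sums = patch.sum(axis=1)   # accumulated energy along the horizontal direction
    col_sums = patch.sum(axis=0)   # accumulated energy along the vertical direction
    return row_sums.max() < t3 and col_sums.max() < t4
```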
In this embodiment, the size of the target area is not limited. Alternatively, the target region may be a maximum image region on the second image whose energy satisfies a set energy condition; alternatively, the target area may be an image area on the second image that meets the size requirement set by the user; alternatively, the target area may be an image area adapted to the size of the subject object on the first image, which includes but is not limited to this embodiment.
When the subject object on the first image is added on the target area, the center of the subject object can be aligned with the center of the target area, and then the image synthesis operation is performed to generate the target image.
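For illustration, the center-aligned synthesis step might be sketched as follows; it assumes the subject (with its mask) fits entirely inside the second image at the chosen position.

```python
import numpy as np

def paste_centered(subject: np.ndarray, subject_mask: np.ndarray,
                   canvas: np.ndarray, center_y: int, center_x: int) -> np.ndarray:
    """Align the subject's center with the target area's center and composite.

    Assumes the subject fits entirely inside the canvas at this position.
    """
    out = canvas.copy()
    sh, sw = subject.shape[:2]
    top, left = center_y - sh // 2, center_x - sw // 2
    roi = out[top:top + sh, left:left + sw]
    roi[subject_mask > 0] = subject[subject_mask > 0]   # copy subject pixels only
    return out
```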
In this embodiment, when the target image is generated from the first image and the second image, the subject object on the first image is identified and the energy distribution features of the second image are obtained; then, in combination with those features, a suitable target region for synthesizing the subject object of the first image is selected from the second image. The target image can thus be generated automatically from the first image and the second image while the loss of the image information carried by the second image is effectively reduced and the visual effect of the target image is effectively improved.
It should be noted that, in the above embodiment, optionally, before the subject object on the first image is added to the target area to generate the target image, the size of the subject object on the first image may be further corrected according to the size of the target area. The method for modifying the size of the subject object on the first image includes, but is not limited to, image scaling, subject object multi-constraint clipping, image boundary filling, and the like, and is not described herein again.
The above embodiments can be applied to a variety of different application scenarios, such as poster automatic design, greeting card automatic design, postcard automatic design, emoticon automatic design, advertisement image automatic design, portrait automatic changing design, and the like, and the embodiments are not limited thereto. This will be exemplified below in connection with fig. 5 c.
Fig. 5c illustrates a typical application scenario of the present embodiment, in which a user may implement automatic portrait changing design through a terminal device.
As shown in fig. 5c, the user can input, through the terminal device, a landscape photo on which there are trees and a person. The terminal device acquires the photo, performs subject identification, and identifies the subject object on it. Then, through interaction with the user, the terminal device may obtain another image provided by the user and compute its energy distribution map by the method described in the foregoing embodiments. From that energy distribution map, the terminal device can determine a lower-energy target area on the other image. Finally, the terminal device can synthesize the person from the landscape photo into the target area of the other image and output the synthesized image; in this way, the person in the landscape photo is automatically placed against the other background.
Fig. 5d is a schematic flowchart of an image processing method according to another exemplary embodiment of the present application, and as shown in fig. 5d, the method includes:
Step 501d, in response to a request for generating a target image according to a first image, identifying a subject object on the first image.
Step 502d, identifying at least one object from the acquired second image.
Step 503d, determining a target object adapted to the subject object from the at least one object.
Step 504d, adding the subject object to the image area associated with the target object to generate the target image.
In this embodiment, a subject identification method based on a neural network or a subject detection method based on image texture features may be adopted to identify the subject object on the first image, which may specifically refer to the descriptions of the foregoing embodiments and is not described herein again.
The second image is used together with the first image to generate the target image. In general, the second image may form the background portion of the target image to set off the subject object on the first image, although this embodiment is not limited thereto. The second image includes at least one object, namely something the second image depicts; depending on the actual form of the second image, this may include, for example, buildings, animals, beaches, lawns, seas, trees, and so on, which this embodiment does not limit. The second image may be provided actively by the user or by the apparatus that performs the image processing method.
To give the target image a better visual display effect, the objects on the second image can be identified and analyzed, and the position at which the subject object of the first image is added to the second image determined from the analysis result. The method for identifying the at least one object included in the second image may be an image object identification algorithm based on a neural network, or a method for detecting image objects based on texture features, which this embodiment does not limit.
The object on the second image that is adapted to the subject object on the first image includes, but is not limited to, one or more of: an object on which the subject object can be placed, an object whose degree of correlation with the subject object is larger than a set threshold, an object that can form a specific composition style with the subject object, and an object that meets the user's preference.
For convenience of description, the present embodiment describes an object acquired from the second image and adapted to the subject object on the first image as a target object. After the target object is acquired, the subject object may be added to an image area associated with the target object to generate a target image.
The image area associated with the target object may include an image area where the target object is located, an image area in a neighborhood of the target object, or an image area partially overlapped with the target object, which is not limited in this embodiment.
In this embodiment, when the target image is generated from the first image and the second image, the subject object on the first image is identified, a target object adapted to the subject object is found on the second image, and the subject object is added to the second image based on the position of the target object, so that automatic composition is realized while the visual effect of the synthesized target image is taken into account.
In some exemplary embodiments, optionally, when at least one object is identified from the acquired second image, the boundaries of different image regions on the second image may be identified first; when these boundaries are identified, edge detection may be performed with multiple edge detection operators (e.g., the first-order and second-order edge detection operators described in the foregoing embodiments), and the contours formed by the edges, according to the distribution of the edges, determined as the boundaries.
After the boundary on the second image is identified, at least one image region on the second image may be acquired based on the boundary of the different image region. Then, the at least one image area can be subjected to image recognition respectively so as to recognize the at least one object. For example, when the second image is a landscape photograph of the sea, objects such as sea water, sand beach, sky, etc. can be recognized from the second image; for another example, when the second image is a landscape photograph of a park, objects such as trees, grasslands, paths, chairs, and the like can be recognized from the second image.
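For illustration only, one way to split the second image into candidate regions along detected edges is sketched below in Python/OpenCV; the classifier applied to each region crop is a separate component and is not shown.

```python
import cv2
import numpy as np

def segment_regions(image: np.ndarray, min_area: int = 500) -> list:
    """Split the second image into candidate regions along detected edges."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    # close small gaps so edge fragments form closed contours
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            x, y, w, h = cv2.boundingRect(c)
            regions.append(image[y:y + h, x:x + w])
    return regions  # each crop would then go to an object classifier
```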
After identifying the at least one object from the second image, a target object that fits the subject object may then be determined from the at least one object. In some exemplary embodiments, optionally, in determining the target object adapted to the subject object from the at least one object, an object for combining with the subject object may be determined as the target object from the at least one object according to a combination rule when different objects are displayed on the image.
The combination rules that hold when different objects are shown together on an image may be obtained in advance, as exemplified below.
In some embodiments, the combination rules of different objects shown on an image may be learned from sample images through a neural network algorithm. In such an embodiment, objects appearing in combination on each sample image may be identified based on the neural network algorithm, and a combination relationship between those objects established. For example, when the combination of a beach and a beach chair is recognized from a plurality of sample images based on the neural network algorithm, a combination relationship between beach and beach chair may be established. For another example, when the combination of a person wearing a swim ring and seawater is identified from a plurality of sample images based on the neural network algorithm, a combination relationship between a person wearing a swim ring and seawater near a beach may be established.
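As a simplified sketch, once a detector (not shown) has produced object labels per sample image, the rule table itself could be no more than a co-occurrence count; the neural-network detection side is assumed, and the data structure below is illustrative only.

```python
from collections import Counter
from itertools import combinations

def learn_combination_rules(sample_labels: list) -> Counter:
    """Count how often object pairs co-occur across sample images.

    `sample_labels` holds, per sample image, the set of object labels an
    assumed detector recognized on it, e.g. {"beach", "beach chair"}.
    """
    pair_counts = Counter()
    for labels in sample_labels:
        for a, b in combinations(sorted(labels), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

rules = learn_combination_rules([
    {"beach", "beach chair", "sky"},
    {"beach", "beach chair", "sea"},
])
# rules[("beach", "beach chair")] == 2
```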
In other embodiments, the combination rule of different objects when displayed on the image can be obtained according to the object combination instruction input by the user. For example, the user may input a combination of a desk and a book, a combination of a dog and a lawn, a combination of a television and a television cabinet, and the like, which is not limited in this embodiment.
In still other embodiments, the combination rule of different objects when displayed on the image can be obtained according to the historical target object which is obtained by the historical image processing process and is matched with the historical subject object. In this embodiment, the history target object adapted to the history subject object determined in each image processing process may be recorded, and the recorded content may be continuously updated as time passes, so as to gradually enrich the above combination rule.
Optionally, in some embodiments, the combination rules of different objects shown on an image represent combination relationships between objects, for example, the combination relationship of beach and beach chair and the combination relationship of desk and book described in the foregoing embodiments.
In other embodiments, the combination rule of different objects when presented on the image represents a combination relationship of an object having a first attribute and an object having a second attribute. For example, the above embodiments describe the combination relationship between a person wearing a swim ring and sea water near the beach. In this embodiment, when a combination rule of different objects displayed on an image is established, the attributes of the objects may be further obtained to obtain an object combination relationship more conforming to an actual scene.
Based on the above, when the target object is determined from the at least one object on the second image, the attributes of the subject object may be further acquired. The attributes of the subject object include at least one of: the user to which the subject object belongs, the expression of the subject object, decorations carried by the subject object, and the action posture of the subject object. Then, according to the attributes of the subject object, the attributes of a target object to be combined with the subject object are determined from the combination rules, and the target object is determined from the at least one object according to those attributes.
For example, suppose the subject object identified on the first image is a user whose attributes include: the user is in a cross-legged sitting posture, and the user is smiling. Assume the objects recognized from the second image include: a lawn, a path, a pine tree, and a puppy. Based on the user's attributes, the attribute the target object should have can be determined to be: providing a surface on which the user can sit cross-legged. On this basis, the lawn may be selected from the identified objects as the target object adapted to the user.
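A minimal sketch of this attribute-based selection is given below; the rule structure (attribute sets mapped to the object label they call for) and every label in the example are illustrative assumptions, not a format defined by this application.

```python
def pick_target_object(subject_attrs: set, rules: dict, detected: list):
    """Pick a detected object whose combination rule fits the subject's attributes.

    `rules` maps a frozenset of subject attributes to the label of the
    object the rule pairs them with (illustrative structure only).
    """
    for attrs, wanted_label in rules.items():
        if attrs <= subject_attrs and wanted_label in detected:
            return wanted_label
    return None

# Example matching the scenario above (all labels are hypothetical):
target = pick_target_object(
    {"sitting cross-legged", "smiling"},
    {frozenset({"sitting cross-legged"}): "grass"},
    ["grass", "path", "pine", "puppy"],
)
assert target == "grass"
```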
After determining the target object, the subject object may be added to the image area associated with the target object on the second image. Optionally, if the second image is a two-dimensional image, the main object is added to an image area associated with the target object, and the layer relationship between the main object and the image area is adjusted.
For example, as shown in fig. 5e, if the subject object is a user in a sitting posture and the target object is a lawn, the user may be added directly onto the lawn, with the layer corresponding to the user adjusted to lie above the layer corresponding to the lawn, producing the picture effect of the user sitting on the grass. For another example, as shown in fig. 5f, if the subject object is a user wearing a swim ring and the target object is seawater, the user wearing the swim ring may be added onto the seawater, with the layer corresponding to the part of the subject below the swim ring adjusted to lie under the layer of the seawater and the layer corresponding to the part above the swim ring adjusted to lie above it, producing the picture effect of the user swimming in the sea with the swim ring.
Optionally, if the second image is a three-dimensional image, the three-dimensional coordinates of the subject object are determined according to the position of the target object, and the subject object is added to the second image according to those coordinates. This is suitable for typical three-dimensional application scenarios such as virtual home decoration, virtual dressing, and virtual travel. Taking virtual home decoration as an example, assume the subject object recognized on the first image is a dining table, and the object on the three-dimensional second image determined to be adapted to it is a set of dining chairs. The three-dimensional coordinates of the dining table can then be calculated from the three-dimensional coordinates of the dining chairs in the second image, and the dining table added to the second image at the calculated coordinates, completing the virtual dining room arrangement; details are not repeated here.
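For the two-dimensional case, the swim-ring example above amounts to splitting the subject's visibility at a row and drawing only the upper part over the background layer; the sketch below is a deliberate simplification of real layer handling, and the split row is an assumed input.

```python
import numpy as np

def composite_with_split(subject: np.ndarray, mask: np.ndarray,
                         background: np.ndarray, split_row: int,
                         top: int, left: int) -> np.ndarray:
    """2D layer sketch: subject pixels above `split_row` (e.g. above the
    swim ring) are drawn over the background layer; pixels below it stay
    hidden behind it, as if under the water layer."""
    out = background.copy()
    sh, sw = subject.shape[:2]
    visible = mask.copy()
    visible[split_row:, :] = 0                  # lower part sits under the water layer
    roi = out[top:top + sh, left:left + sw]
    roi[visible > 0] = subject[visible > 0]
    return out
```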
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may all be the same device, or different devices may serve as the execution subjects of different steps. For example, the execution subject of steps 201 to 204 may be device A; for another example, the execution subject of steps 201 and 202 may be device A while the execution subject of step 203 is device B; and so on.
In addition, some of the flows described in the above embodiments and drawings include a plurality of operations in a specific order, but it should be clearly understood that these operations may be executed out of the order in which they appear herein or in parallel; sequence numbers such as 201 and 202 merely distinguish different operations and do not by themselves represent any execution order. The flows may also include more or fewer operations, and those operations may be executed sequentially or in parallel. It should also be noted that the descriptions of "first", "second", and the like herein distinguish different messages, devices, modules, and so on; they do not represent a sequential order, nor do they require that the "first" and "second" items be of different types.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present application, and as shown in fig. 6, the image processing apparatus includes: memory 601, processor 602.
The memory 601 is used for storing a computer program and may be configured to store other various data to support operations on the image processing apparatus. Examples of such data include instructions for any application or method operating on the image processing device, contact data, phonebook data, messages, pictures, videos, and so forth.
A processor 602, coupled to the memory 601, for executing the computer programs in the memory 601 to: acquiring background features of a first image; acquiring a second image matched with the background of the first image according to the background feature; and correcting the size of the first image according to the second image so as to enable the size of the corrected first image to accord with the target size.
Further optionally, when obtaining the background feature of the first image, the processor 602 is specifically configured to: identifying a subject object on the first image and removing the subject object; performing image completion operation on an image area of the main object corresponding to the first image to obtain a background image of the first image; at least one of a style feature, a color feature, and a texture feature is extracted from a background image of the first image.
Further optionally, the processor 602, when extracting style features from an image region of the first image other than the subject object, is specifically configured to: inputting the background image into a neural network model so that the neural network model extracts the image characteristics of the background image and outputs the style category to which the background image belongs according to the image characteristics; and determining the style description label of the background image according to the style category to which the background image belongs.
Further optionally, when acquiring the second image adapted to the background of the first image according to the background feature, the processor 602 is specifically configured to: for any candidate image among a plurality of candidate images, acquire at least one of a style feature, a color feature, and a texture feature of the candidate image; calculate the similarity of the background image and the candidate image in at least one of the style, color, and texture features; and if the similarity meets a set similarity condition, determine the candidate image as the second image adapted to the background of the first image.
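As one illustrative instance of such a similarity term, a color-feature comparison could use histogram correlation, as sketched below; the style and texture terms, the fusion of the terms, and the threshold value are all left open by this embodiment.

```python
import cv2
import numpy as np

def color_similarity(bg: np.ndarray, candidate: np.ndarray) -> float:
    """One possible similarity term: correlation of HSV color histograms."""
    def hist(img):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        return cv2.normalize(h, h).flatten()
    return cv2.compareHist(hist(bg), hist(candidate), cv2.HISTCMP_CORREL)

def pick_second_image(bg: np.ndarray, candidates: list, threshold: float = 0.8):
    """Return the most similar candidate if it passes the set condition."""
    scored = [(color_similarity(bg, c), c) for c in candidates]
    best_score, best = max(scored, key=lambda s: s[0])
    return best if best_score >= threshold else None
```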
Further optionally, when performing the size correction on the first image according to the second image, the processor 602 is specifically configured to: if the size of the second image accords with the target size, the first image is taken as a foreground image on the second image, and the first image and the second image are synthesized; and smoothing the boundary of the first image and the second image.
Further optionally, when performing the size correction on the first image according to the second image, the processor 602 is specifically configured to: calculate the size deviation of the first image from the target size; cut an image block matching the size deviation from the second image; and splice the image block onto the first image so that the size of the spliced first image conforms to the target size.
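For illustration, the width-only case of this block-stitching step might look as follows (the height case is symmetric); it assumes the second image is at least as tall as the first and at least as wide as the deficit.

```python
import numpy as np

def pad_with_block(first: np.ndarray, second: np.ndarray,
                   target_w: int) -> np.ndarray:
    """Widen the first image to target_w by stitching on a block cut
    from the second image (assumes first is narrower than target_w)."""
    h, w = first.shape[:2]
    deficit = target_w - w                      # size deviation from the target
    block = second[:h, :deficit]                # cut a block matching the deviation
    return np.hstack([first, block])            # splice to reach the target size
```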
Further optionally, the first image comprises: a commercial poster to be displayed; and the target size comprises: the size of the display area of the commercial poster.
Further, as shown in fig. 6, the image processing apparatus further includes: communication component 603, display 604, power component 605, audio component 606, and the like. Only some of the components are schematically shown in fig. 6, and it is not intended that the image processing apparatus includes only the components shown in fig. 6.
In this embodiment, when the first image does not satisfy the target size, the second image adapted to the background of the first image may be acquired based on the background feature of the first image, and the size of the first image may be corrected according to the second image. In the embodiment, the operation of directly performing size stretching on the first image is avoided, the distortion degree of the first image can be effectively reduced, and the display effect of the first image is favorably improved.
Accordingly, the present application also provides a computer readable storage medium storing a computer program, and the computer program can implement the steps that can be executed by the image processing apparatus in the method embodiments described above when executed.
Fig. 7 illustrates a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present application. As shown in fig. 7, the image processing apparatus includes: a memory 701 and a processor 702.
A memory 701 for storing a computer program and may be configured to store other various data to support operations on the image processing apparatus. Examples of such data include instructions for any application or method operating on the image processing device, contact data, phonebook data, messages, pictures, videos, and so forth.
A processor 702, coupled to the memory 701, for executing the computer program in the memory 701 for: responding to a request for generating a target image according to a first image, and acquiring energy distribution characteristics on the first image; determining an image area with energy meeting a set energy condition on the first image as a target area according to the energy distribution characteristics; adding specified additional information on the target area to generate the target image.
Further optionally, when acquiring the energy distribution feature on the first image, the processor 702 is specifically configured to: identifying a subject object on the first image and an edge on the first image; setting the gray value of the pixel on the edge as 1, and setting the gray value of the pixel on the non-edge as 0 to obtain a binary image; and setting the gray value of a pixel corresponding to the main object on the binary image as 1 to obtain an energy distribution map corresponding to the first image.
Further optionally, when determining, according to the energy distribution feature, that an image area on the first image, where energy meets a set energy condition, is a target area, the processor 702 is specifically configured to: establishing a coordinate system corresponding to the energy distribution diagram by taking the pixel as a unit; performing horizontal projection and vertical projection on the gray value of the pixel on the energy distribution map to obtain a gray projection value in the horizontal direction and a gray projection value in the vertical direction respectively; determining an abscissa range in which the accumulated value of the gray projection values within the specified horizontal size is smaller than a first energy threshold according to the gray projection value obtained by horizontal projection; determining a vertical coordinate range in which the accumulated value of the gray projection values in the specified vertical size is smaller than a second energy threshold according to the gray projection value obtained by vertical projection; and determining the target area on the first image according to the abscissa range and the ordinate range.
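A minimal sketch of this projection-based search is given below, under one reading in which the horizontal projection yields one value per column and the vertical projection one value per row; the thresholds and window sizes are inputs, and only the first qualifying window is returned for brevity.

```python
import numpy as np

def find_target_area(energy: np.ndarray, win_w: int, win_h: int,
                     t1: float, t2: float):
    """Gray-projection search for a low-energy target area."""
    col_proj = energy.sum(axis=0)   # horizontal projection: one value per column
    row_proj = energy.sum(axis=1)   # vertical projection: one value per row
    # accumulated projection values inside sliding windows of the given sizes
    col_acc = np.convolve(col_proj, np.ones(win_w), mode="valid")
    row_acc = np.convolve(row_proj, np.ones(win_h), mode="valid")
    xs = np.where(col_acc < t1)[0]  # candidate abscissa range starts
    ys = np.where(row_acc < t2)[0]  # candidate ordinate range starts
    if xs.size == 0 or ys.size == 0:
        return None
    x, y = int(xs[0]), int(ys[0])
    return (x, y, win_w, win_h)     # target area: top-left corner plus size
```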
Further optionally, the processor 702 is further configured to: determine the sizes of the target area in the horizontal and vertical directions according to the display requirements of the additional information; and take those sizes as the specified horizontal size and the specified vertical size, respectively.
Further optionally, the processor 702 is further configured to: if the size of the first image is larger than the target size, identify the subject object on the first image; and crop the first image with the image area corresponding to the subject object and the target size as cropping constraints.
Further optionally, the processor 702 is further configured to: if the size of the first image is smaller than the target size, acquiring background features of the first image; acquiring a second image matched with the background of the first image according to the background feature; and correcting the size of the first image according to the second image so as to enable the size of the corrected first image to accord with the target size.
Further optionally, the first image comprises: an advertisement background image; the specified additional information comprises: copy information to be advertised; and the target image comprises: an advertisement image.
Further, as shown in fig. 7, the image processing apparatus further includes: communication components 703, display 704, power components 705, audio components 706, and other components. Only some of the components are schematically shown in fig. 7, and it is not intended that the image processing apparatus includes only the components shown in fig. 7.
In this embodiment, before adding the additional information to the first image, the energy distribution characteristics on the first image are calculated in advance, and the target region for adding the additional information is selected appropriately in combination with the energy distribution characteristics. Furthermore, the target image can be automatically generated according to the first image and the additional information, meanwhile, the loss caused by the image information carried by the first image is effectively reduced, and the visual effect of the target image is effectively improved.
Accordingly, the present application also provides a computer readable storage medium storing a computer program, and the computer program can implement the steps that can be executed by the image processing apparatus in the method embodiments described above when executed.
Fig. 8 illustrates a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present application. As shown in fig. 8, the image processing apparatus includes: a memory 801 and a processor 802.
A memory 801 for storing a computer program and may be configured to store other various data to support operations on the image processing apparatus. Examples of such data include instructions for any application or method operating on the image processing device, contact data, phonebook data, messages, pictures, videos, and so forth.
A processor 802, coupled to the memory 801, for executing computer programs in the memory 801 for: acquiring a second image matched with the background of the first image according to the background characteristics of the first image; correcting the size of the first image according to the second image so as to enable the size of the corrected first image to accord with the target size; responding to a request for generating a target image according to the first image, and acquiring energy distribution characteristics on the first image; according to the energy distribution characteristic, adding specified additional information on a target area with energy meeting a set energy condition on the first image to generate the target image.
Further, as shown in fig. 8, the image processing apparatus further includes: communication component 803, display 804, power component 805, audio component 806, and other components. Only some of the components are schematically shown in fig. 8, and it is not intended that the image processing apparatus includes only the components shown in fig. 8.
In this embodiment, when the target image is generated according to the first image, the size of the first image may be expanded according to the background feature of the first image, so that the operation of directly performing size stretching on the first image is avoided, the distortion of the first image may be effectively reduced, and the display effect of the first image may be improved. In addition, before the additional information is added to the first image, the energy distribution characteristics on the first image may be calculated in advance, and in combination with the energy distribution characteristics, a target region for adding the additional information may be appropriately selected. Furthermore, the target image can be automatically generated according to the first image and the additional information, meanwhile, the loss caused by the image information carried by the first image is effectively reduced, and the visual effect of the target image is effectively improved.
Accordingly, the present application also provides a computer readable storage medium storing a computer program, and the computer program can implement the steps that can be executed by the image processing apparatus in the method embodiments described above when executed.
Fig. 9 illustrates a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present application. As shown in fig. 9, the image processing apparatus includes: a memory 901 and a processor 902.
A memory 901 for storing a computer program and may be configured to store other various data to support operations on the image processing apparatus. Examples of such data include instructions for any application or method operating on the image processing device, contact data, phonebook data, messages, pictures, videos, and so forth.
A processor 902, coupled to the memory 901, for executing the computer program in the memory 901 for: identifying a subject object on a first image in response to a request to generate a target image from the first image; acquiring a second image used for generating the target image and energy distribution characteristics of the second image; according to the energy distribution characteristics of the second image, identifying an image area on the second image, of which the energy distribution meets set conditions, as a target area; adding a subject object on the first image on the target area to generate the target image.
Further optionally, when acquiring the second image used for generating the target image and the energy distribution feature of the second image, the processor 902 is specifically configured to: identifying a subject object on the second image and an edge on the second image; setting the gray value of the pixel on the edge as 1, and setting the gray value of the pixel on the non-edge as 0 to obtain a binary image; and setting the gray value of a pixel corresponding to the main object on the second image on the binary image as 1 to obtain an energy distribution map corresponding to the second image.
Further optionally, the processor 902, before adding the subject object on the first image on the target area to generate the target image, is further configured to: and correcting the size of the main object on the first image according to the size of the target area.
Further, as shown in fig. 9, the image processing apparatus further includes: communication component 903, display 904, power component 905, audio component 906, and the like. Only some of the components are schematically shown in fig. 9, and it is not intended that the image processing apparatus includes only the components shown in fig. 9.
In this embodiment, when the target image is generated from the first image and the second image, the subject object on the first image is identified and the energy distribution features of the second image are obtained; then, in combination with those features, a suitable target region for synthesizing the subject object of the first image is selected from the second image. The target image can thus be generated automatically from the first image and the second image while the loss of the image information carried by the second image is effectively reduced and the visual effect of the target image is effectively improved.
Accordingly, the present application also provides a computer readable storage medium storing a computer program, and the computer program can implement the steps that can be executed by the image processing apparatus in the method embodiments described above when executed.
Fig. 10 illustrates a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present application. As shown in fig. 10, the image processing apparatus includes: a memory 1001 and a processor 1002.
A memory 1001 for storing a computer program and may be configured to store other various data to support operations on the image processing apparatus. Examples of such data include instructions for any application or method operating on the image processing device, contact data, phonebook data, messages, pictures, videos, and so forth.
A processor 1002, coupled to the memory 1001, for executing the computer programs in the memory 1001 to: identifying a subject object on a first image in response to a request to generate a target image from the first image; identifying at least one object from the acquired second image; determining a target object adapted to the subject object from the at least one object; adding the subject object on an image area associated with the target object to generate the target image.
Further optionally, when identifying at least one object from the acquired second image, the processor 1002 is specifically configured to: identifying boundaries of different image regions on the second image; acquiring at least one image area on the second image according to the boundary of the different image areas; and performing image recognition on the at least one image area to identify the at least one object.
Further optionally, when determining the target object adapted to the subject object from the at least one object, the processor 1002 is specifically configured to: and determining an object used for being combined with the main object from the at least one object as the target object according to a combination rule when different objects are displayed on the image.
Further optionally, when determining, from the at least one object and according to the combination rules of different objects shown on an image, the object to be combined with the subject object as the target object, the processor 1002 is specifically configured to: acquire the attributes of the subject object, where the attributes of the subject object include at least one of: the user to which the subject object belongs, the expression of the subject object, decorations carried by the subject object, and the action posture of the subject object; determine, from the combination rules and according to the attributes of the subject object, the attributes of a target object to be combined with the subject object; and determine the target object from the at least one object according to the attributes of the target object.
Further optionally, the processor 1002 is further configured to: learning a combination rule of the different objects when the different objects are displayed on the image from the sample image through a neural network algorithm; and/or acquiring a combination rule of the different objects when the different objects are displayed on the image according to an object combination instruction input by a user; and/or acquiring a combination rule of different objects when the different objects are displayed on the image according to a historical target object which is acquired in the historical image processing process and is matched with the historical main object.
Further optionally, when the processor 1002 adds the subject object to the image area associated with the target object to generate the target image, specifically: if the second image is a two-dimensional image, adding the main body object to an image area associated with the target object, and adjusting the layer relation between the main body object and the image area; and if the second image is a three-dimensional image, determining the three-dimensional coordinates of the main body object according to the position of the target object, and adding the main body object on the second image according to the three-dimensional coordinates.
Further, as shown in fig. 10, the image processing apparatus further includes: communication component 1003, display 1004, power component 1005, audio component 1006, and other components. Only some of the components are schematically shown in fig. 10, and it is not intended that the image processing apparatus includes only the components shown in fig. 10.
In this embodiment, when the target image is generated from the first image and the second image, the subject object on the first image is identified, a target object adapted to the subject object is found on the second image, and the subject object is added to the second image based on the position of the target object, so that automatic composition is realized while the visual effect of the synthesized target image is taken into account.
Accordingly, the present application also provides a computer readable storage medium storing a computer program, and the computer program can implement the steps that can be executed by the image processing apparatus in the method embodiments described above when executed.
The memories of fig. 6, 7, 8, 9, and 10 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The communication components of fig. 6, 7, 8, 9, and 10 described above are configured to facilitate communication between the device in which the communication component is located and other devices in a wired or wireless manner. The device in which the communication component is located may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The displays in fig. 6, 7, 8, 9, and 10 described above include screens, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The power supply modules of fig. 6, 7, 8, 9 and 10 provide power to various modules of the device in which the power supply module is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (26)

1. An image processing method, comprising:
acquiring background features of a first image;
acquiring a second image matched with the background of the first image according to the background feature;
and correcting the size of the first image according to the second image so as to enable the size of the corrected first image to accord with the target size.
2. The method of claim 1, wherein obtaining the background feature of the first image comprises:
identifying a subject object on the first image and removing the subject object;
performing image completion operation on an image area of the main object corresponding to the first image to obtain a background image of the first image;
and extracting at least one of a style feature, a color feature, and a texture feature from a background image of the first image.
3. The method of claim 2, wherein extracting style features from image regions on the first image other than the subject object comprises:
inputting the background image into a neural network model so that the neural network model extracts the image characteristics of the background image and outputs the style category to which the background image belongs according to the image characteristics;
and determining the style description label of the background image according to the style category to which the background image belongs.
4. The method according to any one of claims 2-3, wherein obtaining a second image adapted to the background of the first image according to the background feature comprises:
aiming at any candidate image in a plurality of candidate images, at least one of style characteristics, color characteristics and texture characteristics of the candidate image is obtained;
calculating the similarity of the background image and the candidate image on at least one of style characteristics, color characteristics and texture characteristics;
and if the similarity meets the set similarity condition, determining the candidate image as a second image matched with the background of the first image.
5. The method of any of claims 1-3, wherein performing a size correction on the first image based on the second image comprises:
if the size of the second image accords with the target size, the first image is taken as a foreground image on the second image, and the first image and the second image are synthesized;
and smoothing the boundary of the first image and the second image.
6. The method of any of claims 1-3, wherein performing a size correction on the first image based on the second image comprises:
calculating the size deviation of the first image according to the target size;
cutting an image block matched with the size deviation from the second image;
and splicing the image block onto the first image so that the size of the spliced first image conforms to the target size.
7. The method of any of claims 1-3, wherein the first image comprises: a commercial poster to be displayed; and the target size comprises: the size of the display area of the commercial poster.
8. An image processing method, comprising:
responding to a request for generating a target image according to a first image, and acquiring energy distribution characteristics on the first image;
determining an image area with energy meeting a set energy condition on the first image as a target area according to the energy distribution characteristics;
adding specified additional information on the target area to generate the target image.
9. The method of claim 8, wherein acquiring the energy distribution feature of the first image comprises:
identifying a subject object on the first image and the edges on the first image;
setting the gray value of edge pixels to 1 and the gray value of non-edge pixels to 0, to obtain a binary image;
and setting the gray value of the pixels corresponding to the subject object on the binary image to 1, to obtain an energy distribution map corresponding to the first image.
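Illustration (not part of the claims): a sketch of claim 9's energy distribution map, with Canny standing in for the unspecified edge detector and the subject mask assumed to come from a segmentation step.

import cv2
import numpy as np

def energy_map(image, subject_mask):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)         # edge pixels are 255, others 0
    energy = (edges > 0).astype(np.uint8)     # edge -> 1, non-edge -> 0
    energy[subject_mask > 0] = 1              # subject pixels also get 1
    return energy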
10. The method of claim 9, wherein determining, according to the energy distribution feature, an image area on the first image whose energy meets the set energy condition as the target area comprises:
establishing a pixel-based coordinate system for the energy distribution map;
performing horizontal projection and vertical projection on the pixel gray values of the energy distribution map to obtain gray projection values in the horizontal and vertical directions, respectively;
determining, from the gray projection values obtained by horizontal projection, an abscissa range in which the accumulated gray projection value within a specified horizontal size is smaller than a first energy threshold;
determining, from the gray projection values obtained by vertical projection, an ordinate range in which the accumulated gray projection value within a specified vertical size is smaller than a second energy threshold;
and determining the target area on the first image according to the abscissa range and the ordinate range.
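Illustration (not part of the claims): the projections in claim 10 reduce to sums along rows and columns, and the windowed accumulation to a convolution with a box filter; a sketch under that reading, with placeholder thresholds.

import numpy as np

def low_energy_starts(energy, window, threshold, axis):
    # axis=0: per-column sums -> candidate abscissa ranges;
    # axis=1: per-row sums -> candidate ordinate ranges.
    projection = energy.sum(axis=axis)
    # Accumulated projection value over every window of the specified size.
    accumulated = np.convolve(projection, np.ones(window, dtype=int), mode="valid")
    return np.where(accumulated < threshold)[0]

def target_areas(energy, area_w, area_h, x_threshold, y_threshold):
    xs = low_energy_starts(energy, area_w, x_threshold, axis=0)
    ys = low_energy_starts(energy, area_h, y_threshold, axis=1)
    # Each (x, y) is the top-left corner of a candidate low-energy area.
    return [(int(x), int(y)) for x in xs for y in ys]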
11. The method of claim 10, further comprising:
determining the sizes of the target area in the horizontal and vertical directions according to the display requirements of the additional information;
and taking the sizes of the target area in the horizontal and vertical directions as the specified horizontal size and the specified vertical size, respectively.
12. The method according to any one of claims 8-11, further comprising:
if the size of the first image is larger than a target size, identifying a subject object on the first image;
and cropping the first image, with the image area corresponding to the subject object and the target size as cropping constraints.
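Illustration (not part of the claims): a sketch of the constrained crop in claim 12 — keep the subject bounding box inside a target-size window by centering on the subject and clamping to the image borders. It assumes the target size can contain the subject.

def crop_with_constraints(image, subject_box, target_w, target_h):
    ih, iw = image.shape[:2]
    x0, y0, x1, y1 = subject_box
    # Center the crop window on the subject object...
    left = (x0 + x1) // 2 - target_w // 2
    top = (y0 + y1) // 2 - target_h // 2
    # ...then clamp it so the window stays inside the image.
    left = max(0, min(left, iw - target_w))
    top = max(0, min(top, ih - target_h))
    return image[top:top + target_h, left:left + target_w]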
13. The method of claim 12, further comprising:
if the size of the first image is smaller than the target size, acquiring background features of the first image;
acquiring, according to the background features, a second image that matches the background of the first image;
and correcting the size of the first image according to the second image, so that the size of the corrected first image conforms to the target size.
14. The method of any of claims 8-11, wherein the first image comprises an advertisement background image, the specified additional information comprises the copy information to be advertised, and the target image comprises an advertisement image.
15. An image processing method, comprising:
acquiring, according to background features of a first image, a second image that matches the background of the first image;
correcting the size of the first image according to the second image, so that the size of the corrected first image conforms to a target size;
in response to a request to generate a target image from the first image, acquiring an energy distribution feature of the first image;
and adding, according to the energy distribution feature, specified additional information to a target area on the first image whose energy meets a set energy condition, to generate the target image.
16. An image processing method, comprising:
in response to a request to generate a target image from a first image, identifying a subject object on the first image;
acquiring a second image used to generate the target image, and an energy distribution feature of the second image;
identifying, according to the energy distribution feature of the second image, an image area on the second image whose energy distribution meets a set condition as a target area;
and adding the subject object from the first image to the target area to generate the target image.
17. The method of claim 16, wherein acquiring the second image used to generate the target image and the energy distribution feature of the second image comprises:
identifying a subject object on the second image and the edges on the second image;
setting the gray value of edge pixels to 1 and the gray value of non-edge pixels to 0, to obtain a binary image;
and setting the gray value of the pixels corresponding to the subject object of the second image on the binary image to 1, to obtain an energy distribution map corresponding to the second image.
18. The method of claim 16 or 17, further comprising, before adding the subject object from the first image to the target area to generate the target image:
correcting the size of the subject object on the first image according to the size of the target area.
19. An image processing method, comprising:
in response to a request to generate a target image from a first image, identifying a subject object on the first image;
identifying at least one object from an acquired second image;
determining, from the at least one object, a target object that fits the subject object;
and adding the subject object to an image area associated with the target object to generate the target image.
20. The method of claim 19, wherein identifying at least one object from the acquired second image comprises:
identifying the boundaries of different image areas on the second image;
acquiring at least one image area on the second image according to those boundaries;
and performing image recognition on the at least one image area to identify the at least one object.
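Illustration (not part of the claims): a sketch of claim 20 with edge contours approximating the boundaries between image areas. The `classify` callable stands in for whatever recognition model is used and is an assumption, as is the minimum-area cutoff.

import cv2

def identify_objects(second_image, classify, min_area=400):
    gray = cv2.cvtColor(second_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    # Contours approximate the boundaries between different image areas.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    objects = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h < min_area:          # skip tiny fragments (arbitrary cutoff)
            continue
        region = second_image[y:y + h, x:x + w]
        objects.append((classify(region), (x, y, w, h)))
    return objects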
21. The method of claim 19, wherein determining, from the at least one object, a target object that fits the subject object comprises:
determining, from the at least one object and according to a combination rule for displaying different objects on an image, an object to be combined with the subject object as the target object.
22. The method according to claim 21, wherein determining, from the at least one object and according to the combination rule for displaying different objects on an image, an object to be combined with the subject object as the target object comprises:
acquiring attributes of the subject object, the attributes comprising at least one of: the user to which the subject object belongs, the expression of the subject object, the decorations carried by the subject object, and the action posture of the subject object;
determining, from the combination rule and according to the attributes of the subject object, the attributes of a target object to be combined with the subject object;
and determining the target object from the at least one object according to the attributes of the target object.
23. The method of claim 21 or 22, further comprising:
learning, through a neural network algorithm and from sample images, the combination rule for displaying the different objects on an image; and/or
acquiring the combination rule for displaying the different objects on an image according to an object combination instruction input by a user; and/or
acquiring the combination rule for displaying the different objects on an image according to historical target objects that were matched with historical subject objects during past image processing.
24. The method of any one of claims 19-22, wherein adding the subject object to an image area associated with the target object to generate the target image comprises:
if the second image is a two-dimensional image, adding the subject object to the image area associated with the target object and adjusting the layer relationship between the subject object and that image area;
and if the second image is a three-dimensional image, determining the three-dimensional coordinates of the subject object according to the position of the target object and adding the subject object to the second image according to those coordinates.
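Illustration (not part of the claims): for the two-dimensional branch of claim 24, the "layer relationship" can be read as alpha-over compositing with the subject drawn above the associated area; a sketch under that reading, assuming the subject carries an alpha channel.

import numpy as np

def add_subject_2d(second_image, subject_rgba, x, y):
    h, w = subject_rgba.shape[:2]
    alpha = subject_rgba[..., 3:4].astype(np.float32) / 255.0
    roi = second_image[y:y + h, x:x + w].astype(np.float32)
    # Alpha-over: the subject layer sits above the associated image area.
    over = alpha * subject_rgba[..., :3].astype(np.float32) + (1.0 - alpha) * roi
    second_image[y:y + h, x:x + w] = over.astype(np.uint8)
    return second_image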
25. An image processing apparatus, comprising: a memory and a processor;
the memory is configured to store one or more computer instructions;
and the processor is configured to execute the one or more computer instructions to perform the steps of the image processing method of any one of claims 1-24.
26. A computer-readable storage medium storing a computer program, wherein the computer program, when executed, implements the steps of the image processing method according to any one of claims 1-24.
CN201910846895.3A 2019-09-02 2019-09-02 Image processing method, apparatus and storage medium Pending CN112529765A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910846895.3A CN112529765A (en) 2019-09-02 2019-09-02 Image processing method, apparatus and storage medium

Publications (1)

Publication Number Publication Date
CN112529765A 2021-03-19

Family

ID=74974166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910846895.3A Pending CN112529765A (en) 2019-09-02 2019-09-02 Image processing method, apparatus and storage medium

Country Status (1)

Country Link
CN (1) CN112529765A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105427242A * 2015-10-28 2016-03-23 Nanjing Tech University Content-perception-based adaptive image scaling method with interactive mesh-constrained deformation
CN107092684A * 2017-04-21 2017-08-25 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and device, storage medium
CN107330885A * 2017-07-07 2017-11-07 Guangxi University Multi-operator image retargeting method that preserves the aspect ratio of important content regions
CN107423409A * 2017-07-28 2017-12-01 Vivo Mobile Communication Co., Ltd. Image processing method, image processing apparatus and electronic device
CN108550101A * 2018-04-19 2018-09-18 Tencent Technology (Shenzhen) Co., Ltd. Image processing method, device and storage medium
CN108564527A * 2018-04-04 2018-09-21 Baidu Online Network Technology (Beijing) Co., Ltd. Method and device for neural-network-based panorama content completion and repair
CN109035288A * 2018-07-27 2018-12-18 Beijing SenseTime Technology Development Co., Ltd. Image processing method and device, equipment and storage medium
CN109360159A * 2018-09-07 2019-02-19 South China University of Technology Image completion method based on a generative adversarial network model
CN110060203A * 2019-04-22 2019-07-26 BOE Technology Group Co., Ltd. Image display method, image display apparatus, electronic equipment and storage medium
CN110136052A * 2019-05-08 2019-08-16 Beijing SenseTime Technology Development Co., Ltd. Image processing method, device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG, Lujing et al.: "Intelligent Image Processing and Applications" (智能图像处理及应用), China Railway Publishing House, page 98 *

Similar Documents

Publication Publication Date Title
US11595737B2 (en) Method for embedding advertisement in video and computer device
US10839573B2 (en) Apparatus, systems, and methods for integrating digital media content into other digital media content
US10255681B2 (en) Image matting using deep learning
CN106778928B (en) Image processing method and device
US8913847B2 (en) Replacement of a person or object in an image
KR101605983B1 (en) Image recomposition using face detection
US20110305397A1 (en) Systems and methods for retargeting an image utilizing a saliency map
US20120327172A1 (en) Modifying video regions using mobile device input
US10540568B2 (en) System and method for coarse-to-fine video object segmentation and re-composition
WO2014056112A1 (en) Intelligent video thumbnail selection and generation
US11308628B2 (en) Patch-based image matting using deep learning
CN108111911B (en) Video data real-time processing method and device based on self-adaptive tracking frame segmentation
CN111145308A (en) Paster obtaining method and device
CN108830175A (en) Iris image local enhancement methods, device, equipment and storage medium
CN113516666A (en) Image cropping method and device, computer equipment and storage medium
CN113627402A (en) Image identification method and related device
CN105447846B (en) Image processing method and electronic equipment
CN112529765A (en) Image processing method, apparatus and storage medium
CN114760517B (en) Image moving embedding method and device, equipment, medium and product thereof
CN113870313A (en) Action migration method
Nachlieli et al. Skin-sensitive automatic color correction
Tanaka et al. Iterative applications of image completion with CNN-based failure detection
CN107945201B (en) Video landscape processing method and device based on self-adaptive threshold segmentation
GB2585722A (en) Image manipulation
KR102371145B1 (en) Apparatus for Image Synthesis and Driving Method Thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination