WO2022052509A1 - Image generation method and apparatus - Google Patents

Image generation method and apparatus

Info

Publication number
WO2022052509A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
composition
layer
primitive
scene construction
Application number
PCT/CN2021/095458
Other languages
English (en)
French (fr)
Inventor
王晨宇
Original Assignee
北京沃东天骏信息技术有限公司
北京京东世纪贸易有限公司
Application filed by 北京沃东天骏信息技术有限公司, 北京京东世纪贸易有限公司
Priority to EP21865570.2A (EP4213097A1)
Priority to US18/245,081 (US20230401763A1)
Publication of WO2022052509A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/001 - Texturing; Colouring; Generation of texture or colour
    • G06T 11/60 - Editing figures and text; Combining figures or text
    • G06T 7/00 - Image analysis
    • G06T 7/90 - Determination of colour characteristics
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/56 - Extraction of image or video features relating to colour
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 - Proximity, similarity or dissimilarity measures

Definitions

  • the embodiments of the present application relate to the field of computer technologies, and in particular, to an image generation method and apparatus.
  • a machine image combination engine is generally used to generate images.
  • The methods of generating images with a machine image combination engine generally include: 1. based on machine learning, training an image generation model with a large number of samples and generating images through the image generation model; 2. establishing some simple knowledge rules and generating a large number of images by deduction under those rules.
  • the embodiments of the present application provide an image generation method and apparatus.
  • In a first aspect, an embodiment of the present application provides an image generation method, including: performing image composition based on acquired image generation conditions, determining the primitive types and layout information of the image, and obtaining a composition image; performing scene construction on the composition image according to scene construction information corresponding to the image generation conditions, determining the primitive corresponding to each of the plurality of primitive types in the composition image, and obtaining a scene construction image; adjusting the color of each primitive in the scene construction image according to color matching information corresponding to the image generation conditions, to obtain a color matching image; and rendering the color matching image to obtain a target image.
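  • For orientation, the four stages of the first aspect can be read as a simple pipeline. The sketch below is only illustrative: every function name, dictionary key and data shape in it is a hypothetical placeholder, not the implementation claimed here.

```python
# A minimal, illustrative sketch of the four stages named above.
# All names and data shapes are hypothetical placeholders.

def compose(conditions):
    # Stage 1: decide primitive types and their layout from the conditions.
    width, height = conditions["size"]
    return {"primitives": {"background": {"box": (0, 0, width, height)},
                           "text": {"box": (40, 40, width // 2, 120)}}}

def construct_scene(composition, conditions):
    # Stage 2: bind every primitive type to a concrete primitive (an asset name here).
    for name, spec in composition["primitives"].items():
        spec["asset"] = f"{conditions['theme']}/{name}.png"
    return composition

def match_colors(scene, conditions):
    # Stage 3: attach a colour to every primitive according to colour-matching info.
    for spec in scene["primitives"].values():
        spec["color"] = conditions.get("base_color", (255, 255, 255))
    return scene

def render(scene):
    # Stage 4: rendering is represented here by simply returning the description.
    return scene

conditions = {"theme": "annual_meeting", "size": (800, 600), "base_color": (230, 40, 40)}
target_image = render(match_colors(construct_scene(compose(conditions), conditions), conditions))
```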
  • In some embodiments, performing image composition based on the acquired image generation conditions, determining the primitive types and layout information of the image, and obtaining a composition image includes: based on the acquired image generation conditions, using a preset composition knowledge graph to perform image composition at multiple levels from parts to the whole, determining the primitive types and layout information in each layer of the image, and obtaining a multi-layer composition image that represents the composition information of each level and is nested layer by layer. Layer-by-layer nesting means that each layer of the multi-layer composition image nests a lower-layer composition image which treats some of the primitive types of that layer as a whole.
  • In some embodiments, using the preset composition knowledge graph to perform image composition at multiple levels from parts to the whole, determining the primitive types and layout information in each layer of the image, and obtaining the layer-by-layer nested multi-layer composition image that represents the composition information of each level includes: for each layer of the multi-layer composition image, performing the following operations: in the composition process of this layer, for each of the plurality of composition parameters determined based on the image generation conditions and the preset knowledge graph, determining the parameter value of the composition parameter through the judgment information corresponding to the composition parameter, where the composition parameters include operation parameters for each primitive type in the composition image of this layer and calling parameters for the lower-layer composition image; and obtaining the composition image of this layer based on each parameter value.
  • In some embodiments, when the parameter values determined for the same composition parameter differ, the composition image of this layer includes a plurality of sub-composition images with the same primitive types and different layouts. Obtaining the composition image of this layer based on each parameter value includes: for each of the plurality of composition parameters in the composition process of this layer, nesting and calling the parameter values of the composition parameters whose values have already been determined earlier in the sequence, so as to determine multiple parameter value groups that each include the parameter values of the plurality of composition parameters; and obtaining, based on the multiple parameter value groups, the multiple sub-composition images of this layer with the same primitive types and different layouts.
  • In some embodiments, the above method further includes: in response to determining that a selection instruction is received, determining, based on the selection instruction, at least one sub-composition image of this layer to be nested and called by the composition image of the layer above.
  • In some embodiments, performing scene construction on the composition image according to the scene construction information corresponding to the image generation conditions, determining the primitive corresponding to each of the plurality of primitive types in the composition image, and obtaining the scene construction image includes: for each layer of the multi-layer composition image, performing the following operations: performing a first knowledge transfer operation until it is determined that a first preset termination condition is reached; determining, according to the target scene construction information obtained by each first knowledge transfer operation, the final scene construction information corresponding to the composition image of this layer; and, according to the final scene construction information, performing scene construction on the composition image of this layer, determining the primitive corresponding to each of the plurality of primitive types in the composition image of this layer, and obtaining the scene construction image corresponding to this layer. The first knowledge transfer operation includes: determining, based on similarity, a first target primitive type set corresponding to the first primitive type set; determining the scene construction information corresponding to the first target primitive type set as the target scene construction information; and determining the primitive types included in the first primitive type set but not included in the first target primitive type set as the first primitive type set of the next first knowledge transfer operation, where the first primitive type set of the first first knowledge transfer operation is the primitive type set corresponding to the composition image of this layer.
  • In some embodiments, determining the primitive corresponding to each of the plurality of primitive types in the composition image of this layer and obtaining the scene construction image corresponding to this layer includes: determining, from a primitive library and according to a pre-established search tree representing primitive types and primitive relationships, the primitive corresponding to each of the plurality of primitive types in the composition image of this layer, and obtaining the scene construction image corresponding to the composition image of this layer.
  • In some embodiments, the first preset termination condition includes: the first primitive type set being an empty set, or the first primitive type set of the current first knowledge transfer operation being the same as that of the previous first knowledge transfer operation. The above method further includes: in response to determining that the first preset termination condition reached is that the first primitive type set of the current first knowledge transfer operation is the same as that of the previous first knowledge transfer operation, adding, based on the received input instruction, information representing scene construction for the primitive types in the first primitive type set of this first knowledge transfer operation to the final scene construction information.
  • In some embodiments, adjusting the color of each primitive in the scene construction image according to the color matching information corresponding to the image generation conditions, to obtain a color matching image, includes: for each layer of the multi-layer scene construction image, performing the following operations: performing a second knowledge transfer operation until it is determined that a second preset termination condition is reached; determining, according to the target color matching information obtained by each second knowledge transfer operation, the final color matching information corresponding to the composition image of this layer; and adjusting, according to the final color matching information, the color of each primitive in the scene construction image of this layer to obtain the color matching image corresponding to the scene construction image of this layer. The second knowledge transfer operation includes: determining, based on similarity, a second target primitive type set corresponding to the second primitive type set; determining the color matching information corresponding to the second target primitive type set as the target color matching information; and determining the primitive types included in the second primitive type set but not included in the second target primitive type set as the second primitive type set of the next second knowledge transfer operation.
  • In some embodiments, the second preset termination condition includes: the second primitive type set being an empty set, or the second primitive type set of the current second knowledge transfer operation being the same as that of the previous second knowledge transfer operation. The above method further includes: in response to determining that the second preset termination condition reached is that the second primitive type set of the current second knowledge transfer operation is the same as that of the previous second knowledge transfer operation, adding, based on the received input instruction, information representing color matching for the primitive types in the second primitive type set of this second knowledge transfer operation to the final color matching information.
  • In a second aspect, an embodiment of the present application provides an image generation apparatus, including: a composition unit configured to perform image composition based on acquired image generation conditions, determine the primitive types and layout information of the image, and obtain a composition image; a scene construction unit configured to perform scene construction on the composition image according to the scene construction information corresponding to the image generation conditions, determine the primitive corresponding to each of the plurality of primitive types in the composition image, and obtain a scene construction image; a color matching unit configured to adjust the color of each primitive in the scene construction image according to the color matching information corresponding to the image generation conditions, to obtain a color matching image; and a rendering unit configured to render the color matching image to obtain a target image.
  • In some embodiments, the composition unit is further configured to: based on the acquired image generation conditions, use a preset composition knowledge graph to perform image composition at multiple levels from parts to the whole, determine the primitive types and layout information in each layer of the image, and obtain a multi-layer composition image that represents the composition information of each level and is nested layer by layer, where each layer of the multi-layer composition image nests a lower-layer composition image that treats some of the primitive types of that layer as a whole.
  • In some embodiments, the composition unit is further configured to: for each layer of the multi-layer composition image, perform the following operations: in the composition process of this layer, for each of the plurality of composition parameters determined based on the image generation conditions and the preset knowledge graph, determine the parameter value of the composition parameter through the judgment information corresponding to the composition parameter, where the composition parameters include operation parameters for each primitive type in the composition image of this layer and calling parameters for the lower-layer composition image; and obtain the composition image of this layer based on each parameter value.
  • In some embodiments, when the parameter values determined for the same composition parameter differ, the composition image of this layer includes a plurality of sub-composition images with the same primitive types and different layouts; the composition unit is further configured to: for each of the plurality of composition parameters in the composition process of this layer, nest and call the parameter values of the composition parameters whose values have already been determined earlier in the sequence, so as to determine multiple parameter value groups that each include the parameter values of the plurality of composition parameters; and obtain, based on the multiple parameter value groups, the multiple sub-composition images of this layer with the same primitive types and different layouts.
  • In some embodiments, the above apparatus further includes: a selection unit configured to, in response to determining that a selection instruction is received, determine, based on the selection instruction, at least one sub-composition image of this layer to be nested and called by the composition image of the layer above.
  • In some embodiments, the scene construction unit is further configured to: for each layer of the multi-layer composition image, perform the following operations: perform a first knowledge transfer operation until it is determined that a first preset termination condition is reached; determine, according to the target scene construction information obtained by each first knowledge transfer operation, the final scene construction information corresponding to the composition image of this layer; and, according to the final scene construction information, perform scene construction on the composition image of this layer, determine the primitive corresponding to each of the plurality of primitive types in the composition image of this layer, and obtain the scene construction image corresponding to this layer. The first knowledge transfer operation includes: determining, based on similarity, a first target primitive type set corresponding to the first primitive type set; determining the scene construction information corresponding to the first target primitive type set as the target scene construction information; and determining the primitive types included in the first primitive type set but not included in the first target primitive type set as the first primitive type set of the next first knowledge transfer operation, where the first primitive type set of the first first knowledge transfer operation is the primitive type set corresponding to the composition image of this layer.
  • In some embodiments, the scene construction unit is further configured to: determine, from a primitive library and according to a pre-established search tree representing primitive types and primitive relationships, the primitive corresponding to each of the plurality of primitive types in the composition image of this layer, and obtain the scene construction image corresponding to the composition image of this layer.
  • In some embodiments, the first preset termination condition includes: the first primitive type set being an empty set, or the first primitive type set of the current first knowledge transfer operation being the same as that of the previous first knowledge transfer operation. The apparatus further includes: a first adding unit configured to, in response to determining that the first preset termination condition reached is that the first primitive type set of the current first knowledge transfer operation is the same as that of the previous first knowledge transfer operation, add, based on the received input instruction, information representing scene construction for the primitive types in the first primitive type set of this first knowledge transfer operation to the final scene construction information.
  • In some embodiments, the color matching unit is further configured to: for each layer of the multi-layer scene construction image, perform the following operations: perform a second knowledge transfer operation until it is determined that a second preset termination condition is reached; determine, according to the target color matching information obtained by each second knowledge transfer operation, the final color matching information corresponding to the composition image of this layer; and adjust, according to the final color matching information, the color of each primitive in the scene construction image of this layer to obtain the color matching image corresponding to the scene construction image of this layer. The second knowledge transfer operation includes: determining, based on similarity, a second target primitive type set corresponding to the second primitive type set; determining the color matching information corresponding to the second target primitive type set as the target color matching information; and determining the primitive types included in the second primitive type set but not included in the second target primitive type set as the second primitive type set of the next second knowledge transfer operation, where the second primitive type set of the first second knowledge transfer operation is the primitive type set corresponding to the scene construction image of this layer.
  • In some embodiments, the second preset termination condition includes: the second primitive type set being an empty set, or the second primitive type set of the current second knowledge transfer operation being the same as that of the previous second knowledge transfer operation. The apparatus further includes: a second adding unit configured to, in response to determining that the second preset termination condition reached is that the second primitive type set of the current second knowledge transfer operation is the same as that of the previous second knowledge transfer operation, add, based on the received input instruction, information representing color matching for the primitive types in the second primitive type set of this second knowledge transfer operation to the final color matching information.
  • In a third aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method described in any implementation of the first aspect.
  • In a fourth aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device on which one or more programs are stored, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
  • The image generation method and apparatus provided by the embodiments of the present application perform image composition based on the acquired image generation conditions and determine the primitive types and layout information of the image to obtain a composition image; perform scene construction on the composition image according to the scene construction information corresponding to the image generation conditions, determine the primitive corresponding to each of the plurality of primitive types in the composition image, and obtain a scene construction image; adjust the color of each primitive in the scene construction image according to the color matching information corresponding to the image generation conditions to obtain a color matching image; and render the color matching image to obtain a target image. In this way, the target image can be generated flexibly according to the image generation conditions, which improves the flexibility of image generation.
  • FIG. 1 is an exemplary system architecture diagram to which an embodiment of the present application may be applied;
  • FIG. 2 is a flowchart of an embodiment of an image generation method according to the present application.
  • FIG. 3 is a schematic diagram of obtaining a composition image through a plurality of composition operations according to the present application;
  • FIG. 5 is a schematic diagram of a composition operation according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a search tree according to the present application.
  • FIG. 7 is a schematic diagram of an application scenario of the image generation method according to the present embodiment.
  • FIG. 8 is a flowchart of yet another embodiment of an image generation method according to the present application.
  • FIG. 9 is a structural diagram of an embodiment of an image generating apparatus according to the present application.
  • FIG. 10 is a schematic structural diagram of a computer system suitable for implementing the embodiments of the present application.
  • FIG. 1 shows an exemplary architecture 100 to which the image generation method and apparatus of the present application may be applied.
  • the system architecture 100 may include terminal devices 101 , 102 , and 103 , a network 104 and a server 105 .
  • the network 104 is a medium used to provide a communication link between the terminal devices 101 , 102 , 103 and the server 105 .
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
  • the terminal devices 101, 102, and 103 may be hardware devices or software that support network connection for data interaction and data processing.
  • When the terminal devices 101, 102, and 103 are hardware, they can be various electronic devices that support network connection, information interaction, display, processing and other functions, including but not limited to smartphones, tablet computers, e-book readers, laptop computers, desktop computers, and the like.
  • When the terminal devices 101, 102, and 103 are software, they can be installed in the electronic devices listed above. They can be implemented, for example, as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or software module. There is no specific limitation here.
  • the server 105 may be a server that provides various services, such as a background processing server that performs image generation in response to determining an image generation condition based on user input at the terminal devices 101 , 102 , 103 .
  • the background processing server can perform composition, scene construction, color matching, rendering and other processing based on the image generation conditions, so as to obtain the target image.
  • the background processing server may feed back the target image to the terminal device for display by the terminal device.
  • the server 105 may be a cloud server.
  • the server may be hardware or software.
  • When the server is hardware, it can be implemented as a distributed server cluster composed of multiple servers, or as a single server.
  • When the server is software, it can be implemented as multiple pieces of software or software modules (for example, software or software modules for providing distributed services), or as a single piece of software or software module. There is no specific limitation here.
  • Each part (e.g., each unit, sub-unit, module, or sub-module) included in the image generating apparatus may be provided entirely in the server, entirely in the terminal device, or distributed between the server and the terminal device.
  • the numbers of terminal devices, networks and servers in FIG. 1 are merely illustrative. There can be any number of terminal devices, networks and servers according to implementation needs.
  • the system architecture may only include the electronic device (such as a server or terminal device) on which the image generation method runs.
  • a flow 200 of an embodiment of an image generation method is shown, including the following steps:
  • Step 201: Based on the acquired image generation conditions, perform image composition, determine the primitive types and layout information, and obtain a composition image.
  • In this embodiment, the execution body of the image generation method can obtain the image generation conditions and, based on the image generation conditions, perform image composition, determine the primitive types and layout information, and obtain the composition image.
  • the image generation condition represents the restriction condition of the generated image, and includes at least one of the following: image theme, image size, image style, and image copy content.
  • Primitives generally refer to the basic graphic elements constituting an image.
  • the composition process of an image is a process of determining the types of primitives in the generated image and the layout information of the types of primitives.
  • The primitive types can be classified according to one or more attributes of the primitives included in the image. As an example, according to the position information of the primitives in the image, the primitives can be divided into primitive types such as upper, lower, left and right primitives; according to the primitive area, the primitives can be divided into primitive types such as background, graphics and text.
  • the primitive types in this embodiment may be coarse-grained or fine-grained primitive types that are flexibly divided according to actual conditions.
  • The execution body may search for the corresponding composition information in a preset composition knowledge graph, where the preset composition knowledge graph includes knowledge of the correspondence between various image generation conditions and composition information. As an example, when the image generation condition is an annual-meeting theme, the execution body may determine the corresponding primitive types and layout information from the composition information related to the annual meeting; for example, it may determine that the background of the image to be generated is one that expresses a lively and cheerful atmosphere.
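  • The condition-to-composition-information lookup described above can be pictured as follows. This is a hypothetical sketch: the dictionary merely stands in for the preset composition knowledge graph, and all keys and values are invented for illustration.

```python
# Hypothetical stand-in for the preset composition knowledge graph:
# a mapping from an image generation condition (here, a theme) to the
# composition information recorded for it.

COMPOSITION_KNOWLEDGE = {
    "annual_meeting": {
        "primitive_types": ["background", "text", "inserted_picture"],
        "layout": {"background": "full_canvas", "text": "centre",
                   "inserted_picture": "lower_right"},
        "background_mood": "lively and cheerful",
    },
    "product_sale": {
        "primitive_types": ["background", "text", "price_tag"],
        "layout": {"background": "full_canvas", "text": "top",
                   "price_tag": "lower_left"},
        "background_mood": "clean and bright",
    },
}

def lookup_composition_info(theme):
    # Return the composition information recorded for the given theme, if any.
    return COMPOSITION_KNOWLEDGE.get(theme)

info = lookup_composition_info("annual_meeting")
print(info["layout"])
```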
  • In some optional implementations of this embodiment, the execution body may perform step 201 in the following manner: based on the acquired image generation conditions, use the preset composition knowledge graph to perform image composition at multiple levels from parts to the whole, determine the primitive types and layout information in each layer of the image, and obtain a multi-layer composition image that represents the composition information of each level and is nested layer by layer. Layer-by-layer nesting means that each layer of the multi-layer composition image nests a lower-layer composition image that treats some of the primitive types of that layer as a whole.
  • Specifically, the execution body first determines the primitive types and layout information of a part of the image and obtains a composition image corresponding to that part. On the basis of nesting and calling the composition image of this layer, it gradually adds new primitive types, determines the layout information of this layer's composition image together with the newly added primitive types, and determines the composition image of the next layer. In this way, the execution body finally obtains a complete composition image.
  • As an example, the execution body may first determine a first-layer composition image that includes two primitive types, background and text content, together with the layout information of these two primitive types. On the basis of the first-layer composition image, a new primitive type, an inserted picture, is determined, and the layout information of the first-layer composition image and the newly inserted picture is determined to obtain the second-layer composition image. On the basis of the second-layer composition image, another new primitive type, decorative graphics, is determined, and the layout information of the second-layer composition image and the newly inserted decorative graphics is determined to obtain the final composition image.
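  • The layer-by-layer nesting in this example can be sketched as follows. The data structure and names are hypothetical; the point is only that each layer treats the previous layer's composition image as a single composition object and adds one new primitive type.

```python
# Hypothetical sketch of layer-by-layer nesting: each layer nests the
# previous layer's composition image as a whole and adds one new
# primitive type with its layout.

layer1 = {"objects": ["background", "text"],
          "layout": {"background": "full_canvas", "text": "centre"}}

layer2 = {"objects": [("nested", layer1), "inserted_picture"],
          "layout": {"nested": "as_is", "inserted_picture": "right_half"}}

layer3 = {"objects": [("nested", layer2), "decorative_graphics"],
          "layout": {"nested": "as_is", "decorative_graphics": "corners"}}

def flatten(layer):
    # Walk the nesting and collect all primitive types from part to whole.
    types = []
    for obj in layer["objects"]:
        if isinstance(obj, tuple) and obj[0] == "nested":
            types.extend(flatten(obj[1]))
        else:
            types.append(obj)
    return types

assert flatten(layer3) == ["background", "text", "inserted_picture", "decorative_graphics"]
```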
  • In some optional implementations of this embodiment, for each layer of the multi-layer composition image, the execution body performs the following operations: in the composition process of this layer, for each of the plurality of composition parameters determined based on the image generation conditions and the preset knowledge graph, the parameter value of the composition parameter is determined through the judgment information corresponding to the composition parameter, where the composition parameters include operation parameters for each primitive type in the composition image of this layer and calling parameters for the lower-layer composition image.
  • As an example, a composition parameter can be a parameter that determines whether text is arranged horizontally or vertically according to the size of the image, or a parameter that determines whether text is arranged around the inserted picture or on one side of it. Correspondingly, the judgment information can represent the comparison between the length and width of the image: when the length of the image to be generated is greater than its width, the inserted text is arranged vertically; otherwise, the inserted text is arranged horizontally. After the parameter value of each composition parameter is determined, the composition image of this layer is determined.
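  • As a sketch of how judgment information fixes a composition parameter value, the fragment below assumes hypothetical parameter names and thresholds; it only mirrors the length-versus-width rule described above.

```python
# Hypothetical sketch: composition parameters whose values are fixed by
# judgment information. The names and threshold are invented.

def text_orientation(image_size):
    length, width = image_size
    # Judgment information: compare the length and width of the image.
    return "vertical" if length > width else "horizontal"

def text_wrap_mode(text_length):
    # Another illustrative parameter: wrap long text around the inserted
    # picture, keep short text on one side of it.
    return "around_picture" if text_length > 200 else "one_side"

layer_params = {"text_orientation": text_orientation((1200, 800)),
                "text_wrap_mode": text_wrap_mode(120)}
# -> {'text_orientation': 'vertical', 'text_wrap_mode': 'one_side'}
```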
  • As shown in FIG. 3, the execution body may gradually realize the composition process of this layer's composition image in units of composition operations 301. A composition operation may be any operation involved in the composition process; for example, it may be an alignment operation for different primitive types in the composition image of this layer. Each composition operation includes a corresponding composition parameter 302 and a composition object 303, where the composition objects include the lower-layer composition image and the newly added primitive types, and the specific parameter value of each composition parameter is determined by the judgment information 304 corresponding to that composition parameter.
  • Each composition object in the composition image can be regarded as a kind of figure composing the composition image.
  • Various graphics are stored in corresponding containers 305 .
  • The selection and storage of a lower-layer composition image by a container is a nested call to that lower-layer composition image. For the container that stores the composition objects of this layer's composition image, a corresponding composition parameter representing the nested call to the lower-layer composition image can also be set, so as to realize the nested call to the lower-layer composition image. In some cases, each composition operation in the composition process of this layer's composition image is affected by the composition image of the upper layer; the influence of the upper-layer composition image on the composition operation is extracted through the variable 306. The composition objects include the lower-layer composition images, so as to realize a composition process that proceeds from parts to the whole and is nested layer by layer.
  • In some embodiments, category information for the composition operations is preset in the execution body. The classification information of the composition operations may be fine-grained or coarse-grained according to the actual situation. The execution body selects operations based on the received classification information, determines the composition operations of the corresponding category, and uses the composition operations of the determined category to compose the composition image.
  • In some optional implementations of this embodiment, the parameter values determined for the same composition parameter may differ; that is, among the multiple parameters of this layer's composition image there are parameters for which several different parameter values all satisfy the requirements of this layer. For each composition parameter, the execution body nests and calls the parameter values of the composition parameters whose values have already been determined earlier in the sequence, so as to determine multiple parameter value groups that each include the parameter values of the plurality of composition parameters. For a parameter with a single determined value, the value is the same across the parameter value groups; for a parameter for which different values were selected, the values differ across the parameter value groups. As an example, the composition image of this layer includes N+2 parameter nodes, each corresponding to a parameter; the second parameter node can nest and call the first parameter node, and similarly the (N+2)-th parameter node can call the first N+1 parameter nodes, finally determining the parameter value groups. Based on the multiple parameter value groups, the execution body obtains multiple sub-composition images of this layer with the same primitive types and different layouts. When the upper layer nests and calls this layer, any sub-composition image in the lower-layer image can be called, thereby increasing the richness of the composition image.
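  • The formation of parameter value groups can be sketched as an enumeration over the values admitted for each composition parameter, with later parameters nesting over the values already fixed by earlier ones. The admissible values below are hypothetical.

```python
# Hypothetical sketch of forming parameter value groups: a parameter that
# admits several values multiplies the number of groups, and each group
# yields a sub-composition image with the same primitive types but a
# different layout.

from itertools import product

admissible_values = {
    "text_orientation": ["horizontal"],        # fixed by judgment information
    "picture_position": ["left", "right"],     # several values satisfy this layer
    "alignment":        ["top", "centre"],     # several values satisfy this layer
}

names = list(admissible_values)
value_groups = [dict(zip(names, combo)) for combo in product(*admissible_values.values())]

# Four groups -> four sub-composition images of this layer.
for group in value_groups:
    print(group)
```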
  • In some optional implementations of this embodiment, in response to receiving a selection instruction, the execution body determines, based on the selection instruction, at least one sub-composition image of this layer to be nested and called by the composition image of the layer above. As shown in FIG. 5, each layer of composition image includes a plurality of sub-composition images, each layer nests and calls the lower-layer composition image, and finally a fourth-layer composition image is obtained. Each sub-composition image in each layer of composition image can be regarded as a node. Based on the received selection instruction, the execution body determines sub-composition images 5011, 5012 and 5013 in the first-layer composition image 501, sub-composition images 5021 and 5022 in the second-layer composition image 502, sub-composition images 5031 and 5032 in the third-layer composition image 503, and sub-composition image 5041 in the fourth-layer composition image 504. In the composition process, the execution body can determine the composition path of the top-layer composition image, for example the composition image corresponding to the composition path specified by sub-composition images 5011, 5021, 5031 and 5041.
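  • The node view of FIG. 5 can be sketched as follows; the node identifiers follow the figure, while the selection and path logic is a hypothetical illustration.

```python
# Hypothetical sketch of the node view described above: each layer holds
# several sub-composition images, a selection instruction keeps only some
# of them for nesting, and a composition path picks one node per layer.

layers = {
    1: ["5011", "5012", "5013"],
    2: ["5021", "5022"],
    3: ["5031", "5032"],
    4: ["5041"],
}

def apply_selection(layers, selected):
    # Keep only the sub-composition images named in the selection instruction.
    return {depth: [n for n in nodes if n in selected] for depth, nodes in layers.items()}

selected = apply_selection(layers, {"5011", "5021", "5031", "5041"})
composition_path = [nodes[0] for _, nodes in sorted(selected.items())]
# -> ['5011', '5021', '5031', '5041'], the path of the top-layer composition image
```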
  • Step 202: Perform scene construction on the composition image according to the scene construction information corresponding to the image generation conditions, determine the primitive corresponding to each of the plurality of primitive types in the composition image, and obtain a scene construction image.
  • In this embodiment, the execution body can perform scene construction on the composition image obtained in step 201 according to the scene construction information corresponding to the image generation conditions, determine the primitive corresponding to each of the plurality of primitive types in the composition image, and obtain the scene construction image. The scene construction process of the composition image is the process of determining the primitive corresponding to each primitive type in the composition image, which may also involve adjusting the position information of the primitives.
  • the scene construction information may be a scene construction script determined according to a scene construction knowledge base, where the scene construction knowledge base includes corresponding scene construction knowledge under various image generation conditions.
  • According to the scene construction information, the primitive corresponding to each of the plurality of primitive types in the composition image can be determined from a primitive library. The primitive library is required to include rich primitives. The execution body can combine primitives on the basis of atomic primitives (non-decomposable primitives) from third parties and self-designed primitives, to generate composite primitives.
  • the number of primitives in the primitive library is huge and the types are rich.
  • In some embodiments, a search tree representing the correspondence between primitive types and primitives is established in advance; FIG. 6 shows a schematic diagram of such a search tree. The search tree 600 includes each coarse-grained primitive type 601, and each coarse-grained primitive type 601 is further divided to obtain fine-grained primitive types 602. Each fine-grained primitive type 602 includes the corresponding primitives 603 and records the specific primitive information. Based on the search tree, the execution body can quickly determine, from the primitive library, the primitive corresponding to each of the plurality of primitive types in the composition image of this layer, and obtain the scene construction image corresponding to this layer's composition image.
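  • A nested dictionary can stand in for the search tree of FIG. 6. The types and primitive file names below are hypothetical; the sketch only shows how a coarse-grained type narrows to fine-grained types and then to concrete primitives.

```python
# Hypothetical stand-in for the search tree: coarse-grained primitive
# types branch into fine-grained types, whose leaves record concrete
# primitives from the primitive library.

SEARCH_TREE = {
    "background": {
        "festive_background": ["bg_red_lanterns.png", "bg_confetti.png"],
        "plain_background":   ["bg_white.png"],
    },
    "text": {
        "title_text": ["font_bold.ttf"],
        "body_text":  ["font_regular.ttf"],
    },
}

def find_primitives(coarse_type, fine_type=None):
    # Walk the tree; return every primitive under the requested type.
    subtree = SEARCH_TREE.get(coarse_type, {})
    if fine_type is not None:
        return subtree.get(fine_type, [])
    return [p for primitives in subtree.values() for p in primitives]

print(find_primitives("background", "festive_background"))
```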
  • In some optional implementations of this embodiment, for each layer of composition image in the multi-layer composition image, the execution body performs the following operations. First, based on similarity, a first target primitive type set corresponding to the first primitive type set is determined. The first target primitive type set is the set with the highest similarity to the first primitive type set; as an example, the more primitive types two sets share in their intersection, the higher the similarity. Then, the scene construction information corresponding to the first target primitive type set is determined as the target scene construction information. Finally, the primitive types included in the first primitive type set but not included in the first target primitive type set are determined as the first primitive type set of the next first knowledge transfer operation. In this way, the execution body may perform multiple first knowledge transfer operations. The first primitive type set of the first first knowledge transfer operation is the primitive type set corresponding to the composition image of this layer.
  • the set of primitive types corresponding to the composition image of this layer is a set composed of all primitive types included in the composition image of this layer.
  • Then, according to the target scene construction information obtained by each first knowledge transfer operation, the final scene construction information corresponding to the composition image of this layer is determined. As an example, the target scene construction information obtained by the first knowledge transfer operations may be combined to obtain the final scene construction information. In some embodiments, the first preset termination condition includes: the first primitive type set being an empty set, or the first primitive type set of the current first knowledge transfer operation being the same as that of the previous first knowledge transfer operation. When the termination condition that is reached is that the first primitive type set of the current first knowledge transfer operation is the same as that of the previous one, it indicates that the primitive type set corresponding to the composition image of this layer includes primitive types that have not been involved so far, and the corresponding scene construction information needs to be added based on the user's input operation. In this case, based on the received input instruction, information representing scene construction for the primitive types in the first primitive type set of this first knowledge transfer operation is added to the final scene construction information.
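  • Under the assumption that the knowledge base can be modelled as pairs of a primitive type set and its scene construction information, and that similarity is measured by intersection size as in the example above, the first knowledge transfer loop can be sketched as follows; all entries are hypothetical.

```python
# Hypothetical sketch of the first knowledge transfer operation. The loop
# stops when the remaining set is empty or stops shrinking; in the latter
# case the leftover primitive types would be handled via a user input
# instruction, as described above.

KNOWLEDGE_BASE = [
    ({"background", "text"}, {"background": "festive hall", "text": "slogan"}),
    ({"inserted_picture"},   {"inserted_picture": "group photo"}),
]

def most_similar(type_set):
    # Similarity measured by intersection size: pick the entry whose
    # primitive type set overlaps most with type_set.
    return max(KNOWLEDGE_BASE, key=lambda entry: len(entry[0] & type_set))

def transfer_scene_knowledge(layer_types):
    remaining = set(layer_types)          # first primitive type set of the first operation
    final_info = {}
    while remaining:
        target_set, info = most_similar(remaining)
        next_remaining = remaining - target_set
        if next_remaining == remaining:   # unchanged set: second termination condition
            break                          # leftover types need user-provided information
        final_info.update(info)           # combine the target scene construction info
        remaining = next_remaining
    return final_info, remaining          # remaining holds uncovered primitive types

info, uncovered = transfer_scene_knowledge(
    {"background", "text", "inserted_picture", "decorative_graphics"})
# info covers background/text/inserted_picture; 'decorative_graphics' stays uncovered.
```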
  • Step 203: According to the color matching information corresponding to the image generation conditions, adjust the color of each primitive in the scene construction image to obtain a color matching image.
  • In this embodiment, the execution body may adjust the color of each primitive in the scene construction image obtained in step 202 according to the color matching information corresponding to the image generation conditions, to obtain a color matching image.
  • The color matching information may be a color matching script determined according to a color matching knowledge base, where the color matching knowledge base includes the corresponding color matching knowledge under various image generation conditions. According to the color matching information, the color of each primitive in the scene construction image can be determined.
  • As an example, the following operations can be performed according to the corresponding color matching information. First, determine the color of the background: for example, determine it based on an uploaded picture, in which case the background color can be the largest color component of the uploaded picture, its complementary color, or an adjacent color; alternatively, the background color can be obtained randomly from a color set corresponding to the theme defined in the image generation conditions. Then, determine the color of the text: the text color is the same as the background color with its brightness modified on that basis, or it is colored according to a preset color table; after that, color the shadow of the text, whose color is the same as the text color with its brightness modified on that basis. Finally, apply a hue shift to the pile image so that its tone is consistent with the background.
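  • Assuming colours are plain RGB triples, the colour rules above can be sketched with the standard library as follows; the dominant-colour choice, brightness factors and sample pixels are hypothetical.

```python
# Hypothetical sketch of the colour rules described above, with RGB values
# in 0-255. The dominant colour of the uploaded picture (or its complement)
# becomes the background; the text keeps the background colour with shifted
# brightness, and the text shadow repeats that with a further change.

import colorsys
from collections import Counter

def dominant_color(pixels):
    # pixels: iterable of (r, g, b) tuples from the uploaded picture.
    return Counter(pixels).most_common(1)[0][0]

def complementary(rgb):
    return tuple(255 - c for c in rgb)

def shift_brightness(rgb, factor):
    h, s, v = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))
    r, g, b = colorsys.hsv_to_rgb(h, s, min(1.0, v * factor))
    return tuple(int(round(c * 255)) for c in (r, g, b))

uploaded = [(200, 30, 30)] * 90 + [(240, 240, 240)] * 10
background = dominant_color(uploaded)        # or complementary(dominant_color(uploaded))
text = shift_brightness(background, 1.6)     # same colour as the background, brighter
text_shadow = shift_brightness(text, 0.5)    # same colour as the text, darker
```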
  • In the color matching process of each layer of scene construction image, the execution body may perform the following second knowledge transfer operation. First, based on similarity, a second target primitive type set corresponding to the second primitive type set is determined. The second target primitive type set is the set with the highest similarity to the second primitive type set; as an example, the more primitive types two sets share in their intersection, the higher the similarity. Then, the color matching information corresponding to the second target primitive type set is determined as the target color matching information. Finally, the primitive types included in the second primitive type set but not included in the second target primitive type set are determined as the second primitive type set of the next second knowledge transfer operation. In this way, the execution body may perform multiple second knowledge transfer operations. The second primitive type set of the first second knowledge transfer operation is the primitive type set corresponding to the scene construction image of this layer.
  • Then, according to the target color matching information obtained by each second knowledge transfer operation, the final color matching information corresponding to the composition image of this layer is determined. As an example, the target color matching information obtained by the second knowledge transfer operations may be combined to obtain the final color matching information. According to the final color matching information, the color of each primitive in the scene construction image of this layer is adjusted to obtain the color matching image corresponding to the scene construction image of this layer. In some embodiments, the second preset termination condition includes: the second primitive type set being an empty set, or the second primitive type set of the current second knowledge transfer operation being the same as that of the previous second knowledge transfer operation. When the termination condition that is reached is that the second primitive type set of the current second knowledge transfer operation is the same as that of the previous one, it indicates that the primitive type set corresponding to the scene construction image of this layer includes primitive types that have not been involved so far, and the corresponding color matching information needs to be added based on the user's input operation. In this case, based on the received input instruction, information representing color matching for the primitive types in the second primitive type set of this second knowledge transfer operation is added to the final color matching information.
  • Step 204: Render the color matching image to obtain a target image.
  • In this embodiment, the execution body may render the color matching image obtained in step 203 to obtain the target image.
  • The work to be done in image rendering is to generate the image through geometric transformation, projection transformation, perspective transformation and window clipping, and then through the acquired material, lighting and shadow information.
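  • Two of the rendering steps named above, geometric transformation and window clipping, can be sketched on a primitive's corner points as follows; the transformation matrix and window size are hypothetical, and material, lighting and shadow handling are omitted.

```python
# Hypothetical sketch of applying a geometric (affine) transformation to a
# primitive's corner points and clipping the result to the output window.

import numpy as np

def transform_and_clip(corners, matrix, window):
    """corners: (N, 2) array; matrix: 3x3 affine matrix; window: (width, height)."""
    homogeneous = np.hstack([corners, np.ones((len(corners), 1))])
    mapped = (matrix @ homogeneous.T).T[:, :2]     # geometric transformation
    width, height = window
    return np.clip(mapped, [0, 0], [width, height])  # window clipping

corners = np.array([[0, 0], [300, 0], [300, 200], [0, 200]], dtype=float)
scale_and_shift = np.array([[1.5, 0.0, 700.0],
                            [0.0, 1.5, 500.0],
                            [0.0, 0.0, 1.0]])
print(transform_and_clip(corners, scale_and_shift, window=(800, 600)))
```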
  • FIG. 7 is a schematic diagram of an application scenario of the image generation method according to this embodiment.
  • the user 701 inputs the image generation conditions through the terminal device 702 .
  • The server 703 performs image composition based on the acquired image generation conditions, determines the primitive types and layout information of the image, and obtains a composition image 704; performs scene construction on the composition image 704 according to the scene construction information corresponding to the image generation conditions, determines the primitive corresponding to each of the plurality of primitive types in the composition image, and obtains a scene construction image 705; adjusts the color of each primitive in the scene construction image according to the color matching information corresponding to the image generation conditions to obtain a color matching image 706; renders the color matching image to obtain a target image 707; and feeds the target image 707 back to the terminal device 702.
  • The method provided by the above embodiments of the present disclosure performs image composition based on the acquired image generation conditions, determines the primitive types and layout information of the image, and obtains a composition image; performs scene construction on the composition image according to the scene construction information corresponding to the image generation conditions, determines the primitive corresponding to each of the plurality of primitive types in the composition image, and obtains a scene construction image; adjusts the color of each primitive in the scene construction image according to the color matching information corresponding to the image generation conditions to obtain a color matching image; and renders the color matching image to obtain a target image. In this way, the target image can be generated flexibly according to the image generation conditions, and the flexibility of image generation is improved.
  • FIG. 8 shows a schematic flow 800 of another embodiment of the image generation method according to the present application, which includes the following steps:
  • Step 801: Based on the acquired image generation conditions, use a preset composition knowledge graph to perform image composition at multiple levels from parts to the whole, determine the primitive types and layout information in each layer of the image, and obtain a multi-layer composition image that represents the composition information of each level and is nested layer by layer.
  • Step 802: For each layer of composition image in the multi-layer composition image, perform the following operations:
  • Step 8021: Perform the following first knowledge transfer operation until it is determined that the first preset termination condition is reached:
  • Step 80211: Based on the similarity, determine the first target primitive type set corresponding to the first primitive type set.
  • Step 80212: Determine the scene construction information corresponding to the first target primitive type set as the target scene construction information.
  • Step 80213: Determine the primitive types included in the first primitive type set but not included in the first target primitive type set as the first primitive type set of the next first knowledge transfer operation; the first primitive type set of the first first knowledge transfer operation is the primitive type set corresponding to the composition image of this layer.
  • Step 8022: Determine the final scene construction information corresponding to the composition image of this layer according to the target scene construction information obtained by each first knowledge transfer operation.
  • Step 8023: According to the final scene construction information, perform scene construction on the composition image of this layer, determine the primitive corresponding to each of the plurality of primitive types in the composition image of this layer, and obtain the scene construction image corresponding to the composition image of this layer.
  • Step 803: For each layer of scene construction image in the multi-layer scene construction image, perform the following operations:
  • Step 8031: Perform the following second knowledge transfer operation until it is determined that the second preset termination condition is reached:
  • Step 80311: Based on the similarity, determine the second target primitive type set corresponding to the second primitive type set.
  • Step 80312: Determine the color matching information corresponding to the second target primitive type set as the target color matching information.
  • Step 80313: Determine the primitive types included in the second primitive type set but not included in the second target primitive type set as the second primitive type set of the next second knowledge transfer operation; the second primitive type set of the first second knowledge transfer operation is the primitive type set corresponding to the scene construction image of this layer.
  • Step 8032: Determine the final color matching information corresponding to the composition image of this layer according to the target color matching information obtained by each second knowledge transfer operation.
  • Step 8033: According to the final color matching information, adjust the color of each primitive in the scene construction image of this layer to obtain the color matching image corresponding to the scene construction image of this layer.
  • Step 804: Render the color matching image to obtain a target image.
  • As can be seen from FIG. 8, the flow 800 of the image generation method in this embodiment specifically describes the hierarchical composition process and the knowledge transfer processes in scene construction and color matching. In this way, the quality and efficiency of image generation in this embodiment are improved.
  • The present disclosure provides an embodiment of an image generating apparatus; the apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus may be applied to various electronic devices.
  • As shown in FIG. 9, the image generation apparatus includes: a composition unit 901 configured to perform image composition based on acquired image generation conditions, determine the primitive types and layout information of the image, and obtain a composition image; a scene construction unit 902 configured to perform scene construction on the composition image according to the scene construction information corresponding to the image generation conditions, determine the primitive corresponding to each of the plurality of primitive types in the composition image, and obtain a scene construction image; a color matching unit 903 configured to adjust the color of each primitive in the scene construction image according to the color matching information corresponding to the image generation conditions to obtain a color matching image; and a rendering unit 904 configured to render the color matching image to obtain a target image.
  • In some embodiments, the composition unit 901 is further configured to: based on the acquired image generation conditions, use a preset composition knowledge graph to perform image composition at multiple levels from part to whole, determine the primitive types and layout information in each layer of the image, and obtain a layer-by-layer nested multi-layer composition image representing the composition information of each level, wherein the layer-by-layer nesting means that, for each layer of composition image in the multi-layer composition image, a lower-layer composition image, which takes some of the primitive types in this layer of composition image as a whole, is nested in this layer of composition image.
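  • The layer-by-layer nesting can be pictured with a minimal data structure, assuming a hypothetical CompositionLayer type; the embodiment does not fix any concrete representation. Each layer introduces new primitive types and nests the lower-layer composition image as a single whole.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class CompositionLayer:
    primitive_types: List[str]                   # primitive types introduced at this layer
    layout: str                                  # layout information of this layer
    nested: Optional["CompositionLayer"] = None  # lower-layer composition image, nested as a whole

    def all_primitive_types(self) -> List[str]:
        types = list(self.primitive_types)
        if self.nested is not None:
            types.extend(self.nested.all_primitive_types())
        return types


# Mirrors the example in the description: background + text, then an inserted
# picture, then decorative graphics, each layer nesting the previous one.
layer1 = CompositionLayer(["background", "text"], layout="text-over-background")
layer2 = CompositionLayer(["picture"], layout="picture-beside-lower-layer", nested=layer1)
layer3 = CompositionLayer(["decoration"], layout="decoration-around-lower-layer", nested=layer2)
print(layer3.all_primitive_types())  # ['decoration', 'picture', 'background', 'text']
```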
  • In some embodiments, the composition unit 901 is further configured to: for each layer of composition image in the multi-layer composition image, perform the following operations: in the composition process of this layer of composition image, for each of a plurality of composition parameters determined based on the image generation conditions and the preset knowledge graph, determine the parameter value of the composition parameter through the judgment information corresponding to the composition parameter, wherein the composition parameters include operation parameters for each primitive type in this layer of composition image and calling parameters for the lower-layer composition image; and obtain this layer of composition image based on the parameter values.
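  • A minimal sketch of resolving a composition parameter through its judgment information, reusing the aspect-ratio rule mentioned in the description (text is arranged vertically when the image is taller than it is wide); the function and parameter names are assumptions of this illustration.

```python
def decide_text_orientation(conditions):
    # Judgment information from the description: if the image to be generated
    # is taller than it is wide, the inserted text is arranged vertically.
    width, height = conditions["size"]
    return "vertical" if height > width else "horizontal"


def resolve_composition_parameters(conditions, judgments):
    """Resolve each composition parameter through its judgment information.

    `judgments` maps a parameter name to a callable that inspects the image
    generation conditions and returns the parameter value."""
    return {name: judge(conditions) for name, judge in judgments.items()}


params = resolve_composition_parameters(
    {"size": (640, 960), "theme": "annual meeting"},
    {"text_orientation": decide_text_orientation},
)
print(params)  # {'text_orientation': 'vertical'}
```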
  • In some embodiments, for this layer of composition image, the same composition parameter may be given different parameter values, and this layer of composition image includes a plurality of sub-composition images with the same primitive types and different layouts; the composition unit 901 is further configured to: for each of the plurality of composition parameters in the composition process of this layer of composition image, nestedly call the parameter values of the composition parameters whose values have been determined earlier, so as to determine multiple sets of parameter value groups each including the parameter values of the plurality of composition parameters; and obtain, based on the multiple sets of parameter value groups, a plurality of sub-composition images of this layer of composition image with the same primitive types and different layouts.
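  • The nested calling over previously determined parameter values amounts to enumerating admissible parameter combinations. The sketch below simplifies this with itertools.product, which is an assumption of this illustration rather than a mechanism named by the embodiment.

```python
from itertools import product


def enumerate_parameter_groups(candidate_values):
    """candidate_values: parameter name -> list of admissible parameter values.

    Every returned group fixes one value per composition parameter; each group
    corresponds to one sub-composition image with the same primitive types but
    a different layout."""
    names = list(candidate_values)
    return [dict(zip(names, combo)) for combo in product(*candidate_values.values())]


groups = enumerate_parameter_groups({
    "text_orientation": ["horizontal"],       # uniquely determined parameter
    "picture_placement": ["left", "right"],   # parameter admitting two values
})
print(groups)
# [{'text_orientation': 'horizontal', 'picture_placement': 'left'},
#  {'text_orientation': 'horizontal', 'picture_placement': 'right'}]
```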
  • In some embodiments, the above-mentioned apparatus further includes: a selection unit (not shown in the figure), configured to, in response to determining that a selection instruction is received, determine, based on the selection instruction, at least one sub-composition image of this layer of composition image that is to be nestedly called by the upper-layer composition image of this layer of composition image.
  • In some embodiments, the scene construction unit 902 is further configured to: for each layer of composition image in the multi-layer composition image, perform the following operations: perform a first knowledge transfer operation until it is determined that a first preset termination condition is reached; determine, according to the target scene construction information obtained by each first knowledge transfer operation, the final scene construction information corresponding to this layer of composition image; and perform, according to the final scene construction information, scene construction on this layer of composition image, determine the primitive corresponding to each of the plurality of primitive types in this layer of composition image, and obtain the scene construction image corresponding to this layer of composition image, wherein the first knowledge transfer operation includes: determining, based on similarity, a first target primitive type set corresponding to a first primitive type set; determining the scene construction information corresponding to the first target primitive type set as target scene construction information; and determining the primitive types included in the first primitive type set but not included in the first target primitive type set as the first primitive type set of the next first knowledge transfer operation; wherein the first primitive type set of the first first knowledge transfer operation is the primitive type set corresponding to this layer of composition image.
  • In some embodiments, the scene construction unit 902 is further configured to: determine, from a primitive library and according to a pre-established search tree representing primitive types and primitive relationships, the primitive corresponding to each of the plurality of primitive types in this layer of composition image, and obtain the scene construction image corresponding to this layer of composition image.
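  • The search tree can be pictured as a two-level mapping from coarse-grained primitive types to fine-grained types to concrete primitives in the primitive library. The tree contents and function name below are illustrative assumptions, not part of the described apparatus.

```python
# Illustrative two-level search tree: coarse-grained primitive type ->
# fine-grained primitive type -> concrete primitives in the primitive library.
SEARCH_TREE = {
    "background": {"festive": ["bg_confetti_01", "bg_lanterns_02"],
                   "plain": ["bg_solid_01"]},
    "text": {"title": ["font_bold_01"],
             "body": ["font_regular_01"]},
}


def find_primitives(coarse_type, fine_type=None):
    """Look up candidate primitives for a primitive type in the search tree."""
    subtree = SEARCH_TREE.get(coarse_type, {})
    if fine_type is not None:
        return subtree.get(fine_type, [])
    # No fine-grained type given: return every primitive under the coarse type.
    return [p for primitives in subtree.values() for p in primitives]


print(find_primitives("background", "festive"))  # ['bg_confetti_01', 'bg_lanterns_02']
print(find_primitives("text"))                   # ['font_bold_01', 'font_regular_01']
```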
  • In some embodiments, the first preset termination condition includes: the first primitive type set is an empty set, or the first primitive type set of this first knowledge transfer operation is the same as the first primitive type set of the previous first knowledge transfer operation; the apparatus further includes: a first adding unit (not shown in the figure), configured to, in response to determining that the first preset termination condition reached is that the first primitive type set of this first knowledge transfer operation is the same as the first primitive type set of the previous first knowledge transfer operation, add, based on a received input instruction, information representing scene construction for the primitive types in the first primitive type set of this first knowledge transfer operation to the final scene construction information.
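  • A small sketch of the fallback described here, assuming a hypothetical collect_user_info callback: when the knowledge transfer stops because the leftover primitive type set no longer shrinks, scene construction information for the leftover types is supplied through a received input instruction.

```python
def complete_scene_info(collected_info, uncovered_types, collect_user_info):
    """Append user-supplied scene construction info for primitive types that
    the knowledge transfer could not cover."""
    if uncovered_types:  # termination by an unchanged, non-empty primitive type set
        collected_info = collected_info + [collect_user_info(uncovered_types)]
    return collected_info


final_info = complete_scene_info(
    ["scene-info-A"],
    {"decoration"},
    collect_user_info=lambda types: {t: "user-specified construction" for t in types},
)
print(final_info)  # ['scene-info-A', {'decoration': 'user-specified construction'}]
```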
  • In some embodiments, the color matching unit 903 is further configured to: for each layer of scene construction image in the multi-layer scene construction images, perform the following operations: perform a second knowledge transfer operation until it is determined that a second preset termination condition is reached; determine, according to the target color matching information obtained by each second knowledge transfer operation, the final color matching information corresponding to this layer of composition image; and adjust, according to the color matching information, the color of each primitive in this layer of scene construction image to obtain the color matching image corresponding to this layer of scene construction image, wherein the second knowledge transfer operation includes: determining, based on similarity, a second target primitive type set corresponding to a second primitive type set; determining the color matching information corresponding to the second target primitive type set as target color matching information; and determining the primitive types included in the second primitive type set but not included in the second target primitive type set as the second primitive type set of the next second knowledge transfer operation; wherein the second primitive type set of the first second knowledge transfer operation is the primitive type set corresponding to this layer of scene construction image.
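  • The final color matching information drives per-primitive color adjustment. The sketch below uses only the standard colorsys module to illustrate one rule mentioned in the description, namely that the text keeps the background's hue while its lightness is changed; the concrete shift value and function name are assumptions of this illustration.

```python
import colorsys


def text_color_from_background(bg_rgb, lightness_shift=0.3):
    """Keep the background's hue for the text, but shift its lightness."""
    r, g, b = (c / 255.0 for c in bg_rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    l = max(0.0, min(1.0, l + lightness_shift))
    return tuple(round(c * 255) for c in colorsys.hls_to_rgb(h, l, s))


background = (51, 102, 153)  # e.g. the dominant color taken from an uploaded picture
print(text_color_from_background(background))  # a lighter color with the same hue
```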
  • In some embodiments, the second preset termination condition includes: the second primitive type set is an empty set, or the second primitive type set of this second knowledge transfer operation is the same as the second primitive type set of the previous second knowledge transfer operation; the apparatus further includes: a second adding unit (not shown in the figure), configured to, in response to determining that the second preset termination condition reached is that the second primitive type set of this second knowledge transfer operation is the same as the second primitive type set of the previous second knowledge transfer operation, add, based on a received input instruction, information representing color matching for the primitive types in the second primitive type set of this second knowledge transfer operation to the final color matching information.
  • In this embodiment, the composition unit in the image generation apparatus performs image composition based on the acquired image generation conditions, determines the primitive types and layout information of the image, and obtains a composition image; the scene construction unit performs scene construction on the composition image according to the scene construction information corresponding to the image generation conditions, determines, in the composition image, the primitive corresponding to each of the plurality of primitive types, and obtains a scene construction image; the color matching unit adjusts the color of each primitive in the scene construction image according to the color matching information corresponding to the image generation conditions to obtain a color matching image; and the rendering unit renders the color matching image to obtain a target image. In this way, the target image can be generated flexibly according to the image generation conditions, which improves the flexibility of image generation.
  • FIG. 10 shows a schematic structural diagram of a computer system 1000 suitable for implementing the devices of the embodiments of the present application (for example, the devices 101, 102, 103, and 105 shown in FIG. 1).
  • the device shown in FIG. 10 is only an example, and should not impose any limitations on the functions and scope of use of the embodiments of the present application.
  • As shown in FIG. 10, the computer system 1000 includes a processor (e.g., a CPU, central processing unit) 1001, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage section 1008 into a random access memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the system 1000 are also stored.
  • The processor 1001, the ROM 1002, and the RAM 1003 are connected to each other through a bus 1004.
  • An input/output (I/O) interface 1005 is also connected to the bus 1004.
  • The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, etc.; an output section 1007 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker, etc.; a storage section 1008 including a hard disk, etc.; and a communication section 1009 including a network interface card such as a LAN card, a modem, and the like. The communication section 1009 performs communication processing via a network such as the Internet.
  • a drive 1010 is also connected to the I/O interface 1005 as needed.
  • a removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is mounted on the drive 1010 as needed so that a computer program read therefrom is installed into the storage section 1008 as needed.
  • In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 1009, and/or installed from the removable medium 1011.
  • When the computer program is executed by the processor 1001, the above-mentioned functions defined in the method of the present application are performed.
  • the computer-readable medium of the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • The computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • A computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the client computer, partly on the client computer, as a stand-alone software package, partly on the client computer and partly on a remote computer, or entirely on the remote computer or server.
  • In the case involving a remote computer, the remote computer may be connected to the client computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • The flowcharts and block diagrams in the figures illustrate possible architectures, functions, and operations of the apparatus, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical functions.
  • It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present application may be implemented in a software manner, and may also be implemented in a hardware manner.
  • The described units may also be provided in a processor, which may, for example, be described as: a processor including a composition unit, a scene construction unit, a color matching unit, and a rendering unit. The names of these units do not constitute a limitation on the units themselves in some cases; for example, the composition unit may also be described as "a unit that performs image composition based on acquired image generation conditions, determines the primitive types and layout information of the image, and obtains a composition image".
  • As another aspect, the present application also provides a computer-readable medium.
  • The computer-readable medium may be included in the device described in the above embodiments, or may exist alone without being assembled into the device.
  • The computer-readable medium carries one or more programs which, when executed by the device, cause the device to: perform image composition based on acquired image generation conditions, determine the primitive types and layout information of the image, and obtain a composition image; perform scene construction on the composition image according to scene construction information corresponding to the image generation conditions, determine, in the composition image, the primitive corresponding to each of a plurality of primitive types, and obtain a scene construction image; adjust the color of each primitive in the scene construction image according to color matching information corresponding to the image generation conditions to obtain a color matching image; and render the color matching image to obtain a target image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An image generation method and apparatus. The method includes: performing image composition based on acquired image generation conditions, determining the primitive types and layout information of the image, and obtaining a composition image (201); performing scene construction on the composition image according to scene construction information corresponding to the image generation conditions, determining, in the composition image, the primitive corresponding to each of a plurality of primitive types, and obtaining a scene construction image (202); adjusting the color of each primitive in the scene construction image according to color matching information corresponding to the image generation conditions to obtain a color matching image (203); and rendering the color matching image to obtain a target image (204). In this way, a target image can be generated flexibly according to the image generation conditions, which improves the flexibility of image generation.

Description

图像生成方法及装置
本专利申请要求于2020年9月14日提交的、申请号为202010958609.5、申请人为北京沃东天骏信息技术有限公司及北京京东世纪贸易有限公司、发明名称为“图像生成方法及装置”的中国专利申请的优先权,该申请的全文以引用的方式并入本申请中。
技术领域
本申请实施例涉及计算机技术领域,具体涉及一种图像生成方法及装置。
背景技术
目前,在图像生成领域,由于采用人工方式生成图像的效率低下,一般采用机器合图引擎生成图像。采用机器合图引擎生成图像的方式一般包括:一、基于机器学习方法,使用大量的样本训练得到图像生成模型,通过图像生成模型生成图像;二、建立一些简单的知识规则,在此知识规则下演绎生成大量的图像。
发明内容
本申请实施例提出了一种图像生成方法及装置。
第一方面,本申请实施例提供了一种图像生成方法,包括:基于获取的图像生成条件,进行图像构图,确定图像的图元类型和布局信息,得到构图图像;根据对应于图像生成条件的场景构建信息,对构图图像进行场景构建,确定构图图像中、多个图元类型中的每个图元类型对应的图元,得到场景构建图像;根据对应于图像生成条件的配色信息,调整场景构建图像中各图元的颜色,得到配色图像;对配色图像进行渲染,得到目标图像。
在一些实施例中,上述基于获取的图像生成条件,进行图像构图,确定图像的图元类型和布局信息,得到构图图像,包括:基于获取的 图像生成条件,利用预设构图知识图谱进行从部分到整体的多个层次的图像构图,确定每层图像中的图元类型和布局信息,得到表征各层次的构图信息的、层层嵌套的多层构图图像,其中,层层嵌套用于表征针对于多层构图图像中的每层构图图像,该层构图图像中嵌套有以该层构图图像中的部分图元类型为整体的下层的构图图像。
在一些实施例中,上述基于获取的图像生成条件,利用预设构图知识图谱进行从部分到整体的多个层次的图像构图,确定每层图像中的图元类型和布局信息,得到表征各层次的构图信息的、层层嵌套的多层构图图像,包括:针对于多层构图图像中的每层构图图像,执行如下操作:在该层构图图像的构图过程中,针对于基于图像生成条件和预设知识图谱所确定的多个构图参数中的每个构图参数,通过对应于该构图参数的判断信息确定该构图参数的参数值,其中,构图参数包括针对于该层构图图像中的各图元类型的操作参数、对下层构图图像的调用参数;基于各参数值,得到该层构图图像。
在一些实施例中,针对于该层构图图像,同一构图参数所确定的参数值不同,该层构图图像包括图元类型相同、布局不同的多个子构图图像;上述基于各参数值,得到该层构图图像,包括:针对于该层构图图像的构图过程中的多个构图参数中的每个构图参数,嵌套调用前序已确定参数值的构图参数的参数值,以确定包括多个构图参数的参数值的多套参数值组;基于多套参数值组,得到该层构图图像对应的图元类型相同、布局不同的多个子构图图像。
在一些实施例中,上述方法还包括:响应于确定接收到选取指令,基于选取指令,确定该层构图图像的上层构图图像所要嵌套调用的该层构图图像的至少一个子构图图像。
在一些实施例中,上述根据对应于图像生成条件的场景构建信息,对构图图像进行场景构建,确定构图图像中、多个图元类型中的每个图元类型对应的图元,得到场景构建图像,包括:针对于多层构图图像中的每层构图图像,执行如下操作:执行第一知识迁移操作,直至确定达到第一预设终止条件;根据每次第一知识迁移操作得到的目标场景构建信息,确定该层构图图像对应的最终场景构建信息;根据最 终场景构建信息,对该层构图图像进行场景构建,确定该层构图图像中、多个图元类型中的每个图元类型对应的图元,得到该层构图图像对应的场景构建图像,其中,所述第一知识迁移操作包括:基于相似度,确定第一图元类型集合对应的第一目标图元类型集合;将第一目标图元类型集合所对应的场景构建信息确定为目标场景构建信息;将第一图元类型集合所包括的、但第一目标图元类型集合所不包括的图元类型,确定为下一次第一知识迁移操作的第一图元类型集合;其中,首次的第一知识迁移操作的第一图元类型集合为该层构图图像所对应的图元类型集合。
在一些实施例中,上述确定该层构图图像中、多个图元类型中的每个图元类型对应的图元,得到该层构图图像对应的场景构建图像,包括:根据预先建立的表征图元类型和图元关系的搜索树,从图元库中确定该层构图图像中、多个图元类型中的每个图元类型对应的图元,得到该层构图图像对应的场景构建图像。
在一些实施例中,第一预设终止条件包括:第一图元类型集合为空集合,该次第一知识迁移操作的第一图元类型集合与上一次第一知识迁移操作的第一图元类型集合相同;上述方法还包括:响应于确定第一预设终止条件为该次第一知识迁移操作的第一图元类型集合与上一次第一知识迁移操作的第一图元类型集合相同,基于接收到的输入指令,在最终场景构建信息中添加表征对该次第一知识迁移操作的第一图元类型集合中的图元类型进行场景构建的信息。
在一些实施例中,上述根据对应于图像生成条件的配色信息,调整场景构建图像中各图元的颜色,得到配色图像,包括:针对于多层场景构建图像中的每层场景构建图像,执行如下操作:执行第二知识迁移操作,直至确定达到第二预设终止条件;根据每次第二知识迁移操作得到的目标配色信息,确定该层构图图像对应的最终配色信息;根据配色信息,调整该层场景构建图像中各图元的颜色,得到该层场景构建图像对应的配色图像,其中,所述第二知识迁移操作包括:基于相似度,确定第二图元类型集合对应的第二目标图元类型集合;将第二目标图元类型集合所对应的配色信息确定为目标配色信息;将第 二图元类型集合所包括的、但第二目标图元类型集合所不包括的图元类型,确定为下一次第二知识迁移操作的第二图元类型集合;其中,首次的第二知识迁移操作的第二图元类型集合为该层场景构建图像所对应的图元类型集合。
在一些实施例中,第二预设终止条件包括:第二图元类型集合为空集合,该次第二知识迁移操作的第二图元类型集合与上一次第二知识迁移操作的第二图元类型集合相同;上述方法还包括:响应于确定第二预设终止条件为该次第二知识迁移操作的第二图元类型集合与上一次第二知识迁移操作的第二图元类型集合相同,基于接收到的输入指令,在最终配色信息中添加表征对该次第二知识迁移操作的第二图元类型集合中的图元类型进行配色的信息。
第二方面,本申请实施例提供了一种图像生成装置,包括:构图单元,被配置成基于获取的图像生成条件,进行图像构图,确定图像的图元类型和布局信息,得到构图图像;场景构建单元,被配置成根据对应于图像生成条件的场景构建信息,对构图图像进行场景构建,确定构图图像中、多个图元类型中的每个图元类型对应的图元,得到场景构建图像;配色单元,被配置成根据对应于图像生成条件的配色信息,调整场景构建图像中各图元的颜色,得到配色图像;渲染单元,被配置成对配色图像进行渲染,得到目标图像。
在一些实施例中,构图单元,进一步被配置成:基于获取的图像生成条件,利用预设构图知识图谱进行从部分到整体的多个层次的图像构图,确定每层图像中的图元类型和布局信息,得到表征各层次的构图信息的、层层嵌套的多层构图图像,其中,层层嵌套用于表征针对于多层构图图像中的每层构图图像,该层构图图像中嵌套有以该层构图图像中的部分图元类型为整体的下层的构图图像。
在一些实施例中,构图单元,进一步被配置成:针对于多层构图图像中的每层构图图像,执行如下操作:在该层构图图像的构图过程中,针对于基于图像生成条件和预设知识图谱所确定的多个构图参数中的每个构图参数,通过对应于该构图参数的判断信息确定该构图参数的参数值,其中,构图参数包括针对于该层构图图像中的各图元类 型的操作参数、对下层构图图像的调用参数;基于各参数值,得到该层构图图像。
在一些实施例中,针对于该层构图图像,同一构图参数所确定的参数值不同,该层构图图像包括图元类型相同、布局不同的多个子构图图像;构图单元,进一步被配置成:针对于该层构图图像的构图过程中的多个构图参数中的每个构图参数,嵌套调用前序已确定参数值的构图参数的参数值,以确定包括多个构图参数的参数值的多套参数值组;基于多套参数值组,得到该层构图图像对应的图元类型相同、布局不同的多个子构图图像。
在一些实施例中,上述装置还包括:选取单元,被配置成响应于确定接收到选取指令,基于选取指令,确定该层构图图像的上层构图图像所要嵌套调用的该层构图图像的至少一个子构图图像。
在一些实施例中,场景构建单元,进一步被配置成:针对于多层构图图像中的每层构图图像,执行如下操作:执行第一知识迁移操作,直至确定达到第一预设终止条件;根据每次第一知识迁移操作得到的目标场景构建信息,确定该层构图图像对应的最终场景构建信息;根据最终场景构建信息,对该层构图图像进行场景构建,确定该层构图图像中、多个图元类型中的每个图元类型对应的图元,得到该层构图图像对应的场景构建图像,其中,所述第一知识迁移操作包括:基于相似度,确定第一图元类型集合对应的第一目标图元类型集合;将第一目标图元类型集合所对应的场景构建信息确定为目标场景构建信息;将第一图元类型集合所包括的、但第一目标图元类型集合所不包括的图元类型,确定为下一次第一知识迁移操作的第一图元类型集合;其中,首次的第一知识迁移操作的第一图元类型集合为该层构图图像所对应的图元类型集合。
在一些实施例中,场景构建单元,进一步被配置成:根据预先建立的表征图元类型和图元关系的搜索树,从图元库中确定该层构图图像中、多个图元类型中的每个图元类型对应的图元,得到该层构图图像对应的场景构建图像。
在一些实施例中,第一预设终止条件包括:第一图元类型集合为 空集合,该次第一知识迁移操作的第一图元类型集合与上一次第一知识迁移操作的第一图元类型集合相同;装置还包括:第一添加单元,被配置成响应于确定第一预设终止条件为该次第一知识迁移操作的第一图元类型集合与上一次第一知识迁移操作的第一图元类型集合相同,基于接收到的输入指令,在最终场景构建信息中添加表征对该次第一知识迁移操作的第一图元类型集合中的图元类型进行场景构建的信息。
在一些实施例中,配色单元,进一步被配置成:针对于多层场景构建图像中的每层场景构建图像,执行如下操作:执行第二知识迁移操作,直至确定达到第二预设终止条件;根据每次第二知识迁移操作得到的目标配色信息,确定该层构图图像对应的最终配色信息;根据配色信息,调整该层场景构建图像中各图元的颜色,得到该层场景构建图像对应的配色图像,其中,所述第二知识迁移操作包括:基于相似度,确定第二图元类型集合对应的第二目标图元类型集合;将第二目标图元类型集合所对应的配色信息确定为目标配色信息;将第二图元类型集合所包括的、但第二目标图元类型集合所不包括的图元类型,确定为下一次第二知识迁移操作的第二图元类型集合;其中,首次的第二知识迁移操作的第二图元类型集合为该层场景构建图像所对应的图元类型集合。
在一些实施例中,第二预设终止条件包括:第二图元类型集合为空集合,该次第二知识迁移操作的第二图元类型集合与上一次第二知识迁移操作的第二图元类型集合相同;装置还包括:第二添加单元,被配置成响应于确定第二预设终止条件为该次第二知识迁移操作的第二图元类型集合与上一次第二知识迁移操作的第二图元类型集合相同,基于接收到的输入指令,在最终配色信息中添加表征对该次第二知识迁移操作的第二图元类型集合中的图元类型进行配色的信息。
第三方面,本申请实施例提供了一种计算机可读介质,其上存储有计算机程序,其中,程序被处理器执行时实现如第一方面任一实现方式描述的方法。
第四方面,本申请实施例提供了一种电子设备,包括:一个或多个处理器;存储装置,其上存储有一个或多个程序,当一个或多个程 序被一个或多个处理器执行,使得一个或多个处理器实现如第一方面任一实现方式描述的方法。
本申请实施例提供的图像生成方法及装置,通过基于获取的图像生成条件,进行图像构图,确定图像的图元类型和布局信息,得到构图图像;根据对应于图像生成条件的场景构建信息,对构图图像进行场景构建,确定构图图像中、多个图元类型中的每个图元类型对应的图元,得到场景构建图像;根据对应于图像生成条件的配色信息,调整场景构建图像中各图元的颜色,得到配色图像;对配色图像进行渲染,得到目标图像,从而根据图像生成条件,可以灵活地生成目标图像,提高了图像生成的灵活性。
附图说明
通过阅读参照以下附图所作的对非限制性实施例所作的详细描述,本申请的其它特征、目的和优点将会变得更明显:
图1是本申请的一个实施例可以应用于其中的示例性***架构图;
图2是根据本申请图像生成方法的一个实施例的流程图;
图3是根据本申请的多个操作得到构图图像的示意图;
图4是根据本申请的构图过程中的参数的嵌套调用示意图;
图5是根据本申请的实施例的构图操作的示意图;
图6是根据本申请的搜索树的示意图;
图7是根据本实施例的图像生成方法的应用场景的示意图;
图8是根据本申请的图像生成方法的又一个实施例的流程图;
图9是根据本申请的图像生成装置的一个实施例的结构图;
图10是适于用来实现本申请实施例的计算机***的结构示意图。
具体实施方式
下面结合附图和实施例对本申请作进一步的详细说明。可以理解的是,此处所描述的具体实施例仅仅用于解释相关发明,而非对该发明的限定。另外还需要说明的是,为了便于描述,附图中仅示出了与有关发明相关的部分。
需要说明的是,在不冲突的情况下,本申请中的实施例及实施例中的特征可以相互组合。下面将参考附图并结合实施例来详细说明本申请。
图1示出了可以应用本申请的图像生成方法及装置的示例性架构100。
如图1所示,***架构100可以包括终端设备101、102、103,网络104和服务器105。网络104用以在终端设备101、102、103和服务器105之间提供通信链路的介质。网络104可以包括各种连接类型,例如有线、无线通信链路或者光纤电缆等等。
终端设备101、102、103可以是支持网络连接从而进行数据交互和数据处理的硬件设备或软件。当终端设备101、102、103为硬件时,其可以是支持网络连接,信息交互、显示、处理等功能的各种电子设备,包括但不限于智能手机、平板电脑、电子书阅读器、膝上型便携计算机和台式计算机等等。当终端设备101、102、103为软件时,可以安装在上述所列举的电子设备中。其可以实现成例如用来提供分布式服务的多个软件或软件模块,也可以实现成单个软件或软件模块。在此不做具体限定。
服务器105可以是提供各种服务的服务器,例如响应于确定基于用户在终端设备101、102、103的输入的图像生成条件,进行图像生成的后台处理服务器。后台处理服务器可以基于图像生成条件进行构图、场景构建、配色、渲染等处理,从而得到目标图像。可选的,后台处理服务器可以将目标图像反馈至终端设备,以供终端设备显示。作为示例,服务器105可以是云端服务器。
需要说明的是,服务器可以是硬件,也可以是软件。当服务器为硬件时,可以实现成多个服务器组成的分布式服务器集群,也可以实现成单个服务器。当服务器为软件时,可以实现成多个软件或软件模块(例如用来提供分布式服务的软件或软件模块),也可以实现成单个软件或软件模块。在此不做具体限定。
还需要说明的是,本公开的实施例所提供的图像生成方法可以由服务器执行,也可以由终端设备执行,还可以由服务器和终端设备彼 此配合执行。相应地,图像生成装置包括的各个部分(例如各个单元、子单元、模块、子模块)可以全部设置于服务器中,也可以全部设置于终端设备中,还可以分别设置于服务器和终端设备中。
应该理解,图1中的终端设备、网络和服务器的数目仅仅是示意性的。根据实现需要,可以具有任意数目的终端设备、网络和服务器。当图像生成方法运行于其上的电子设备不需要与其他电子设备进行数据传输时,该***架构可以仅包括图像生成方法运行于其上的电子设备(例如服务器或终端设备)。
继续参考图2,示出了图像生成方法的一个实施例的流程200,包括以下步骤:
步骤201,基于获取的图像生成条件,进行图像构图,确定图像的图元类型和布局信息,得到构图图像。
本实施例中,图像生成方法的执行主体(例如图1中的终端设备或服务器)可以获取图像生成条件,并基于图像生成条件,进行图像构图,确定图像的图元类型和布局信息,得到构图图像。
其中,图像生成条件表征所生成的图像的限制条件,包括以下至少一项:图像主题、图像尺寸、图像风格、图像文案内容。
图元一般指构成图像的基本图形元素,图像的构图过程为确定所生成的图像中的图元类型以及图元类型的布局信息的过程。图元类型可以根据图像中包括的各种图元的一种或多重属性进行分类得到。作为示例,根据图元在图像中的位置信息,可以将图元分为上方图元、下方图元、左方图元、右方图元等图元类型;根据图元的类别,可以将图元区分为背景、图形、文本等图元类型。
需要说明的是,本实施例中的图元类型可以是根据实际情况而灵活划分的粗粒度或细粒度的图元类型。
根据图像生成条件,上述执行主体可以在预设构图知识图谱中查找对应的构图信息,其中,预设构图知识图谱中包括各种图像生成条件与构图信息之间的对应关系的知识。作为示例,图像生成条件为年会主题,则上述执行主体可以从与年会相关的构图信息中确定对应的 图元类型与布局信息。例如,确定所要生成的图像的背景为表现热闹、欢快氛围的背景。
在本实施例的一些可选的实现方式中,为了得到更细致的构图图像,上述执行主体可以通过如下方式执行上述步骤201:
基于获取的图像生成条件,利用预设构图知识图谱进行从部分到整体的多个层次的图像构图,确定每层图像中的图元类型和布局信息,得到表征各层次的构图信息的、层层嵌套的多层构图图像。其中,层层嵌套用于表征针对于多层构图图像中的每层构图图像,该层构图图像中嵌套有以该层构图图像中的部分图元类型为整体的下层的构图图像。
上述执行主体根据图像生成条件,首先确定图像中一部分的图元类型和布局信息,得到该部分对应的构图图像。在嵌套调用该层构图图像的基础上,逐步增加新的图元类型,确定该层构图图像与新增加的图元类型的布局信息,确定下一层构图图像。如此,上述执行主体最终得到完整的构图图像。
作为示例,上述执行主体可以首先确定包括背景、文本内容两种图元类型以及上述两种图元类型的布局信息的第一层构图图像。在第一层构图图像的基础上,确定新的图元类型—***的图片,并确定第一层构图图像与新***的图片的布局信息,得到第二层构图图像。在第二层构图图像的基础上,继续确定新的图元类型—修饰图形,并确定第二层构图图像与新***的修饰图形的布局信息,得到最终的构图图像。
进一步的,本实现方式中,针对于多层构图图像中的每层构图图像,执行主体执行如下操作:
首先,在该层构图图像的构图过程中,针对于基于图像生成条件和预设知识图谱所确定的多个构图参数中的每个构图参数,通过对应于该构图参数的判断信息确定该构图参数的参数值,其中,构图参数包括针对于该层构图图像中的各图元类型的操作参数、对下层构图图像的调用参数。
作为示例,其构图参数可以是表征根据图像的尺寸确定横向、纵 向排列的参数,也可以是表征根据所***的图片确定环绕图片排列还是位于图片一侧设置的参数。判断信息则可以表征图像的长与宽的比较结果,从而可以确定当所要生成的图像的长大于宽时,所***的文本采用纵向排列;否则,所***的文本可以横向排列。
然后,基于各参数值,得到该层构图图像。
在图元类型确定的基础上,确定了各参数的参数值,则确定了该层构图图像。
本实现方式中,如图3所示,上述执行主体可以以构图操作301为单位逐步实现该层构图图像的构图过程。其中,构图操作可以是构图过程中涉及到的任意操作。作为示例,构图操作可以是针对于该层构图图像中的不同的图元类型的对齐操作。每一个构图操作均包括对应的构图参数302和构图对象303,其中,构图对象包括下层的构图图像和新添加的图元类型,通过每一构图参数对应的判断信息304确定构图参数具体的参数值。
该构图图像中的每一种构图对象可以看作组成该构图图像的一种图形。各种图形存储于对应的容器305中。容器对下层构图图像的选取和存储即是对下层的构图图像的嵌套调用。针对于存储该层构图图像的构图对象的容器,也可以设置相应的表征嵌套调用下层构图图像的构图参数,以实现对下层构图图像的嵌套调用。
可以理解,上层构图图像对该层构图图像的嵌套调用,会影响该层构图图像的构图过程,因此,该层构图图像构图过程中的每一构图操作,需要受到上层构图图像的影响。在图3中,通过变量306提现上层构图图像对构图操作的影响。
针对于构图过程中的每个构图操作,确定对应的构图参数和构图对象,其中,构图对象中包括下层的构图图像,从而实现从部分到整体的、层层嵌套的构图过程。
在本实现方式中,上述执行主体中预设有针对构图操作的类别信息。构图操作的分类信息可以是根据实际情况而划分细粒度或粗粒度的类别信息。上述执行主体基于接收到的分类信息选取操作,可以确定对应的类别的构图操作。并给予确定的类别的构图操作进行构图图 像的构图。
可以理解,针对于一层构图图像,同一构图参数所确定的参数值可能不同,也即,该层构图图像的多个参数中存在这种参数:该参数下对应的不同参数值均满足该层构图图像的图像生成条件和预设构图知识库中的知识。如此,则会使得该层构图图像包括图元类型相同、布局不同的多个子构图图像。
针对于上述同一构图参数所确定的参数值可能不同的情形,首先,上述执行主体针对于该层构图图像的构图过程中的多个构图参数中的每个构图参数,嵌套调用前序已确定参数值的构图参数的参数值,以确定包括多个构图参数的参数值的多套参数值组。对于唯一确定参数值的构图参数而言,多套参数值组之间的参数值是相同的;而对于选取不同的参数值的同一参数而言,多套参数值组之间的参数值是不同。
如图4所示,该层构图图像中包括N+2个参数节点,每个参数节点对应于一个参数。则第二个参数节点可以嵌套调用第一个参数节点,同理,第N+2个参数节点可以调用前N+1个参数节点,最终确定出参数值数组。
然后,上述执行主体基于多套参数值组,得到该层构图图像对应的图元类型相同、布局不同的多个子构图图像。在上层图像的构图过程中,可以调用下层图像中的任意子构图图像,从而增加的构图图像的丰富性。
在本实施例的一些可选的而实现方式中,上述执行主体响应于确定接收到选取指令,基于选取指令,确定该层构图图像的上层构图图像所要嵌套调用的该层构图图像的至少一个子构图图像。
如图5所示,示出了本申请实施例的构图过程中的选取控制逻辑示意图500。其中,构图过程涉及到第一层构图图像501、第二层构图图像502、第三层构图图像503和第四层构图图像504。具体的,每一层构图图像都包括多个子构图图像,每层构图图像嵌套调用下层的构图图像,最终得到第四层构图图像。
其中,每层构图图像中的子构图图像可以看做一个节点,基于在构图过程中所接收到的选取指令,上述执行主体确定了第一层构图图 像501中的子构图图像5011、5012、5013,第二层构图图像502中的子构图图像5021、5022,第二层构图图像503中的子构图图像5031、5032,以及第四层构图图像504中的子构图图像5041。上述执行主体基于所确定的各层构图图像中的子构图图像对应的节点,可以确定出构图过程中最上层的构图图像的构图路径。例如,由子构图图像5011、5021、5031、5041所指定的构图路径所对应的构图图像。
步骤202,根据对应于图像生成条件的场景构建信息,对构图图像进行场景构建,确定构图图像中、多个图元类型中的每个图元类型对应的图元,得到场景构建图像。
本实施例中,上述执行主体根据对应于图像生成条件的场景构建信息,可以对步骤201得到的构图图像进行场景构建,确定构图图像中、多个图元类型中的每个图元类型对应的图元,得到场景构建图像。
其中,构图图像的场景构建过程为确定构图图像中的图元类型对应的图元的过程,其中,还可能涉及对图元的位置信息的调整。
作为示例,场景构建信息可以是根据场景构建知识库所确定的场景构建脚本,场景构建知识库中包括各种图像生成条件下对应的场景构建知识。根据场景构建信息可以从图元库中确定构图图像中、多个图元类型中的每个图元类型对应的图元。
本实施例中,为了提高图元库的实用性,要求图元库中包括丰富的图元。除了通过第三方获取现有的图元、自主设计图元外,上述执行主体可以在第三方的原子图元(不可分解的图元)、自主设计的图元的基础上,进行图元组合,生成复合型图元。
图元库中的图元数量巨大,种类丰富,为了提高图元的确定效率,预先建立表征图元类型和图元的对应关系的搜索树。如图6所示,示出了一种搜索树示意图。搜索树500中包括各粗粒度图元类型601,针对于各粗粒度图元类型601,进行进一步划分得到细粒度图元类型602。每一细粒度图元类型602包括对应的图元603,并记载有具体的图元信息。
上述执行主体根据搜索树,可以快速地从图元库中确定该层构图图像中、多个图元类型中的每个图元类型对应的图元,得到该层构图 图像对应的场景构建图像。
在本实施例的一些可选的实现方式中,可以只创建经常使用的、比较典型的、少数量的场景构建信息,在已有的场景构建信息的基础上,上述执行主体通过知识迁移确定场景构建过程中所需的场景构建信息。
具体的,针对于多层构图图像中的每层构图图像,上述执行主体执行如下操作:
首先,执行如下第一知识迁移操作,直至确定达到第一预设终止条件:
第一,基于相似度,确定第一图元类型集合对应的第一目标图元类型集合。
其中,第一目标图元类型集合是与第一图元类型集合相似度最高的集合。作为示例,当集合之间的交集的图元类型越多时,相似度越高。
第二,将第一目标图元类型集合所对应的场景构建信息确定为目标场景构建信息。
第三,将第一图元类型集合所包括的、但第一目标图元类型集合所不包括的图元类型,确定为下一次第一知识迁移操作的第一图元类型集合。
本实现方式中,针对于每层构图图像,上述执行主体均可能执行多次第一知识迁移操作。其中,首次的第一知识迁移操作的第一图元类型集合为该层构图图像所对应的图元类型集合。该层构图图像所对应的图元类型集合也就是由该层构图图像中包括的所有图元类型所组成的集合。
然后,根据每次第一知识迁移操作得到的目标场景构建信息,确定该层构图图像对应的最终场景构建信息。
作为示例,可以将每次第一知识迁移操作得到的目标场景构建信息进行组合,得到最终场景构建信息。
最后,根据最终场景构建信息,对该层构图图像进行场景构建,确定该层构图图像中、多个图元类型中的每个图元类型对应的图元, 得到该层构图图像对应的场景构建图像。
在第一知识迁移操作中,第一预设终止条件包括:第一图元类型集合为空集合,该次第一知识迁移操作的第一图元类型集合与上一次第一知识迁移操作的第一图元类型集合相同。
第一预设终止条件为该次第一知识迁移操作的第一图元类型集合与上一次第一知识迁移操作的第一图元类型集合相同时,表明该层构图图像对应的图元类型集合中包括至今还未涉及到的图元类型,需要基于用户的输入操作,添加对应的场景构建信息。
具体的,上述执行主体响应于确定第一预设终止条件为该次第一知识迁移操作的第一图元类型集合与上一次第一知识迁移操作的第一图元类型集合相同,基于接收到的输入指令,在最终场景构建信息中添加表征对该次第一知识迁移操作的第一图元类型集合中的图元类型进行场景构建的信息。
步骤203,根据对应于图像生成条件的配色信息,调整场景构建图像中各图元的颜色,得到配色图像。
本实施例中,上述执行主体可以根据对应于图像生成条件的配色信息,调整步骤202得到的场景构建图像中各图元的颜色,得到配色图像。
配色信息可以是根据配色知识库所确定的配色脚本,配色知识库中包括各种图像生成条件下对应的配色知识。根据配色信息可以从图元库确定构图图像中、多个图元类型中的每个图元类型对应的图元。
以包括背景、图片、文本、堆图的构图图像示例,根据对应的配色信息可以执行如下操作:首先,确定背景的颜色。例如,根据上传的图片来确定背景的颜色。具体的,背景的颜色可以是上传的图片的最大颜色分量、互补色或相邻色。又例如,背景的颜色从可以从图像生成条件中限定的主题所对应的颜色集合中随机获取。然后,确定文本的颜色,文本的颜色与背景的颜色色相相同,并在此基础上修改其亮度;或者通过预设配色表进行配色,然后是对文本的阴影进行配色。文本阴影的配色与文字颜色相同,并在此基础上修改亮度。最后,对堆图进行色相迁移,使其与背景相同。
在本实施例的一些可选的实现方式中,可以只创建经常使用的、比较典型的、少数量的配色信息,在已有的配色信息的基础上,上述执行主体通过知识迁移确定配色过程中所需的配色信息。
具体的,针对于多层场景构建图像中的每层场景构建图像,执行如下操作:
首先,执行如下第二知识迁移操作,直至确定达到第二预设终止条件:
第一,基于相似度,确定第二图元类型集合对应的第二目标图元类型集合。
其中,第二目标图元类型集合是与第二图元类型集合相似度最高的集合。作为示例,当集合之间的交集的图元类型越多时,相似度越高。
第二,将第二目标图元类型集合所对应的配色信息确定为目标配色信息。
第三,将第二图元类型集合所包括的、但第二目标图元类型集合所不包括的图元类型,确定为下一次第二知识迁移操作的第二图元类型集合。
本实现方式中,针对于每层场景构建图像,上述执行主体均可能执行多次第二知识迁移操作。其中,首次的第二知识迁移操作的第二图元类型集合为该层场景构建图像所对应的图元类型集合。
然后,根据每次第二知识迁移操作得到的目标配色信息,确定该层构图图像对应的最终配色信息。
作为示例,可以将每次第二知识迁移操作得到的目标配色信息进行组合,得到最终配色场景构建信息。
最后,根据配色信息,调整该层场景构建图像中各图元的颜色,得到该层场景构建图像对应的配色图像。
在第二知识迁移操作中,第二预设终止条件包括:第二图元类型集合为空集合,该次第二知识迁移操作的第二图元类型集合与上一次第二知识迁移操作的第二图元类型集合相同。
当第二预设终止条件为该次第二知识迁移操作的第二图元类型集 合与上一次第二知识迁移操作的第二图元类型集合相同时,表明该层场景构建图像对应的图元类型集合中包括至今还未涉及到的图元类型,需要基于用户的输入操作,添加对应的配色信息。
具体的,上述执行主体响应于确定第二预设终止条件为该次第二知识迁移操作的第二图元类型集合与上一次第二知识迁移操作的第二图元类型集合相同,基于接收到的输入指令,在最终配色信息中添加表征对该次第二知识迁移操作的第二图元类型集合中的图元类型进行配色的信息。
步骤204,对配色图像进行渲染,得到目标图像。
本实施例中,上述执行主体可以对步骤203得到的配色图像进行渲染,得到目标图像。
图像渲染中要完成的工作是:通过几何变换,投影变换,透视变换和窗口剪裁,再通过获取的材质与光影信息,生成图像。
继续参见图7,图7是根据本实施例的图像生成方法的应用场景的一个示意图。在图7的应用场景中,用户701通过终端设备702输入图像生成条件。服务器703基于获取的图像生成条件,进行图像构图,确定图像的图元类型和布局信息,得到构图图像704;根据对应于图像生成条件的场景构建信息,对构图图像704进行场景构建,确定构图图像704中、多个图元类型中的每个图元类型对应的图元,得到场景构建图像705;根据对应于图像生成条件的配色信息,调整场景构建图像中各图元的颜色,得到配色图像706;对配色图像进行渲染,得到目标图像707,并将目标图像707反馈至终端设备702。
本公开的上述实施例提供的方法,通过基于获取的图像生成条件,进行图像构图,确定图像的图元类型和布局信息,得到构图图像;根据对应于图像生成条件的场景构建信息,对构图图像进行场景构建,确定构图图像中、多个图元类型中的每个图元类型对应的图元,得到场景构建图像;根据对应于图像生成条件的配色信息,调整场景构建图像中各图元的颜色,得到配色图像;对配色图像进行渲染,得到目标图像,从而根据图像生成条件,可以灵活地生成目标图像,提高了图像生成的灵活性。
继续参考图8,示出了根据本申请的图像生成方法的另一个实施例的示意性流程800,包括以下步骤:
步骤801,基于获取的图像生成条件,利用预设构图知识图谱进行从部分到整体的多个层次的图像构图,确定每层图像中的图元类型和布局信息,得到表征各层次的构图信息的、层层嵌套的多层构图图像。
步骤802,针对于多层构图图像中的每层构图图像,执行如下操作:
步骤8021,执行如下第一知识迁移操作,直至确定达到第一预设终止条件:
步骤80211,基于相似度,确定第一图元类型集合对应的第一目标图元类型集合。
步骤80212,将第一目标图元类型集合所对应的场景构建信息确定为目标场景构建信息。
步骤80213,将第一图元类型集合所包括的、但第一目标图元类型集合所不包括的图元类型,确定为下一次第一知识迁移操作的第一图元类型集合;其中,首次的第一知识迁移操作的第一图元类型集合为该层构图图像所对应的图元类型集合。
步骤8022,根据每次第一知识迁移操作得到的目标场景构建信息,确定该层构图图像对应的最终场景构建信息。
步骤8023,根据最终场景构建信息,对该层构图图像进行场景构建,确定该层构图图像中、多个图元类型中的每个图元类型对应的图元,得到该层构图图像对应的场景构建图像。
步骤803,针对于多层场景构建图像中的每层场景构建图像,执行如下操作:
步骤8031,执行如下第二知识迁移操作,直至确定达到第二预设终止条件:
步骤80311,基于相似度,确定第二图元类型集合对应的第二目标图元类型集合。
步骤80312,将第二目标图元类型集合所对应的配色信息确定为目标配色信息。
步骤80313,将第二图元类型集合所包括的、但第二目标图元类型集合所不包括的图元类型,确定为下一次第二知识迁移操作的第二图元类型集合。
其中,首次的第二知识迁移操作的第二图元类型集合为该层场景构建图像所对应的图元类型集合。
步骤8032,根据每次第二知识迁移操作得到的目标配色信息,确定该层构图图像对应的最终配色信息。
步骤8033,根据配色信息,调整该层场景构建图像中各图元的颜色,得到该层场景构建图像对应的配色图像。
步骤804,对配色图像进行渲染,得到目标图像。
从本实施例中可以看出,与图2对应的实施例相比,本实施例中的图像生成方法的流程800具体说明了分层次的构图过程,以及场景构建过程和配色过程中的知识迁移过程。如此,提高了本实施例图像生成的质量和效率。
继续参考图9,作为对上述各图所示方法的实现,本公开提供了一种图像生成装置的一个实施例,该装置实施例与图2所示的方法实施例相对应,该装置具体可以应用于各种电子设备中。
如图9所示,图像生成装置包括:包括:构图单元901,被配置成基于获取的图像生成条件,进行图像构图,确定图像的图元类型和布局信息,得到构图图像;场景构建单元902,被配置成根据对应于图像生成条件的场景构建信息,对构图图像进行场景构建,确定构图图像中、多个图元类型中的每个图元类型对应的图元,得到场景构建图像;配色单元903,被配置成根据对应于图像生成条件的配色信息,调整场景构建图像中各图元的颜色,得到配色图像;渲染单元904,被配置成对配色图像进行渲染,得到目标图像。
在一些实施例中,构图单元901,进一步被配置成:基于获取的图像生成条件,利用预设构图知识图谱进行从部分到整体的多个层次的 图像构图,确定每层图像中的图元类型和布局信息,得到表征各层次的构图信息的、层层嵌套的多层构图图像,其中,层层嵌套用于表征针对于多层构图图像中的每层构图图像,该层构图图像中嵌套有以该层构图图像中的部分图元类型为整体的下层的构图图像。
在一些实施例中,构图单元901,进一步被配置成:针对于多层构图图像中的每层构图图像,执行如下操作:在该层构图图像的构图过程中,针对于基于图像生成条件和预设知识图谱所确定的多个构图参数中的每个构图参数,通过对应于该构图参数的判断信息确定该构图参数的参数值,其中,构图参数包括针对于该层构图图像中的各图元类型的操作参数、对下层构图图像的调用参数;基于各参数值,得到该层构图图像。
在一些实施例中,针对于该层构图图像,同一构图参数所确定的参数值不同,该层构图图像包括图元类型相同、布局不同的多个子构图图像;构图单元901,进一步被配置成:针对于该层构图图像的构图过程中的多个构图参数中的每个构图参数,嵌套调用前序已确定参数值的构图参数的参数值,以确定包括多个构图参数的参数值的多套参数值组;基于多套参数值组,得到该层构图图像对应的图元类型相同、布局不同的多个子构图图像。
在一些实施例中,上述装置还包括:选取单元(图中未示出),被配置成响应于确定接收到选取指令,基于选取指令,确定该层构图图像的上层构图图像所要嵌套调用的该层构图图像的至少一个子构图图像。
在一些实施例中,场景构建单元902,进一步被配置成:针对于多层构图图像中的每层构图图像,执行如下操作:执行第一知识迁移操作,直至确定达到第一预设终止条件;根据每次第一知识迁移操作得到的目标场景构建信息,确定该层构图图像对应的最终场景构建信息;根据最终场景构建信息,对该层构图图像进行场景构建,确定该层构图图像中、多个图元类型中的每个图元类型对应的图元,得到该层构图图像对应的场景构建图像,其中,第一知识迁移操作包括:基于相似度,确定第一图元类型集合对应的第一目标图元类型集合;将第一 目标图元类型集合所对应的场景构建信息确定为目标场景构建信息;将第一图元类型集合所包括的、但第一目标图元类型集合所不包括的图元类型,确定为下一次第一知识迁移操作的第一图元类型集合;其中,首次的第一知识迁移操作的第一图元类型集合为该层构图图像所对应的图元类型集合。
在一些实施例中,场景构建单元902,进一步被配置成:根据预先建立的表征图元类型和图元关系的搜索树,从图元库中确定该层构图图像中、多个图元类型中的每个图元类型对应的图元,得到该层构图图像对应的场景构建图像。
在一些实施例中,第一预设终止条件包括:第一图元类型集合为空集合,该次第一知识迁移操作的第一图元类型集合与上一次第一知识迁移操作的第一图元类型集合相同;装置还包括:第一添加单元(图中未示出),被配置成响应于确定第一预设终止条件为该次第一知识迁移操作的第一图元类型集合与上一次第一知识迁移操作的第一图元类型集合相同,基于接收到的输入指令,在最终场景构建信息中添加表征对该次第一知识迁移操作的第一图元类型集合中的图元类型进行场景构建的信息。
在一些实施例中,配色单元903,进一步被配置成:针对于多层场景构建图像中的每层场景构建图像,执行如下操作:执行第二知识迁移操作,直至确定达到第二预设终止条件;根据每次第二知识迁移操作得到的目标配色信息,确定该层构图图像对应的最终配色信息;根据配色信息,调整该层场景构建图像中各图元的颜色,得到该层场景构建图像对应的配色图像,其中,第二知识迁移操作包括:基于相似度,确定第二图元类型集合对应的第二目标图元类型集合;将第二目标图元类型集合所对应的配色信息确定为目标配色信息;将第二图元类型集合所包括的、但第二目标图元类型集合所不包括的图元类型,确定为下一次第二知识迁移操作的第二图元类型集合;其中,首次的第二知识迁移操作的第二图元类型集合为该层场景构建图像所对应的图元类型集合。
在一些实施例中,第二预设终止条件包括:第二图元类型集合为 空集合,该次第二知识迁移操作的第二图元类型集合与上一次第二知识迁移操作的第二图元类型集合相同;装置还包括:第二添加单元(图中未示出),被配置成响应于确定第二预设终止条件为该次第二知识迁移操作的第二图元类型集合与上一次第二知识迁移操作的第二图元类型集合相同,基于接收到的输入指令,在最终配色信息中添加表征对该次第二知识迁移操作的第二图元类型集合中的图元类型进行配色的信息。
本实施例中,图像生成装置中的构图单元基于获取的图像生成条件,进行图像构图,确定图像的图元类型和布局信息,得到构图图像;场景构建单元根据对应于图像生成条件的场景构建信息,对构图图像进行场景构建,确定构图图像中、多个图元类型中的每个图元类型对应的图元,得到场景构建图像;配色单元根据对应于图像生成条件的配色信息,调整场景构建图像中各图元的颜色,得到配色图像;渲染单元对配色图像进行渲染,得到目标图像,从而根据图像生成条件,可以灵活地生成目标图像,提高了图像生成的灵活性。
下面参考图10,其示出了适于用来实现本申请实施例的设备(例如图1所示的设备101、102、103、105)的计算机***1000的结构示意图。图10示出的设备仅仅是一个示例,不应对本申请实施例的功能和使用范围带来任何限制。
如图10所示,计算机***1000包括处理器(例如CPU,中央处理器)1001,其可以根据存储在只读存储器(ROM)1002中的程序或者从存储部分1008加载到随机访问存储器(RAM)1003中的程序而执行各种适当的动作和处理。在RAM1003中,还存储有***1000操作所需的各种程序和数据。处理器1001、ROM1002以及RAM1003通过总线1004彼此相连。输入/输出(I/O)接口1005也连接至总线1004。
以下部件连接至I/O接口1005:包括键盘、鼠标等的输入部分1006;包括诸如阴极射线管(CRT)、液晶显示器(LCD)等以及扬声器等的输出部分1007;包括硬盘等的存储部分1008;以及包括诸如LAN卡、调制解调器等的网络接口卡的通信部分1009。通信部分1009经由诸 如因特网的网络执行通信处理。驱动器1010也根据需要连接至I/O接口1005。可拆卸介质1011,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器1010上,以便于从其上读出的计算机程序根据需要被安装入存储部分1008。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分1009从网络上被下载和安装,和/或从可拆卸介质1011被安装。在该计算机程序被处理器1001执行时,执行本申请的方法中限定的上述功能。
需要说明的是,本申请的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的***、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本申请中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行***、装置或者器件使用或者与其结合使用。而在本申请中,计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行***、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:无线、电线、光缆、RF等等,或者上述的任意合适的组合。
可以以一种或多种程序设计语言或其组合来编写用于执行本申请的操作的计算机程序代码,程序设计语言包括面向目标的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在客户计算机上执行、部分地在客户计算机上执行、作为一个独立的软件包执行、部分在客户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到客户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本申请各种实施例的装置、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的***来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本申请实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。所描述的单元也可以设置在处理器中,例如,可以描述为:一种处理器,包括构图单元、场景构建单元、配色单元和渲染单元。其中,这些单元的名称在某种情况下并不构成对该单元本身的限定,例如,构图单元还可以被描述为“基于获取的图像生成条件,进行图像构图,确定图像的图元类型和布局信息,得到构图图像的单元”。
作为另一方面,本申请还提供了一种计算机可读介质,该计算机可读介质可以是上述实施例中描述的设备中所包含的;也可以是单独 存在,而未装配入该设备中。上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该装置执行时,使得该计算机设备:基于获取的图像生成条件,进行图像构图,确定图像的图元类型和布局信息,得到构图图像;根据对应于图像生成条件的场景构建信息,对构图图像进行场景构建,确定构图图像中、多个图元类型中的每个图元类型对应的图元,得到场景构建图像;根据对应于图像生成条件的配色信息,调整场景构建图像中各图元的颜色,得到配色图像;对配色图像进行渲染,得到目标图像。
以上描述仅为本申请的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本申请中所涉及的发明范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述发明构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本申请中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。

Claims (22)

  1. 一种图像生成方法,包括:
    基于获取的图像生成条件,进行图像构图,确定图像的图元类型和布局信息,得到构图图像;
    根据对应于所述图像生成条件的场景构建信息,对所述构图图像进行场景构建,确定所述构图图像中、多个图元类型中的每个图元类型对应的图元,得到场景构建图像;
    根据对应于所述图像生成条件的配色信息,调整所述场景构建图像中各图元的颜色,得到配色图像;
    对所述配色图像进行渲染,得到目标图像。
  2. 根据权利要求1所述的方法,其中,所述基于获取的图像生成条件,进行图像构图,确定图像的图元类型和布局信息,得到构图图像,包括:
    基于获取的图像生成条件,利用预设构图知识图谱进行从部分到整体的多个层次的图像构图,确定每层图像中的图元类型和布局信息,得到表征各层次的构图信息的、层层嵌套的多层构图图像,其中,所述层层嵌套用于表征针对于所述多层构图图像中的每层构图图像,该层构图图像中嵌套有以该层构图图像中的部分图元类型为整体的下层的构图图像。
  3. 根据权利要求1所述的方法,其中,所述基于获取的图像生成条件,利用预设构图知识图谱进行从部分到整体的多个层次的图像构图,确定每层图像中的图元类型和布局信息,得到表征各层次的构图信息的、层层嵌套的多层构图图像,包括:
    针对于所述多层构图图像中的每层构图图像,执行如下操作:
    在该层构图图像的构图过程中,针对于基于所述图像生成条件和所述预设知识图谱所确定的多个构图参数中的每个构图参数,通过对应于该构图参数的判断信息确定该构图参数的参数值,其中,所述构 图参数包括针对于该层构图图像中的各图元类型的操作参数、对下层构图图像的调用参数;
    基于各参数值,得到该层构图图像。
  4. 根据权利要求3所述的方法,其中,针对于该层构图图像,同一构图参数所确定的参数值不同,该层构图图像包括图元类型相同、布局不同的多个子构图图像;
    所述基于各参数值,得到该层构图图像,包括:
    针对于该层构图图像的构图过程中的多个构图参数中的每个构图参数,嵌套调用前序已确定参数值的构图参数的参数值,以确定包括所述多个构图参数的参数值的多套参数值组;
    基于所述多套参数值组,得到该层构图图像对应的图元类型相同、布局不同的多个子构图图像。
  5. 根据权利要求4所述的方法,还包括:
    响应于确定接收到选取指令,基于所述选取指令,确定该层构图图像的上层构图图像所要嵌套调用的该层构图图像的至少一个子构图图像。
  6. 根据权利要求2所述的方法,其中,所述根据对应于所述图像生成条件的场景构建信息,对所述构图图像进行场景构建,确定所述构图图像中、多个图元类型中的每个图元类型对应的图元,得到场景构建图像,包括:
    针对于所述多层构图图像中的每层构图图像,执行如下操作:
    执行第一知识迁移操作,直至确定达到第一预设终止条件;
    根据每次第一知识迁移操作得到的目标场景构建信息,确定该层构图图像对应的最终场景构建信息;
    根据所述最终场景构建信息,对该层构图图像进行场景构建,确定该层构图图像中、多个图元类型中的每个图元类型对应的图元,得到该层构图图像对应的场景构建图像;
    其中,所述第一知识迁移操作包括:基于相似度,确定第一图元类型集合对应的第一目标图元类型集合;将所述第一目标图元类型集合所对应的场景构建信息确定为目标场景构建信息;将所述第一图元类型集合所包括的、但所述第一目标图元类型集合所不包括的图元类型,确定为下一次第一知识迁移操作的第一图元类型集合;其中,首次的第一知识迁移操作的第一图元类型集合为该层构图图像所对应的图元类型集合。
  7. 根据权利要求6所述的方法,其中,所述确定该层构图图像中、多个图元类型中的每个图元类型对应的图元,得到该层构图图像对应的场景构建图像,包括:
    根据预先建立的表征图元类型和图元关系的搜索树,从图元库中确定该层构图图像中、多个图元类型中的每个图元类型对应的图元,得到该层构图图像对应的场景构建图像。
  8. 根据权利要求6所述的方法,其中,所述第一预设终止条件包括:第一图元类型集合为空集合,该次第一知识迁移操作的第一图元类型集合与上一次第一知识迁移操作的第一图元类型集合相同;
    所述方法还包括:
    响应于确定所述第一预设终止条件为该次第一知识迁移操作的第一图元类型集合与上一次第一知识迁移操作的第一图元类型集合相同,基于接收到的输入指令,在所述最终场景构建信息中添加表征对该次第一知识迁移操作的第一图元类型集合中的图元类型进行场景构建的信息。
  9. 根据权利要求6所述的方法,其中,所述根据对应于所述图像生成条件的配色信息,调整所述场景构建图像中各图元的颜色,得到配色图像,包括:
    针对于多层场景构建图像中的每层场景构建图像,执行如下操作:
    执行第二知识迁移操作,直至确定达到第二预设终止条件;
    根据每次第二知识迁移操作得到的目标配色信息,确定该层构图图像对应的最终配色信息;
    根据所述配色信息,调整该层场景构建图像中各图元的颜色,得到该层场景构建图像对应的配色图像;
    其中,所述第二知识迁移操作包括:基于相似度,确定第二图元类型集合对应的第二目标图元类型集合;将所述第二目标图元类型集合所对应的配色信息确定为目标配色信息;将所述第二图元类型集合所包括的、但所述第二目标图元类型集合所不包括的图元类型,确定为下一次第二知识迁移操作的第二图元类型集合;其中,首次的第二知识迁移操作的第二图元类型集合为该层场景构建图像所对应的图元类型集合。
  10. 根据权利要求9所述的方法,其中,所述第二预设终止条件包括:第二图元类型集合为空集合,该次第二知识迁移操作的第二图元类型集合与上一次第二知识迁移操作的第二图元类型集合相同;
    所述方法还包括:
    响应于确定所述第二预设终止条件为该次第二知识迁移操作的第二图元类型集合与上一次第二知识迁移操作的第二图元类型集合相同,基于接收到的输入指令,在所述最终配色信息中添加表征对该次第二知识迁移操作的第二图元类型集合中的图元类型进行配色的信息。
  11. 一种图像生成装置,包括:
    构图单元,被配置成基于获取的图像生成条件,进行图像构图,确定图像的图元类型和布局信息,得到构图图像;
    场景构建单元,被配置成根据对应于所述图像生成条件的场景构建信息,对所述构图图像进行场景构建,确定所述构图图像中、多个图元类型中的每个图元类型对应的图元,得到场景构建图像;
    配色单元,被配置成根据对应于所述图像生成条件的配色信息,调整所述场景构建图像中各图元的颜色,得到配色图像;
    渲染单元,被配置成对所述配色图像进行渲染,得到目标图像。
  12. 根据权利要求11所述的装置,其中,所述构图单元,进一步被配置成:
    基于获取的图像生成条件,利用预设构图知识图谱进行从部分到整体的多个层次的图像构图,确定每层图像中的图元类型和布局信息,得到表征各层次的构图信息的、层层嵌套的多层构图图像,其中,所述层层嵌套用于表征针对于所述多层构图图像中的每层构图图像,该层构图图像中嵌套有以该层构图图像中的部分图元类型为整体的下层的构图图像。
  13. 根据权利要求11所述的装置,其中,所述构图单元,进一步被配置成:
    针对于所述多层构图图像中的每层构图图像,执行如下操作:在该层构图图像的构图过程中,针对于基于所述图像生成条件和所述预设知识图谱所确定的多个构图参数中的每个构图参数,通过对应于该构图参数的判断信息确定该构图参数的参数值,其中,所述构图参数包括针对于该层构图图像中的各图元类型的操作参数、对下层构图图像的调用参数;基于各参数值,得到该层构图图像。
  14. 根据权利要求13所述的装置,其中,针对于该层构图图像,同一构图参数所确定的参数值不同,该层构图图像包括图元类型相同、布局不同的多个子构图图像;
    所述构图单元,进一步被配置成:针对于该层构图图像的构图过程中的多个构图参数中的每个构图参数,嵌套调用前序已确定参数值的构图参数的参数值,以确定包括所述多个构图参数的参数值的多套参数值组;基于所述多套参数值组,得到该层构图图像对应的图元类型相同、布局不同的多个子构图图像。
  15. 根据权利要求14所述的装置,还包括:
    选取单元,被配置成响应于确定接收到选取指令,基于所述选取 指令,确定该层构图图像的上层构图图像所要嵌套调用的该层构图图像的至少一个子构图图像。
  16. 根据权利要求12所述的装置,其中,所述场景构建单元,进一步被配置成:
    针对于所述多层构图图像中的每层构图图像,执行如下操作:执行第一知识迁移操作,直至确定达到第一预设终止条件;根据每次第一知识迁移操作得到的目标场景构建信息,确定该层构图图像对应的最终场景构建信息;根据所述最终场景构建信息,对该层构图图像进行场景构建,确定该层构图图像中、多个图元类型中的每个图元类型对应的图元,得到该层构图图像对应的场景构建图像,其中,所述第一知识迁移操作包括:基于相似度,确定第一图元类型集合对应的第一目标图元类型集合;将所述第一目标图元类型集合所对应的场景构建信息确定为目标场景构建信息;将所述第一图元类型集合所包括的、但所述第一目标图元类型集合所不包括的图元类型,确定为下一次第一知识迁移操作的第一图元类型集合;其中,首次的第一知识迁移操作的第一图元类型集合为该层构图图像所对应的图元类型集合。
  17. 根据权利要求16所述的装置,其中,所述场景构建单元,进一步被配置成:
    根据预先建立的表征图元类型和图元关系的搜索树,从图元库中确定该层构图图像中、多个图元类型中的每个图元类型对应的图元,得到该层构图图像对应的场景构建图像。
  18. 根据权利要求16所述的装置,其中,所述第一预设终止条件包括:第一图元类型集合为空集合,该次第一知识迁移操作的第一图元类型集合与上一次第一知识迁移操作的第一图元类型集合相同;
    所述装置还包括:第一添加单元,被配置成响应于确定所述第一预设终止条件为该次第一知识迁移操作的第一图元类型集合与上一次第一知识迁移操作的第一图元类型集合相同,基于接收到的输入指令, 在所述最终场景构建信息中添加表征对该次第一知识迁移操作的第一图元类型集合中的图元类型进行场景构建的信息。
  19. 根据权利要求16所述的装置,其中,所述配色单元,进一步被配置成:
    针对于多层场景构建图像中的每层场景构建图像,执行如下操作:执行第二知识迁移操作,直至确定达到第二预设终止条件;根据每次第二知识迁移操作得到的目标配色信息,确定该层构图图像对应的最终配色信息;根据所述配色信息,调整该层场景构建图像中各图元的颜色,得到该层场景构建图像对应的配色图像,其中,所述第二知识迁移操作包括:基于相似度,确定第二图元类型集合对应的第二目标图元类型集合;将所述第二目标图元类型集合所对应的配色信息确定为目标配色信息;将所述第二图元类型集合所包括的、但所述第二目标图元类型集合所不包括的图元类型,确定为下一次第二知识迁移操作的第二图元类型集合;其中,首次的第二知识迁移操作的第二图元类型集合为该层场景构建图像所对应的图元类型集合。
  20. 根据权利要求19所述的装置,其中,所述第二预设终止条件包括:第二图元类型集合为空集合,该次第二知识迁移操作的第二图元类型集合与上一次第二知识迁移操作的第二图元类型集合相同;
    所述装置还包括:第二添加单元,被配置成响应于确定所述第二预设终止条件为该次第二知识迁移操作的第二图元类型集合与上一次第二知识迁移操作的第二图元类型集合相同,基于接收到的输入指令,在所述最终配色信息中添加表征对该次第二知识迁移操作的第二图元类型集合中的图元类型进行配色的信息。
  21. 一种计算机可读介质,其上存储有计算机程序,其中,所述程序被处理器执行时实现如权利要求1-10中任一所述的方法。
  22. 一种电子设备,包括:
    一个或多个处理器;
    存储装置,其上存储有一个或多个程序,
    当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如权利要求1-10中任一所述的方法。
PCT/CN2021/095458 2020-09-14 2021-05-24 图像生成方法及装置 WO2022052509A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21865570.2A EP4213097A1 (en) 2020-09-14 2021-05-24 Image generation method and apparatus
US18/245,081 US20230401763A1 (en) 2020-09-14 2021-05-24 Image Generation Method and Apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010958609.5 2020-09-14
CN202010958609.5A CN112308939B (zh) 2020-09-14 2020-09-14 图像生成方法及装置

Publications (1)

Publication Number Publication Date
WO2022052509A1 true WO2022052509A1 (zh) 2022-03-17

Family

ID=74483888

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/095458 WO2022052509A1 (zh) 2020-09-14 2021-05-24 图像生成方法及装置

Country Status (4)

Country Link
US (1) US20230401763A1 (zh)
EP (1) EP4213097A1 (zh)
CN (1) CN112308939B (zh)
WO (1) WO2022052509A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308939B (zh) * 2020-09-14 2024-04-16 北京沃东天骏信息技术有限公司 图像生成方法及装置
CN113822957A (zh) * 2021-02-26 2021-12-21 北京沃东天骏信息技术有限公司 用于合成图像的方法和装置
CN112966617B (zh) * 2021-03-11 2022-10-21 北京三快在线科技有限公司 摆盘图像的生成方法、图像生成模型的训练方法及装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0381877A (ja) * 1989-08-25 1991-04-08 Hitachi Ltd 3次元コンピユータグラフイツクスのレンダリングシステム
US20080186519A1 (en) * 2007-02-06 2008-08-07 Fuji Xerox Co., Ltd. Image generating apparatus, image generating method and computer readable medium
JP2009294917A (ja) * 2008-06-05 2009-12-17 Meiji Univ 配色支援装置、配色支援方法、及び配色支援プログラム
CN109345612A (zh) * 2018-09-13 2019-02-15 腾讯数码(天津)有限公司 一种图像生成方法、装置、设备和存储介质
CN111009041A (zh) * 2019-11-15 2020-04-14 广东智媒云图科技股份有限公司 一种绘画创作方法、装置、终端设备及可读存储介质
CN111402364A (zh) * 2018-12-27 2020-07-10 北京字节跳动网络技术有限公司 一种图像生成方法、装置、终端设备及存储介质
CN112308939A (zh) * 2020-09-14 2021-02-02 北京沃东天骏信息技术有限公司 图像生成方法及装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060214947A1 (en) * 2005-03-23 2006-09-28 The Boeing Company System, method, and computer program product for animating drawings
CN105243119B (zh) * 2015-09-29 2019-05-24 百度在线网络技术(北京)有限公司 确定图像的待叠加区域、叠加图像、图片呈现方法和装置
GB2551689B (en) * 2016-04-22 2021-05-12 Advanced Risc Mach Ltd Method and Apparatus for processing graphics
CN111061902B (zh) * 2019-12-12 2023-12-19 广东智媒云图科技股份有限公司 一种基于文本语义分析的绘图方法、装置及终端设备


Also Published As

Publication number Publication date
US20230401763A1 (en) 2023-12-14
EP4213097A1 (en) 2023-07-19
CN112308939A (zh) 2021-02-02
CN112308939B (zh) 2024-04-16

Similar Documents

Publication Publication Date Title
WO2022052509A1 (zh) 图像生成方法及装置
US11989802B2 (en) System for supporting flexible color assignment in complex documents
US10346143B2 (en) Systems and methods for transforming service definitions in a multi-service containerized application
US10439987B2 (en) Systems and methods for securing network traffic flow in a multi-service containerized application
US8830272B2 (en) User interface for a digital production system including multiple window viewing of flowgraph nodes
US10580013B2 (en) Method and apparatus for autonomous services composition
US10347012B2 (en) Interactive color palette interface for digital painting
US20110167336A1 (en) Gesture-based web site design
WO2022242352A1 (zh) 构建图像语义分割模型和图像处理的方法、装置、电子设备及介质
WO2022052973A1 (zh) 一种模型处理方法、装置、设备及计算机可读存储介质
CN113010612B (zh) 一种图数据可视化构建方法、查询方法及装置
CN109725980A (zh) 生成镜像标签的方法、设备以及计算机可读介质
WO2023077951A1 (zh) 数据渲染方法及装置
JP2019200782A (ja) グリフを生成する方法並びに生成されたグリフに基づきデータの視覚的表現を作成する方法、プログラム及びコンピュータ装置
CN108958611B (zh) 一种信息编辑方法及装置
US20230401346A1 (en) Session collaboration system
WO2023207981A1 (zh) 配置文件生成方法、装置、电子设备、介质及程序产品
CN115617441A (zh) 绑定模型和图元的方法、装置、存储介质及计算机设备
US20190384615A1 (en) Containerized runtime environments
CN115359145A (zh) 流程图元的绘制方法、装置、存储介质及计算机设备
CN115130442A (zh) 报表生成的方法、装置、存储介质及计算机设备
US11068145B2 (en) Techniques for creative review of 3D content in a production environment
US10691418B1 (en) Process modeling on small resource constraint devices
US20190347078A1 (en) Generating Application Mock-Ups Based on Captured User Sessions
US10930036B2 (en) Bar chart optimization

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21865570

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2021865570

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021865570

Country of ref document: EP

Effective date: 20230414