CN110866138A - Background generation method and system, computer system, and computer-readable storage medium - Google Patents

Background generation method and system, computer system, and computer-readable storage medium

Info

Publication number
CN110866138A
Authority
CN
China
Prior art keywords
background
target
neural network
context
present disclosure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810945726.0A
Other languages
Chinese (zh)
Inventor
王勇 (Wang Yong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JD Digital Technology Holdings Co Ltd
Original Assignee
JD Digital Technology Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by JD Digital Technology Holdings Co Ltd filed Critical JD Digital Technology Holdings Co Ltd
Priority to CN201810945726.0A priority Critical patent/CN110866138A/en
Publication of CN110866138A publication Critical patent/CN110866138A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a background generation method, which includes acquiring description information input by a user; determining, from the description information, at least one feature for describing a target background, wherein the target background is used for rendering a target object; and inputting the at least one feature into a target neural network, and generating the target background through the target neural network. The present disclosure also provides a background generation system, a computer system, and a computer-readable storage medium.

Description

Background generation method and system, computer system, and computer-readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a background generation method and system, a computer system, and a computer-readable storage medium.
Background
With the rapid development of artificial intelligence technology, machine understanding and creation capabilities are continuously improving. For example, Google's Duplex voice assistant can converse with humans quite naturally within a specific task, helping people complete that task. Microsoft Xiaoice attempts to compose poetry, treating the algorithm as an artistic form; the machine thereby gains a certain creativity, so that it, too, can "feel" something when it sees beautiful pictures and scenery.
However, in the related art, when a user needs a background meeting certain requirements, whether to render the desktop of a device or to decorate a service product such as an article layout, the device often provides only limited materials, which are either allocated to the user at random or preset as a fixed set of background pictures for the user to choose from. Coverage of user requirements is therefore relatively low.
Therefore, in the course of implementing the disclosed concept, the inventors found that the related art has at least the following problem:
when a user needs to acquire a background containing different materials, the related art cannot flexibly meet the user's needs, so the user experience is poor.
Disclosure of Invention
In view of the above, the present disclosure provides a background generation method and system, a computer system, and a computer-readable storage medium.
One aspect of the present disclosure provides a background generation method, including obtaining description information input by a user; determining at least one feature for describing a target background from the description information, wherein the target background is used for rendering a target object; and inputting the at least one feature into a target neural network, and generating the target background through the target neural network.
According to an embodiment of the present disclosure, the method further includes training the target neural network in advance, including: obtaining a background material sample, wherein each background in the background material sample comprises a plurality of basic elements, and each basic element is used for representing one characteristic of the corresponding background; and pre-training a neural network based on the basic elements contained in each background in the background material sample to obtain a target neural network capable of outputting the corresponding background according to one or more basic elements.
According to the embodiment of the disclosure, when each background in the background material sample is a picture background, the resolution of the picture background is greater than or equal to a preset value.
According to an embodiment of the present disclosure, determining at least one feature for describing a target background from the description information includes segmenting the description information based on a semantic segmentation method to obtain one or more information segments; and determining, from the one or more information segments, at least one feature describing the target background.
According to an embodiment of the present disclosure, the target background includes a plurality of target backgrounds, and the method further includes displaying the plurality of target backgrounds after they are generated by the target neural network; acquiring a selection operation of the user; and in response to the selection operation, taking the target background targeted by the selection operation as a final background.
Another aspect of the present disclosure provides a background generation system, including a first obtaining module, configured to obtain description information input by a user; a determining module, configured to determine, from the description information, at least one feature for describing a target background, where the target background is used for rendering a target object; and a generating module, configured to input the at least one feature into a target neural network, and generate the target background through the target neural network.
According to an embodiment of the present disclosure, the system further includes a training module, configured to train the target neural network in advance, and includes an obtaining unit, configured to obtain background material samples, where each background in the background material samples includes a plurality of basic elements, and each basic element is used to characterize a feature of a corresponding background; and the training unit is used for pre-training the neural network based on the basic elements contained in each background in the background material sample to obtain a target neural network capable of outputting the corresponding background according to one or more basic elements.
According to the embodiment of the disclosure, when each background in the background material sample is a picture background, the resolution of the picture background is greater than or equal to a preset value.
According to an embodiment of the present disclosure, the determining module includes a segmenting unit, configured to segment the description information based on a semantic segmentation system to obtain one or more information segments; and a determining unit, configured to determine, from the one or more information segments, at least one feature describing the target background.
According to an embodiment of the present disclosure, the target background includes a plurality of target backgrounds, and the system further includes a display module, configured to display the plurality of target backgrounds after they are generated by the target neural network; a second acquisition module, configured to acquire a selection operation of the user; and a response module, configured to take, in response to the selection operation, the target background targeted by the selection operation as a final background.
Another aspect of the disclosure provides a computer system comprising one or more processors; a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the background generation method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to implement a background generation method as described above.
Another aspect of the present disclosure provides a computer program comprising computer-executable instructions which, when executed, implement the background generation method described above.
According to the embodiments of the present disclosure, description information is received from the user, one or more features for describing the target background are extracted from it, and the extracted features are input into a target neural network, which generates the corresponding target background from those features. Because the background image is not obtained by searching and matching against existing images but is flexibly generated according to the user's requirements, this at least partially overcomes the technical problem in the related art that a user's needs cannot be flexibly met when the user requires a background containing different materials, thereby improving the user experience and increasing the usage rate.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an exemplary system architecture to which the background generation method and system may be applied, according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flow diagram of a background generation method according to an embodiment of the present disclosure;
FIG. 3 schematically shows a flow diagram for generating a background using a neural network according to an embodiment of the present disclosure;
FIG. 4 schematically shows a schematic diagram of a background generated using a neural network, according to an embodiment of the present disclosure;
FIG. 5 schematically shows a schematic diagram of a background generated using a neural network, according to another embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow diagram for pre-training the target neural network, in accordance with an embodiment of the present disclosure;
FIG. 7 schematically shows a flowchart for determining, from description information, at least one feature for describing a target background, according to an embodiment of the present disclosure;
FIG. 8 schematically shows a flow chart of a background generation method according to another embodiment of the present disclosure;
FIG. 9 schematically shows a block diagram of a background generation system according to an embodiment of the present disclosure;
FIG. 10 schematically shows a block diagram of a background generation system according to another embodiment of the present disclosure;
FIG. 11 schematically shows a block diagram of a determination module according to an embodiment of the disclosure;
FIG. 12 schematically shows a block diagram of a background generation system according to another embodiment of the present disclosure; and
FIG. 13 schematically illustrates a block diagram of a computer system suitable for implementing the above-described method, in accordance with an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" should be understood to include the possibility of "A", of "B", or of "A and B".
The embodiments of the present disclosure provide a background generation method and system, wherein the method comprises the steps of obtaining description information input by a user; determining, from the description information, at least one feature for describing a target background, wherein the target background is used for rendering a target object; and inputting the at least one feature into a target neural network, and generating the target background through the target neural network.
Fig. 1 schematically illustrates an exemplary system architecture to which the context generation method and system may be applied, according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired and/or wireless communication links, and so forth.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as a shopping-like application, a web browser application, a search-like application, an instant messaging tool, a mailbox client, and/or social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the background generation method provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the context generation system provided by the embodiments of the present disclosure may be generally disposed in the server 105. The background generation method provided by the embodiments of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the context generation system provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Alternatively, the background generation method provided by the embodiment of the present disclosure may also be executed by the terminal device 101, 102, or 103, or may also be executed by another terminal device different from the terminal device 101, 102, or 103. Accordingly, the background generation system provided by the embodiment of the present disclosure may also be provided in the terminal device 101, 102, or 103, or in another terminal device different from the terminal device 101, 102, or 103.
For example, the user may enter the description information at any of the terminal devices 101, 102, or 103 (e.g., the terminal device 101, but not limited thereto), or the description information may be stored on an external storage device and imported into the terminal device 101. The terminal device 101 may then perform the background generation method provided by the embodiments of the present disclosure locally, or transmit the description information to another terminal device, server, or server cluster, which performs the background generation method upon receiving the description information.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically shows a flow chart of a background generation method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S210 to S230.
In operation S210, description information input by a user is acquired.
According to embodiments of the present disclosure, a user may enter descriptive information via input devices including, but not limited to, a microphone, a keyboard, and other input devices. The descriptive information may express the user's requirements for the target background that the user wants to obtain. For example, if the description information is "a night scene with a starry sky, white clouds, and a moon", it can be derived from the description information that the target background the user wants to acquire is a night scene, and that the night scene is required to contain a starry sky, white clouds, and a moon.
In operation S220, at least one feature describing a target background for rendering the target object is determined from the description information.
According to an embodiment of the present disclosure, the at least one feature used to describe the target background may be a theme style, elements, colors, spatial layout, or the like. For example, if the description information is a night scene with a starry sky, white clouds, and a moon, the features for describing the target background may be the starry sky, the white clouds, the moon, and the night scene. The target background desired by the user may be used to render a target object; for example, it may render a desktop image of an electronic device, or decorate a document, e.g., as an illustration of the document.
In operation S230, at least one feature is input into a target neural network, and a target background is generated through the target neural network.
According to embodiments of the present disclosure, the target neural network may generate a corresponding target context according to one or more features.
Fig. 3 schematically illustrates a flow diagram for generating a background using a neural network, in accordance with an embodiment of the present disclosure.
According to an embodiment of the present disclosure, taking a target background in the form of an image as an example, as shown in fig. 3, description information input by a user is obtained, and at least one feature for describing the target background is determined from it. The at least one feature is input to a feature encoder, which generates a feature vector and passes it to a generator; the generator then produces a background image meeting the requirements according to the feature vector. Together, the feature encoder and the generator may form the target neural network.
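By way of a non-limiting illustration, the feature encoder and generator of fig. 3 might be sketched as follows. The PyTorch framework, the layer sizes, the vocabulary size, and the 32 x 32 output resolution are assumptions made for brevity; the present disclosure does not prescribe a specific architecture, and a real deployment would upsample further (e.g., to 256 x 256, matching the sample-resolution example given later).

```python
# A minimal sketch of the encoder/generator pipeline of fig. 3.
# All layer sizes and names are illustrative assumptions, not the patented design.
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Maps a bag of feature IDs (e.g. 'starry sky', 'moon') to one feature vector."""
    def __init__(self, vocab_size: int = 1000, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, feature_ids: torch.Tensor) -> torch.Tensor:
        # Average the embeddings of all features into a single conditioning vector.
        return self.embed(feature_ids).mean(dim=1)

class Generator(nn.Module):
    """Upsamples the feature vector into an RGB background image."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(dim, 256, 4, 1, 0), nn.ReLU(),  # 1x1 -> 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),  # 4x4 -> 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),     # 16x16 -> 32x32
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z[:, :, None, None])  # reshape the vector to a 1x1 feature map

# Usage: two features -> one 32x32 background image.
encoder, generator = FeatureEncoder(), Generator()
features = torch.tensor([[3, 17]])    # hypothetical IDs for "starry sky" and "moon"
image = generator(encoder(features))  # shape: (1, 3, 32, 32)
```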
Fig. 4 schematically shows a schematic diagram of a background generated with a neural network according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, if the description information input by the user is "a puppy in a night scene with a starry sky, white clouds, and a moon", the background picture generated using the neural network may be the scene shown in fig. 4.
Fig. 5 schematically shows a schematic diagram of a background generated with a neural network according to another embodiment of the present disclosure.
According to an embodiment of the present disclosure, if the description information input by the user is "a person exercising in a park in the early morning", the background picture generated using the neural network may be the scene shown in fig. 5.
According to the embodiments of the present disclosure, description information is received from the user, one or more features for describing the target background are extracted from it, and the extracted features are input into a target neural network, which generates the corresponding target background from those features. Because the background image is not obtained by searching and matching against existing images but is flexibly generated according to the user's requirements, this at least partially overcomes the technical problem in the related art that a user's needs cannot be flexibly met when the user requires a background containing different materials, thereby improving the user experience and increasing the usage rate.
The method shown in fig. 2 is further described with reference to fig. 6-8 in conjunction with specific embodiments.
FIG. 6 schematically illustrates a flow diagram for pre-training the target neural network, according to an embodiment of the present disclosure.
As shown in fig. 6, pre-training the target neural network includes operations S241 to S242.
In operation S241, a background material sample is obtained, wherein each background in the background material sample includes a plurality of basic elements, and each basic element is used for characterizing one feature of the corresponding background.
In operation S242, a neural network is pre-trained based on the basic elements included in each background in the background material sample, so as to obtain a target neural network that can output a corresponding background according to one or more basic elements.
According to an embodiment of the present disclosure, in the neural network training process, an original picture in the background material sample can be divided into different parts (a sample of such parts may be denoted T), such as theme style, elements, colors, and spatial layout; these parts can serve as the basic elements of each background.
When the neural network is pre-trained on the basic elements contained in each background of the background material sample, the different parts divided from the original picture can each be projected into a "latent space" (the generated sample of a part may be denoted T'). Training on a large number of pictures then verifies how accurately T' reconstructs T, that is, it continuously checks whether the "latent space" is correct. Through this continuous iteration, the T-to-T' mapping is prevented from being random and is held to a consistent rule, so that the neural network containing the "latent space" gradually improves.
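One non-limiting way to realize the T-to-T' iteration described above is a reconstruction objective over the latent space. The following sketch assumes an autoencoder-style setup in PyTorch; the loss, the optimizer, and the layer sizes are illustrative assumptions, as the disclosure does not fix them.

```python
# A sketch of the T -> T' latent-space training loop described above, assuming
# an autoencoder-style reconstruction objective (the disclosure does not fix a
# specific loss or architecture).
import torch
import torch.nn as nn

def train_latent_space(parts: torch.Tensor, epochs: int = 10) -> nn.Module:
    """parts: a float batch of basic-element samples T, shape (N, D)."""
    dim = parts.shape[1]
    encoder = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 16))
    decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, dim))
    model = nn.Sequential(encoder, decoder)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        t_prime = model(parts)                         # T' via the latent space
        loss = nn.functional.mse_loss(t_prime, parts)  # how accurately T' matches T
        opt.zero_grad()
        loss.backward()
        opt.step()  # iterate so the T -> T' mapping keeps a consistent rule
    return model
```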
Through the embodiments of the present disclosure, training the neural network enables the machine to understand the user's requirements, and a picture background meeting those requirements is generated accordingly, satisfying the user's diverse needs.
According to embodiments of the present disclosure, in the case where each background in the background material sample is a picture background, the resolution of the picture background is greater than or equal to a preset value. For example, the resolution of the picture background is 256 × 256 or more.
Through the embodiments of the present disclosure, the higher the resolution of the picture background, the more complete and rich the detail information contained in each part of the picture, which can improve the accuracy of the background output by the neural network.
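A minimal sketch of enforcing the preset resolution when assembling the background material sample follows, assuming the Pillow imaging library and the 256 x 256 threshold from the example above (the disclosure names no particular library):

```python
# Keep only picture backgrounds whose resolution meets the preset value.
from PIL import Image

def meets_resolution(path: str, min_size: int = 256) -> bool:
    with Image.open(path) as img:
        return img.width >= min_size and img.height >= min_size

# sample_paths = [p for p in candidate_paths if meets_resolution(p)]
```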
Fig. 7 schematically illustrates a flow chart for determining at least one feature for describing a target context from description information according to an embodiment of the present disclosure.
As shown in fig. 7, determining at least one feature for describing the target context from the description information includes operations S221 and S222.
In operation S221, the description information is segmented based on a semantic segmentation method to obtain one or more information segments.
In operation S222, at least one feature describing a target context is determined from the one or more pieces of information.
According to an embodiment of the present disclosure, after the description information input by the user is received, the trained neural network can perform semantic segmentation on it to obtain one or more information segments, and the features of the target background can then be extracted from these information segments.
Through the embodiments of the present disclosure, the semantic segmentation method can produce a plurality of information segments within a few milliseconds, and the feature information is determined from those segments; this improves recognition efficiency, so that picture material highly consistent with the textual description can be generated more efficiently.
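By way of illustration, operations S221 and S222 might be sketched as follows. The whitespace segmentation and the small feature vocabulary are stand-in assumptions for the semantic segmentation method, which the disclosure does not specify in detail.

```python
# A sketch of operations S221-S222: split the description into information
# segments and keep those matching known background features. The vocabulary
# and the greedy phrase matching are illustrative assumptions.
FEATURE_VOCAB = {"starry sky", "white clouds", "moon", "night scene"}

def extract_features(description: str) -> list[str]:
    words = description.lower().split()
    features, i = [], 0
    while i < len(words):
        for n in (2, 1):  # try two-word phrases before single words
            phrase = " ".join(words[i:i + n])
            if phrase in FEATURE_VOCAB:
                features.append(phrase)
                i += n
                break
        else:
            i += 1
    return features

print(extract_features("a night scene with a starry sky white clouds and a moon"))
# -> ['night scene', 'starry sky', 'white clouds', 'moon']
```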
Fig. 8 schematically shows a flow chart of a background generation method according to another embodiment of the present disclosure.
In the case where the target background includes a plurality of backgrounds, as shown in fig. 8, the background generation method further includes operations S250 to S270.
In operation S250, after a plurality of target backgrounds are generated by a target neural network, the plurality of target backgrounds are presented.
In operation S260, a selection operation of the user is acquired.
In operation S270, in response to the selection operation, the target background to which the selection operation is directed is taken as a final background.
According to an embodiment of the present disclosure, a plurality of different target backgrounds can be generated from the description information input by the user. For example, the description information input by the user is a dog in a night scene with a starry sky, white clouds, and a moon, but the user does not specify what kind of dog, what color, how many, and so on. The neural network can therefore determine different types of dogs based on existing material and generate different target backgrounds, for example, one background depicting a Teddy dog and another depicting a Tibetan Mastiff. After the multiple backgrounds are generated, the user can select a favorite from among them, which improves the user experience.
Through the embodiment of the disclosure, a plurality of results generated by calculation are displayed to the user, and the user can autonomously select the most satisfactory background pattern from the results, so that more choices are flexibly provided for the user.
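A non-limiting sketch of operations S250 through S270 follows, reusing the FeatureEncoder and Generator sketched earlier; perturbing the conditioning vector with noise to diversify the candidates is an assumption, not a mechanism stated by the disclosure.

```python
# Sample several candidate backgrounds, then let the user pick the final one.
import torch

def generate_candidates(encoder, generator, feature_ids: torch.Tensor, n: int = 4):
    z = encoder(feature_ids)
    # Perturb the conditioning vector so candidates differ (e.g. breed, color).
    return [generator(z + 0.1 * torch.randn_like(z)) for _ in range(n)]

# candidates = generate_candidates(encoder, generator, features)  # S250: present these
# final_background = candidates[user_choice]  # S260-S270: apply the user's selection
```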
Fig. 9 schematically shows a block diagram of a context generation system according to an embodiment of the present disclosure.
As shown in fig. 9, the context generation system 400 includes a first acquisition module 410, a determination module 420, and a generation module 430.
The first obtaining module 410 is used for obtaining the description information input by the user.
The determining module 420 is configured to determine, from the description information, at least one feature describing a target background, wherein the target background is used for rendering the target object.
The generating module 430 is configured to input the at least one feature into the target neural network and generate the target background through the target neural network.
According to the embodiments of the present disclosure, description information is received from the user, one or more features for describing the target background are extracted from it, and the extracted features are input into a target neural network, which generates the corresponding target background from those features. Because the background image is not obtained by searching and matching against existing images but is flexibly generated according to the user's requirements, this at least partially overcomes the technical problem in the related art that a user's needs cannot be flexibly met when the user requires a background meeting certain requirements, thereby improving the user experience and increasing the usage rate.
Fig. 10 schematically shows a block diagram of a context generation system according to another embodiment of the present disclosure.
As shown in fig. 10, the background generation system 400 further includes a training module 440 for training the target neural network in advance, and the training module 440 includes an obtaining unit 441 and a training unit 442.
The obtaining unit 441 is configured to obtain background material samples, where each background in the background material samples includes a plurality of basic elements, and each basic element is used to characterize one feature of the corresponding background.
The training unit 442 is configured to pre-train the neural network based on the basic elements included in each background of the background material sample, so as to obtain a target neural network that can output a corresponding background according to one or more basic elements.
Through the embodiments of the present disclosure, training the neural network enables the machine to understand the user's requirements, and a picture background meeting those requirements is generated accordingly, satisfying the user's diverse needs.
According to embodiments of the present disclosure, in the case where each background in the background material sample is a picture background, the resolution of the picture background is greater than or equal to a preset value.
Through the embodiments of the present disclosure, the higher the resolution of the picture background, the more complete and rich the detail information contained in each part of the picture, which can improve the accuracy of the background output by the neural network.
Fig. 11 schematically illustrates a block diagram of a determination module according to an embodiment of the present disclosure.
As shown in fig. 11, the determination module 420 includes a segmentation unit 421 and a determination unit 422.
The segmentation unit 421 is configured to segment the description information based on a semantic segmentation system to obtain one or more information segments.
The determining unit 422 is configured to determine, from the one or more information segments, at least one feature describing the target background.
Through the embodiments of the present disclosure, the semantic segmentation method can produce a plurality of information segments within a few milliseconds, and the feature information is determined from those segments; this improves recognition efficiency, so that picture material highly consistent with the textual description can be generated more efficiently.
Fig. 12 schematically shows a block diagram of a context generation system according to another embodiment of the present disclosure.
As shown in fig. 12, the context generation system 400 further includes a presentation module 450, a second acquisition module 460, and a response module 470.
The presentation module 450 is configured to present the plurality of target contexts after the plurality of target contexts are generated by the target neural network.
The second obtaining module 460 is used for obtaining the selection operation of the user.
The response module 470 is used for responding to the selection operation, and taking the target background targeted by the selection operation as the final background.
Through the embodiment of the disclosure, a plurality of results generated by calculation are displayed to the user, and the user can autonomously select the most satisfactory background pattern from the results, so that more choices are flexibly provided for the user.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any plurality of the first obtaining module 410, the determining module 420, the generating module 430, the training module 440, the presenting module 450, the second obtaining module 460, the responding module 470, the dividing unit 421, the determining unit 422, the obtaining unit 441, and the training unit 442 may be combined to be implemented in one module/unit/sub-unit, or any one of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the first obtaining module 410, the determining module 420, the generating module 430, the training module 440, the presenting module 450, the second obtaining module 460, the responding module 470, the dividing unit 421, the determining unit 422, the obtaining unit 441, and the training unit 442 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three manners of software, hardware, and firmware, or by a suitable combination of any of them. Alternatively, at least one of the first obtaining module 410, the determining module 420, the generating module 430, the training module 440, the presenting module 450, the second obtaining module 460, the responding module 470, the segmenting unit 421, the determining unit 422, the obtaining unit 441 and the training unit 442 may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
FIG. 13 schematically illustrates a block diagram of a computer system suitable for implementing the above-described method, in accordance with an embodiment of the present disclosure. The computer system illustrated in FIG. 13 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 13, a computer system 500 according to an embodiment of the present disclosure includes a processor 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage section 508 into a random access memory (RAM) 503. The processor 501 may comprise, for example, a general-purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special-purpose microprocessor (e.g., an application-specific integrated circuit (ASIC)), among others. The processor 501 may also include onboard memory for caching purposes. The processor 501 may include a single processing unit or multiple processing units for performing the different actions of a method flow according to embodiments of the disclosure.
In the RAM 503, various programs and data necessary for the operation of the system 500 are stored. The processor 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. The processor 501 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 502 and/or the RAM 503. Note that the programs may also be stored in one or more memories other than the ROM 502 and the RAM 503. The processor 501 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the system 500 may also include an input/output (I/O) interface 505, which is likewise connected to the bus 504. The system 500 may also include one or more of the following components connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as necessary. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as necessary, so that a computer program read therefrom is installed into the storage section 508 as needed.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program, when executed by the processor 501, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer readable medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, a computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer-readable signal medium, by contrast, may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic or optical signals, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, optical fiber cable, radio frequency signals, etc., or any suitable combination of the foregoing.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include ROM 502 and/or RAM 503 and/or one or more memories other than ROM 502 and RAM 503 described above.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit and teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (12)

1. A background generation method, comprising:
acquiring description information input by a user;
determining at least one feature for describing a target background from the description information, wherein the target background is used for rendering a target object; and
inputting the at least one feature into a target neural network, and generating the target background through the target neural network.
2. The method of claim 1, wherein the method further comprises:
pre-training the target neural network, comprising:
obtaining background material samples, wherein each background in the background material samples comprises a plurality of basic elements, and each basic element is used for representing one characteristic of the corresponding background; and
pre-training a neural network based on the basic elements contained in each background in the background material sample, to obtain a target neural network capable of outputting a corresponding background according to one or more basic elements.
3. The method of claim 2, wherein, when each background in the background material sample is a picture background, the resolution of the picture background is greater than or equal to a preset value.
4. The method of claim 1, wherein determining at least one feature for describing a target background from the description information comprises:
segmenting the description information based on a semantic segmentation method to obtain one or more information segments; and
determining, from the one or more information segments, at least one feature describing the target background.
5. The method of claim 1, wherein the target background comprises a plurality of target backgrounds, the method further comprising:
after generating the plurality of target backgrounds by the target neural network, presenting the plurality of target backgrounds;
acquiring selection operation of a user; and
in response to the selection operation, taking the target background targeted by the selection operation as a final background.
6. A background generation system, comprising:
the first acquisition module is used for acquiring description information input by a user;
a determining module, configured to determine, from the description information, at least one feature describing a target background, where the target background is used for rendering a target object; and
a generating module, configured to input the at least one feature into a target neural network and generate the target background through the target neural network.
7. The system of claim 6, wherein the system further comprises:
a training module, configured to pre-train the target neural network, including:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring background material samples, each background in the background material samples comprises a plurality of basic elements, and each basic element is used for representing one characteristic of the corresponding background; and
and the training unit is used for pre-training the neural network based on the basic elements contained in each background in the background material sample to obtain a target neural network capable of outputting the corresponding background according to one or more basic elements.
8. The system of claim 7, wherein, in the case that each background in the background material sample is a picture background, the resolution of the picture background is greater than or equal to a preset value.
9. The system of claim 6, wherein the determination module comprises:
a segmentation unit, configured to segment the description information based on a semantic segmentation system to obtain one or more information segments; and
a determining unit, configured to determine, from the one or more information segments, at least one feature describing the target background.
10. The system of claim 6, wherein the target background comprises a plurality of target backgrounds, the system further comprising:
a presentation module, configured to present the plurality of target backgrounds after they are generated by the target neural network;
a second acquisition module, configured to acquire a selection operation of the user; and
a response module, configured to take, in response to the selection operation, the target background targeted by the selection operation as a final background.
11. A computer system, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the background generation method of any one of claims 1 to 5.
12. A computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to implement the background generation method of any one of claims 1 to 5.
CN201810945726.0A 2018-08-17 2018-08-17 Background generation method and system, computer system, and computer-readable storage medium Pending CN110866138A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810945726.0A CN110866138A (en) 2018-08-17 2018-08-17 Background generation method and system, computer system, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810945726.0A CN110866138A (en) 2018-08-17 2018-08-17 Background generation method and system, computer system, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN110866138A true CN110866138A (en) 2020-03-06

Family

ID=69650849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810945726.0A Pending CN110866138A (en) 2018-08-17 2018-08-17 Background generation method and system, computer system, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN110866138A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927321A (en) * 2021-03-17 2021-06-08 北京太火红鸟科技有限公司 Intelligent image design method, device, equipment and storage medium based on neural network

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104866308A (en) * 2015-05-18 2015-08-26 百度在线网络技术(北京)有限公司 Scenario image generation method and apparatus
US9245205B1 (en) * 2013-10-16 2016-01-26 Xerox Corporation Supervised mid-level features for word image representation
CN107506469A (en) * 2017-08-31 2017-12-22 北京小米移动软件有限公司 Image acquisition method, device and computer-readable recording medium
CN107707823A (en) * 2017-10-18 2018-02-16 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN107729099A (en) * 2017-09-25 2018-02-23 联想(北京)有限公司 Background method of adjustment and its system
CN107909114A (en) * 2017-11-30 2018-04-13 深圳地平线机器人科技有限公司 The method and apparatus of the model of training Supervised machine learning
CN108108215A (en) * 2017-12-19 2018-06-01 北京百度网讯科技有限公司 Skin generation method, device, terminal and computer readable storage medium
CN108230332A (en) * 2017-10-30 2018-06-29 北京市商汤科技开发有限公司 The treating method and apparatus of character image, electronic equipment, computer storage media

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9245205B1 (en) * 2013-10-16 2016-01-26 Xerox Corporation Supervised mid-level features for word image representation
CN104866308A (en) * 2015-05-18 2015-08-26 百度在线网络技术(北京)有限公司 Scenario image generation method and apparatus
CN107506469A (en) * 2017-08-31 2017-12-22 北京小米移动软件有限公司 Image acquisition method, device and computer-readable recording medium
CN107729099A (en) * 2017-09-25 2018-02-23 联想(北京)有限公司 Background method of adjustment and its system
CN107707823A (en) * 2017-10-18 2018-02-16 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN108230332A (en) * 2017-10-30 2018-06-29 北京市商汤科技开发有限公司 The treating method and apparatus of character image, electronic equipment, computer storage media
CN107909114A (en) * 2017-11-30 2018-04-13 深圳地平线机器人科技有限公司 The method and apparatus of the model of training Supervised machine learning
CN108108215A (en) * 2017-12-19 2018-06-01 北京百度网讯科技有限公司 Skin generation method, device, terminal and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SCOTT REED et al.: "Generative Adversarial Text to Image Synthesis", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition *
CAI XIAOLONG: "Research on Image Generation Technology Based on the DCGAN Algorithm (基于DCGAN算法的图像生成技术研究)", China Masters' Theses Full-Text Database, Information Science and Technology, no. 05, pp. 28-30 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927321A (en) * 2021-03-17 2021-06-08 北京太火红鸟科技有限公司 Intelligent image design method, device, equipment and storage medium based on neural network
CN112927321B (en) * 2021-03-17 2022-04-22 北京太火红鸟科技有限公司 Intelligent image design method, device, equipment and storage medium based on neural network

Similar Documents

Publication Publication Date Title
US20200410732A1 (en) Method and apparatus for generating information
CN111476871B (en) Method and device for generating video
CN110162670B (en) Method and device for generating expression package
CN107609506B (en) Method and apparatus for generating image
CN111800671B (en) Method and apparatus for aligning paragraphs and video
CN109981787B (en) Method and device for displaying information
CN109992187B (en) Control method, device, equipment and storage medium
CN111311480B (en) Image fusion method and device
CN110059623B (en) Method and apparatus for generating information
CN112839223A (en) Image compression method, image compression device, storage medium and electronic equipment
CN111726685A (en) Video processing method, video processing device, electronic equipment and medium
CN112308950A (en) Video generation method and device
CN108521366A (en) Expression method for pushing and electronic equipment
CN111461967B (en) Picture processing method, device, equipment and computer readable medium
CN110636362B (en) Image processing method, device and system and electronic equipment
CN110866138A (en) Background generation method and system, computer system, and computer-readable storage medium
CN106021279B (en) Information display method and device
CN116522012A (en) User interest mining method, system, electronic equipment and medium
CN114697568B (en) Special effect video determining method and device, electronic equipment and storage medium
CN110708238A (en) Method and apparatus for processing information
CN115988255A (en) Special effect generation method and device, electronic equipment and storage medium
CN114786069A (en) Video generation method, device, medium and electronic equipment
CN110888583B (en) Page display method, system and device and electronic equipment
CN114422698A (en) Video generation method, device, equipment and storage medium
CN110188833B (en) Method and apparatus for training a model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 221, 2 / F, block C, 18 Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Holding Co.,Ltd.

Address before: Room 221, 2 / F, block C, 18 Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: JINGDONG DIGITAL TECHNOLOGY HOLDINGS Co.,Ltd.
