CN109933320B - Image generation method and server - Google Patents

Image generation method and server

Publication number
CN109933320B
Authority
CN
China
Prior art keywords
information
target
displayed
target image
neural network
Prior art date
Legal status
Active
Application number
CN201811629005.5A
Other languages
Chinese (zh)
Other versions
CN109933320A (en)
Inventor
郝瑞祥
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201811629005.5A
Publication of CN109933320A
Application granted
Publication of CN109933320B

Landscapes

  • Controls And Circuits For Display Device (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an image generation method comprising the following steps: monitoring whether target information in a target area of a page changes, to obtain a monitoring result; if the monitoring result indicates that the target information has changed, acquiring information to be displayed; determining a target image based on the acquired information to be displayed, where the target image represents the information to be displayed; and outputting the target image so that it is displayed in the target area. The invention also discloses a server.

Description

Image generation method and server
Technical Field
The present invention relates to image generation technologies, and in particular, to an image generation method and a server.
Background
In the prior art, icons corresponding to the text in a web page can only be produced by designers. When the text in the web page changes, a designer usually has to redesign the corresponding icon to match the changed text; this takes a long time and makes it difficult to update the text and images in the web page synchronously.
Disclosure of Invention
The embodiments of the invention provide an image generation method and a server which, when target text information is obtained, can determine the constituent elements of a target image based on the target information, and combine the determined constituent elements to form a target image to be output, where the target image represents the text content of the target information.
The technical scheme of the embodiment of the invention is realized as follows:
the invention provides an image generation method, which is characterized by comprising the following steps:
monitoring whether the target information in the target area of the page changes or not to obtain a monitoring result;
if the monitoring result shows that the target information changes, information to be displayed is obtained;
determining a target image based on the acquired information to be displayed, wherein the target image is used for representing the information to be displayed;
and outputting the target image so as to display the target image in the target area.
In the above scheme, the method further comprises:
when there are at least two pieces of information to be displayed,
determining a display style according to the association relationship of the at least two pieces of information to be displayed;
and determining, based on the determined display style, a target image corresponding to each piece of information to be displayed.
In the foregoing solution, the determining a target image based on the acquired information to be displayed includes:
and decomposing the complete text of the information to be displayed through an algorithm matched with a deconvolution neural network model, and determining a display style corresponding to the information to be displayed.
In the foregoing solution, the determining a target image based on the acquired information to be displayed includes:
processing the complete text of the information to be displayed through a deconvolution neural network model, and confirming corresponding target information;
and carrying out graphical processing on the determined target information to form a target image.
In the foregoing solution, the processing the complete text of the information to be displayed through the deconvolution neural network model to determine corresponding target information includes:
decomposing the target information through a first decoder model for sentence-level decoding in the deconvolution neural network model;
decoding the processing result of the first decoder model through a second decoder model for word level decoding in the deconvolution neural network model, and determining the keywords in the target information.
In the above scheme, the method further comprises:
and acquiring the constituent elements of the target image based on the determined keywords in the target information.
In the foregoing solution, the performing the graphic processing on the determined target information to form the target image includes:
cross-processing the constituent elements of the image corresponding to the target information through a deconvolution layer and an unpooling layer of a deconvolution neural network model, to obtain a down-sampling result of the constituent elements of the target image;
and processing the down-sampling result through the unpooling layer of the deconvolution neural network model to form a target image to be output.
In the above scheme, the method further comprises:
and determining the pixels of the target image through a deconvolution layer of a deconvolution neural network model based on the feature information of the target area, so that the target image to be output is adapted to the target area.
In the above scheme, the method further comprises:
training, based on image samples and the classification labels and information of the image samples, a deconvolution neural network model for generating a target image based on information.
In the above scheme, the training, based on the image samples and the classification labels and information of the image samples, of the deconvolution neural network model for generating the target image based on the information includes:
and training a first decoder model for sentence-level decoding in the deconvolution neural network model based on the sentence samples in the information and the corresponding decoding results.
In the above scheme, the method further comprises:
and training a second decoder model for performing word-level decoding in the deconvolution neural network model based on the word samples in the information and the corresponding decoding results.
In the above scheme, the method further comprises:
updating the adaptation algorithm and/or model parameters of the deconvolution neural network model according to the result of training the deconvolution neural network model for generating a target image based on information;
and iteratively training the deconvolution neural network model based on its updated adaptation algorithm and/or model parameters.
The present invention also provides a server, comprising:
the information acquisition module is used for monitoring whether the target information in the target area of the page changes or not and acquiring a monitoring result;
the information acquisition module is used for acquiring information to be displayed;
the information processing module is used for determining a target image based on the acquired information to be displayed, and the target image is used for representing the information to be displayed;
and the information output module is used for outputting the target image so as to display the target image in the target area.
In the above scheme,
the information processing module is used for determining, when there are at least two pieces of information to be displayed, a display style according to the association relationship of the at least two pieces of information to be displayed;
and the information processing module is used for determining a target image corresponding to the corresponding information to be displayed based on the determined display style.
In the above scheme,
and the information processing module is used for decomposing the complete text of the information to be displayed through an algorithm matched with the deconvolution neural network model, and determining the display style corresponding to the information to be displayed.
In the above scheme,
the information processing module is used for processing the complete text of the information to be displayed through a deconvolution neural network model and confirming corresponding target information;
and the information processing module is used for carrying out graphical processing on the determined target information to form a target image.
In the above scheme,
the information processing module is used for decomposing the target information through a first decoder model for sentence-level decoding in the deconvolution neural network model;
decoding the processing result of the first decoder model through a second decoder model for word level decoding in the deconvolution neural network model, and determining the keywords in the target information.
In the above scheme,
and the information processing module is used for acquiring the constituent elements of the target image based on the determined keywords in the target information.
In the above scheme,
the information processing module is used for cross-processing the constituent elements of the image corresponding to the target information through a deconvolution layer and an unpooling layer of a deconvolution neural network model, to obtain a down-sampling result of the constituent elements of the target image;
and the information processing module is used for processing the down-sampling result through the unpooling layer of the deconvolution neural network model to form a target image to be output.
In the above scheme,
and the information processing module is used for determining the pixels of the target image through a deconvolution layer of a deconvolution neural network model based on the feature information of the target area, so that the target image to be output is adapted to the target area.
In the above solution, the server further includes:
and the training module is used for training, based on the image samples and the classification labels and information of the image samples, a deconvolution neural network model for generating a target image based on information.
In the above scheme,
and the training module is used for training a first decoder model for sentence-level decoding in the deconvolution neural network model based on the sentence samples in the information and the corresponding decoding results.
In the above scheme,
and the training module is used for training a second decoder model for performing word-level decoding in the deconvolution neural network model based on the word samples in the information and the corresponding decoding results.
In the above scheme,
the training module is used for updating the adaptation algorithm and/or model parameters of the deconvolution neural network model according to the result of training the deconvolution neural network model for generating a target image based on information;
and the training module is used for iteratively training the deconvolution neural network model based on its updated adaptation algorithm and/or model parameters.
The present invention also provides a server, comprising:
a memory for storing executable instructions;
and a processor for executing the executable instructions stored in the memory to perform the image generation method provided by the invention.
In the embodiments of the invention, target information is acquired; the constituent elements of a target image are determined based on the target information; and the determined constituent elements of the target image are combined. This avoids the situation in which, whenever the text information in a web page changes, a designer must manually update the corresponding image, and it enables image generation that flexibly adapts to changes of information in the web page.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an alternative image generation method according to an embodiment of the present invention;
FIG. 2 is an alternative structural diagram of a server according to an embodiment of the present invention;
FIG. 3 is an alternative structural diagram of a server according to an embodiment of the present invention;
FIG. 4A is a schematic diagram of an alternative usage scenario of the image generation method provided by the embodiment of the invention;
FIG. 4B is a schematic diagram of an alternative usage scenario of the image generation method provided by the embodiment of the invention;
fig. 5 is an alternative structural diagram of a server according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Before the embodiments of the present invention are described in further detail, the terms and expressions mentioned in the embodiments are explained; the following explanations apply to them.
1) Display style: used to represent display characteristics in a web page that are both diverse and identifiable.
Fig. 1 is an optional flowchart of an image generation method according to an embodiment of the present invention; the steps shown in fig. 1 are explained below. The method shown in fig. 1 is applied to a server, and the server can process the page of a web page.
Step 101: and monitoring target information in the target area of the page to obtain a monitoring result.
Step 102: and judging whether the target information in the page target area is changed, if so, executing step 103, otherwise, executing step 104.
Step 103: and acquiring information to be displayed.
Step 104: and continuing to monitor the target information in the target area of the page.
Step 105: and determining a target image based on the acquired information to be displayed.
The target image is used for representing the information to be displayed.
In an embodiment of the present invention, when there are at least two pieces of information to be displayed, a display style is determined according to the association relationship of the at least two pieces of information to be displayed, and a target image corresponding to each piece of information to be displayed is determined based on the determined display style. With this scheme, when at least two pieces of information to be displayed are acquired and are determined to be associated, the same display style can be used for them, and the target image corresponding to each piece of information is determined according to that shared style. For example, pieces of information that all belong to a children's content type can be displayed with a style built from child-themed elements, while pieces of information of a current-affairs news type can be displayed with a style using a red font on a golden background.
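The patent gives no implementation for this style selection; the following is a minimal Python sketch under stated assumptions — the category names, style attributes, and the rule that a shared category implies an association relationship are all invented for illustration.

```python
# Hypothetical style table; categories and style attributes are illustrative.
STYLE_BY_CATEGORY = {
    "children": {"elements": "child-themed", "font": "rounded"},
    "current_affairs": {"font_color": "red", "background": "golden"},
}

def pick_display_style(items):
    """Return one shared style if all items are associated (same category),
    otherwise an empty style so each item falls back to its own styling."""
    categories = {item["category"] for item in items}
    if len(categories) == 1:  # association relationship: all of one type
        return STYLE_BY_CATEGORY.get(next(iter(categories)), {})
    return {}
```

For two items both tagged `children`, `pick_display_style` would return the same child-themed style, so both target images share one look.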
In an embodiment of the present invention, the determining a target image based on the acquired information to be displayed includes:
and decomposing the complete text of the information to be displayed through an algorithm matched with a deconvolution neural network model, and determining a display style corresponding to the information to be displayed. Through the technical scheme shown in the embodiment, when the complete text of the information to be displayed is long, the complete text of the information to be displayed is decomposed through an algorithm matched with the deconvolution neural network model, the display style corresponding to the complete text of the information to be displayed can be confirmed, so that the corresponding information to be displayed is accurately reflected through the image of the corresponding display style, and a user can visually know the content of the information to be displayed conveniently.
In an embodiment of the present invention, the determining a target image based on the acquired information to be displayed includes:
processing the complete text of the information to be displayed through a deconvolution neural network model and confirming the corresponding target information; and performing graphical processing on the determined target information to form a target image. With this scheme, the deconvolution neural network model can process the complete text of the information to be displayed and extract the target information from it; the target image is then determined by graphical processing of the target information. The resulting target image fits the complete text more closely, and the user can intuitively grasp the content of the complete text through the determined target image.
In an embodiment of the present invention, the processing the complete text of the information to be displayed through the deconvolution neural network model to confirm the corresponding target information includes:
decomposing the target information through a first decoder model for sentence-level decoding in the deconvolution neural network model; and decoding the processing result of the first decoder model through a second decoder model for word-level decoding in the deconvolution neural network model, to determine the keywords in the target information. Because the complete text of the information to be displayed contains a large amount of text, it generally consists of different sentences, and each sentence can be decomposed into different words. In this scheme, the target information is therefore first decomposed by the first decoder model for sentence-level decoding; the processing result of the first decoder model is then decoded by the second decoder model for word-level decoding, and the keywords in the complete text are determined after the two successive decoding passes.
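As a rough illustration of the two-stage decomposition (sentences first, then words), here is a plain-Python stand-in. The patent uses decoder models inside a deconvolution neural network; this sketch substitutes simple splitting and frequency counting, and the stopword list is invented.

```python
import re
from collections import Counter

def sentence_decode(text):
    """First stage: decompose the complete text into sentence-level units."""
    return [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]

def word_decode(sentences, stopwords=frozenset({"the", "a", "is", "of", "in"})):
    """Second stage: decompose sentences into words and keep the most
    frequent non-stopwords as keyword candidates."""
    words = [w.lower() for s in sentences for w in re.findall(r"\w+", s)]
    counts = Counter(w for w in words if w not in stopwords)
    return [w for w, _ in counts.most_common(3)]
```

Running both stages on a short text such as "The sale starts today. The sale covers shoes." yields "sale" as the top keyword candidate.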
In one embodiment of the invention, the method further comprises:
and acquiring the constituent elements of the target image based on the determined keywords in the target information. Because the image generation method is applied to the server, the server can store the image component elements, and the corresponding target image can be quickly formed through the combination of different image component elements. Through the technical scheme shown in the embodiment, when continuous processing is performed on the first decoding model and the second decoding model in the deconvolution neural network model, keywords (keywords) in a corresponding complete text can be obtained, and by searching for constituent elements of a target image corresponding to the keywords (keywords), the target image can be quickly generated through the deconvolution neural network model, so that the waiting time for generating and outputting the target image is reduced.
In an embodiment of the present invention, the performing the graphic processing on the determined target information to form the target image includes:
cross-processing the constituent elements of the image corresponding to the target information through a deconvolution layer and an unpooling layer of a deconvolution neural network model, to obtain a down-sampling result of the constituent elements of the target image; and processing the down-sampling result through the unpooling layer of the deconvolution neural network model to form a target image to be output. Specifically, word-level decoding and sentence-level decoding may each be performed on at least two types of text information of the picture through a bi-directional long short-term memory recurrent neural network (Bi-directional LSTM RNN), where the word-level decoding and the sentence-level decoding of the at least two types of text information may use the same decoder model. When the first decoder model is a sentence decoder, the second decoder model is a Long Short-Term Memory (LSTM) network.
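For readers unfamiliar with these layer types, the following numpy sketch shows generic stand-ins for an unpooling layer (scattering pooled values back to recorded positions) and a deconvolution (transposed-convolution) layer (zero-stuffing the input, then correlating with a kernel). This is not the patent's model — only the standard operations the layer names refer to.

```python
import numpy as np

def unpool(pooled, indices, out_shape):
    """Unpooling: place each pooled value back at its recorded flat index."""
    out = np.zeros(out_shape)
    out.reshape(-1)[indices.reshape(-1)] = pooled.reshape(-1)
    return out

def deconv(x, kernel, stride=2):
    """Transposed convolution: insert zeros between input values, then
    slide the kernel over the zero-stuffed, padded input."""
    h, w = x.shape
    kh, kw = kernel.shape
    up = np.zeros(((h - 1) * stride + 1, (w - 1) * stride + 1))
    up[::stride, ::stride] = x                  # zero-stuffed input
    pad = np.pad(up, ((kh - 1,), (kw - 1,)))    # padding for full correlation
    oh, ow = pad.shape[0] - kh + 1, pad.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * kernel)
    return out
```

A 2x2 input with a 3x3 kernel at stride 2 yields a 5x5 output, matching the usual (in - 1) * stride + kernel size rule.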
In one embodiment of the invention, the method further comprises:
and determining pixels of the target image through a deconvolution layer of a deconvolution neural network model based on the target area characteristic information so as to realize that the target image to be output is matched with the target area. Since the image generation method is suitable for a server, and the target area has uncertainty with different types of terminals, the technical scheme shown in this embodiment can determine the pixels of the target image through a deconvolution layer of a deconvolution neural network model, so that the length, width, and height of the formed target image to be output are all suitable for the corresponding target display area.
Step 106: and outputting the target image so as to display the target image in the target area.
In one embodiment of the invention, the method further comprises:
training, based on image samples and the classification labels and information of the image samples, a deconvolution neural network model for generating a target image based on information.
In an embodiment of the present invention, the training, based on the image samples and the classification labels and information of the image samples, of the deconvolution neural network model for generating the target image based on the information includes:
and training a first decoder model for sentence-level decoding in the deconvolution neural network model based on the sentence samples in the information and the corresponding decoding results. Further, the method further comprises:
and training a second decoder model for performing word-level decoding in the deconvolution neural network model based on the word samples in the information and the corresponding decoding results. By the technical scheme shown in the embodiment, the neural network model and the different decoders can be trained pertinently, so that the adaptive parameters of the different decoders can be adjusted in time.
In one embodiment of the invention, the method further comprises:
updating the adaptation algorithm and/or model parameters of the deconvolution neural network model according to the result of training the deconvolution neural network model for generating a target image based on information; and iteratively training the deconvolution neural network model based on its updated adaptation algorithm and/or model parameters. Because occasional errors may occur while the neural network model and the different decoders decode, updating the adaptation algorithm and/or model parameters of the deconvolution neural network model and performing iterative training reduces the probability of triggering such errors, so that the target image generated by the model matches the information to be displayed more closely.
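The iterative retraining loop can be illustrated with a toy model: a one-parameter least-squares fit stands in for the deconvolution network, and halving the learning rate between rounds stands in for "updating the adaptation algorithm and/or model parameters". Both simplifications are assumptions for illustration, not the patent's procedure.

```python
def sgd_round(w, samples, labels, lr, epochs=10):
    """One training round: stochastic gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            w -= lr * 2.0 * (w * x - y) * x  # gradient of (w * x - y) ** 2
    return w

def iterative_train(samples, labels, lr=0.1, rounds=3):
    """Retrain repeatedly, updating the adaptation parameter (here, the
    learning rate) between rounds based on the previous training result."""
    w = 0.0
    for _ in range(rounds):
        w = sgd_round(w, samples, labels, lr)
        lr *= 0.5  # updated parameter for the next training iteration
    return w
```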
Fig. 2 is a schematic diagram of an optional structure of a server 200 according to an embodiment of the present invention, and as shown in fig. 2, an optional structure of the server 200 according to an embodiment of the present invention includes:
the information acquisition module 201 is configured to monitor whether target information in a target area of a page changes, and acquire a monitoring result;
the information acquisition module 201 is configured to acquire information to be displayed;
the information processing module 202 is configured to determine a target image based on the acquired information to be displayed, where the target image is used to represent the information to be displayed;
and the information output module 203 is used for outputting the target image so as to display the target image in the target area.
In an embodiment of the present invention, the information processing module 202 is configured to determine, when there are at least two pieces of information to be displayed, a display style according to the association relationship of the at least two pieces of information to be displayed, and to determine a target image corresponding to each piece of information to be displayed based on the determined display style. With this scheme, when at least two pieces of information to be displayed are acquired and are determined to be associated, the same display style can be used for them, and the target image corresponding to each piece of information is determined according to that shared style. For example, pieces of information that all belong to a children's content type can be displayed with a style built from child-themed elements, while pieces of information of a current-affairs news type can be displayed with a style using a red font on a golden background.
In an embodiment of the present invention, the information processing module 202 is configured to decompose the complete text of the information to be displayed through an algorithm adapted to a deconvolution neural network model, and to determine the display style corresponding to the information to be displayed. With this scheme, when the complete text of the information to be displayed is long, decomposing it through an algorithm adapted to the deconvolution neural network model makes it possible to confirm the display style corresponding to the complete text, so that the information is accurately reflected by an image in the corresponding display style and the user can grasp its content at a glance.
In an embodiment of the present invention, the information processing module 202 is configured to process the complete text of the information to be displayed through a deconvolution neural network model and confirm the corresponding target information, and to perform graphical processing on the determined target information to form a target image. With this scheme, the deconvolution neural network model can process the complete text of the information to be displayed and extract the target information from it; the target image is then determined by graphical processing of the target information. The resulting target image fits the complete text more closely, and the user can intuitively grasp the content of the complete text through the determined target image.
In an embodiment of the present invention, the information processing module 202 is configured to decompose the target information through a first decoder model for sentence-level decoding in the deconvolution neural network model, and to decode the processing result of the first decoder model through a second decoder model for word-level decoding in the deconvolution neural network model, determining the keywords in the target information. Because the complete text of the information to be displayed contains a large amount of text, it generally consists of different sentences, and each sentence can be decomposed into different words; the target information is therefore first decomposed by the sentence-level decoder, the result of the first decoder model is then decoded by the word-level decoder, and the keywords in the complete text are determined after the two successive decoding passes.
In an embodiment of the present invention, the information processing module 202 is configured to acquire the constituent elements of the target image based on the determined keywords in the target information. Because the image generation method runs on a server, the server can store image constituent elements, and a target image can be formed quickly by combining different stored elements. The successive processing by the first and second decoder models of the deconvolution neural network model yields the keywords of the complete text; by looking up the constituent elements of the target image corresponding to these keywords, the target image can be generated quickly through the deconvolution neural network model, reducing the waiting time for generating and outputting it.
In an embodiment of the present invention, the information processing module 202 is configured to perform cross processing on the constituent elements of the image corresponding to the target information through a deconvolution layer and an inverse pooling layer of the deconvolution neural network model to obtain a down-sampling result of the constituent elements of the target image; the information processing module 202 is further configured to process the down-sampling result through an inverse pooling layer of the deconvolution neural network model to form the target image to be output. Specifically, word-level decoding and sentence-level decoding may be performed on at least two types of text information of the picture through a bi-directional long short-term memory recurrent neural network (Bi-directional LSTM RNN), respectively, where the word-level decoding or the sentence-level decoding of the at least two types of text information may use the same decoder model. When the first decoder model is a sentence decoder, the second decoder model is a Long Short-Term Memory (LSTM) network.
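The roles of the two layer types can be sketched in plain Python with toy, single-channel versions; a real model would use learned kernels (for instance `torch.nn.ConvTranspose2d` and `torch.nn.MaxUnpool2d` in PyTorch), so the fixed 2x2 expansion and the hand-written kernel below are illustrative assumptions.

```python
def unpool2x(feature_map):
    # Inverse-pooling sketch: each value is expanded into a 2x2 block,
    # doubling both spatial dimensions of the feature map.
    out = []
    for row in feature_map:
        expanded = [v for v in row for _ in range(2)]
        out.append(expanded)
        out.append(list(expanded))
    return out

def transposed_conv1d(signal, kernel, stride=2):
    # Transposed-convolution sketch: upsample by inserting stride-1 zeros
    # between samples, then slide the kernel over the upsampled signal.
    upsampled = []
    for value in signal:
        upsampled.append(value)
        upsampled.extend([0.0] * (stride - 1))
    return [
        sum(upsampled[i + j] * kernel[j] for j in range(len(kernel)))
        for i in range(len(upsampled) - len(kernel) + 1)
    ]

small = unpool2x([[1, 2]])                 # → [[1, 1, 2, 2], [1, 1, 2, 2]]
wide = transposed_conv1d([1.0, 2.0], [1.0, 1.0])  # → [1.0, 2.0, 2.0]
```

Both operations enlarge a coarse feature map toward output resolution, which is how the stacked deconvolution and inverse-pooling layers turn constituent-element features into a full target image.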
In an embodiment of the present invention, the information processing module 202 is configured to determine, based on the target area feature information, the pixels of the target image through a deconvolution layer of the deconvolution neural network model, so that the target image to be output is adapted to the target area. Since the image generation method is applied to a server, and the target area varies across different types of terminals, the technical scheme shown in this embodiment determines the pixels of the target image through a deconvolution layer of the deconvolution neural network model, so that the dimensions of the formed target image to be output match the corresponding target display area.
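One concrete way a deconvolution layer controls the output pixel count is through its size arithmetic. The helpers below use the standard transposed-convolution output-size formula; the function names and the idea of tuning `output_padding` toward a terminal's target area are illustrative assumptions, not the patent's stated mechanism.

```python
def deconv_output_size(in_size, kernel, stride, padding=0, output_padding=0):
    # Standard spatial-size formula for a transposed-convolution layer.
    return (in_size - 1) * stride - 2 * padding + kernel + output_padding

def output_padding_for_target(in_size, kernel, stride, padding, target):
    # Pick the output_padding that makes the layer hit the target size exactly,
    # or None if the target is unreachable with these layer settings.
    diff = target - deconv_output_size(in_size, kernel, stride, padding)
    return diff if 0 <= diff < stride else None

deconv_output_size(32, kernel=4, stride=2, padding=1)   # → 64
output_padding_for_target(32, 4, 2, 1, target=65)       # → 1
```

Choosing layer parameters this way lets the server emit images whose dimensions fit each terminal's target display area without post-hoc rescaling.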
In one embodiment of the present invention, the server further includes:
and a training module (not shown in the figure) for training a deconvolution neural network model for generating a target image based on the information based on the image sample and the classification label of the image sample.
In an embodiment of the present invention, the training module is configured to train the first decoder model for sentence-level decoding in the deconvolution neural network model based on sentence samples in the information and the corresponding decoding results. Further, the training module is configured to train the second decoder model for word-level decoding in the deconvolution neural network model based on word samples in the information and the corresponding decoding results. With the technical scheme shown in this embodiment, the neural network model and the different decoders can be trained in a targeted manner, so that the adaptation parameters of the different decoders can be adjusted in time.
In an embodiment of the present invention, the training module is configured to update the adaptation algorithm and/or model parameters of the deconvolution neural network model according to the result of training the deconvolution neural network model that generates a target image from information, and to perform iterative training on the deconvolution neural network model based on the updated adaptation algorithm and/or model parameters. Because occasional errors may occur in the decoding processes of the neural network model and the different decoders, updating the adaptation algorithm and/or model parameters and performing iterative training, as in the technical scheme shown in this embodiment, reduces the probability of such occasional errors, so that the target image generated by the deconvolution neural network model better matches the information to be displayed.
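The update-and-retrain loop can be sketched abstractly. Here a toy one-parameter "model" stands in for the deconvolution network, and `update_hyper` plays the role of the adaptation-algorithm update between rounds; every name and the learning-rate decay rule are illustrative assumptions.

```python
def iterative_train(params, train_step, update_hyper, rounds=50, tol=1e-6):
    # Run a training round, update the adaptation hyper-parameters from the
    # round's result, and repeat until the loss stops improving.
    hyper = {"lr": 0.1}
    prev_loss = float("inf")
    loss = prev_loss
    for _ in range(rounds):
        params, loss = train_step(params, hyper)
        if abs(prev_loss - loss) < tol:
            break
        hyper = update_hyper(hyper, loss)  # adapt the training algorithm
        prev_loss = loss
    return params, loss

def train_step(p, hyper):
    # Toy model: one parameter trained toward the target value 3.0.
    p = p - hyper["lr"] * 2 * (p - 3.0)
    return p, (p - 3.0) ** 2

def update_hyper(hyper, loss):
    return {"lr": hyper["lr"] * 0.95}  # decay the learning rate each round

params, final_loss = iterative_train(0.0, train_step, update_hyper)
```

The point of the structure is that the adaptation update sits outside the training round, so the same loop works whether the update changes hyper-parameters, the algorithm, or the model architecture.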
Fig. 3 is an optional structural diagram of a server according to an embodiment of the present invention. The modules shown in fig. 3 are described below.
The image encoder 301 is configured to perform cross processing on the image through a deconvolution layer and a maximum inverse pooling layer of the deconvolution neural network model to obtain a down-sampling result of the image, and to process the down-sampling result through an average inverse pooling layer of the deconvolution neural network model to obtain the target image corresponding to the information to be displayed. Specifically, word-level decoding and sentence-level decoding may be performed on at least two types of text information of the picture through a bi-directional long short-term memory recurrent neural network (Bi-directional LSTM RNN), respectively, where the word-level decoding or the sentence-level decoding of the at least two types of text information may use the same decoder model. When the first decoder model is a sentence decoder, the second decoder model is a Long Short-Term Memory (LSTM) network.
The text decoder 302 is configured to decompose the target information through the first decoder model for sentence-level decoding in the deconvolution neural network model, so as to form a sentence-level decoding result.
The text decoder 303 is configured to decode the processing result of the first decoder model and determine the keywords in the target information.
Fig. 4A is a schematic view of an optional use scenario of the image generation method according to the embodiment of the present invention. As shown in fig. 4A, the information obtaining module of the server monitors whether the target information in the target area of the page changes, obtains a monitoring result, and acquires two pieces of information to be displayed. The information processing module determines that the display style is the sports display style according to the association relation of the at least two pieces of information to be displayed. The information processing module decomposes the two pieces of target information through the first decoder model for sentence-level decoding in the deconvolution neural network model, decodes the processing result of the first decoder model through the second decoder model for word-level decoding, and determines that the keywords in the target information are "sports" and "basketball". It then acquires the constituent elements of the target image based on the determined keywords and performs graphical processing on the corresponding constituent elements to form the target image. The information output module outputs the target image so that the target image is displayed in the target area.
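The scenario can be condensed into a single monitoring-and-rendering sketch. The `area` dictionary, the element table, and the style rule below are illustrative assumptions about the data shapes, not the patent's actual implementation.

```python
# Hypothetical keyword-to-element table stored on the server.
ELEMENTS = {"sports": "stadium_background", "basketball": "ball_icon"}

def monitor_and_render(area, new_info):
    # Step 1: monitor the target area; if the information is unchanged,
    # keep the currently displayed image (return None: nothing to redraw).
    if new_info == area.get("info"):
        return None
    area["info"] = new_info  # Step 2: acquire the information to be displayed.
    # Step 3: extract keywords and map them to stored constituent elements.
    keywords = sorted({w.strip(".").lower() for text in new_info for w in text.split()})
    elements = [ELEMENTS[k] for k in keywords if k in ELEMENTS]
    # Step 4: choose the display style from the association of the items.
    style = "sports" if {"sports", "basketball"} & set(keywords) else "default"
    return {"style": style, "elements": elements}

area = {}
image = monitor_and_render(area, ["Basketball news today", "More sports results"])
# image["style"] → 'sports'; a second call with identical info returns None
```

Returning `None` on an unchanged area mirrors the embodiment's design: the target image is regenerated and output only when the monitoring result shows the target information has changed.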
Fig. 4B is a schematic view of another optional use scenario of the image generation method according to the embodiment of the present invention. It differs from the scenario shown in fig. 4A in that the image generated by the server in fig. 4B needs to be output to the browser display interface of a mobile phone terminal. Therefore, on the basis of the processing procedure shown in fig. 4A, the pixels of the target image must additionally be determined through a deconvolution layer of the deconvolution neural network model based on the target area feature information, so that the target image to be output is adapted to the target area.
Fig. 5 is an alternative structural diagram of the server provided by the embodiment of the present invention. As shown in fig. 5, the server 500 may be a mobile phone, a computer, a digital broadcast terminal, an information transceiver device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or another device with an image generation function. The server 500 shown in fig. 5 includes: at least one processor 501, a memory 502, at least one network interface 504, and a user interface 503. The various components in the server 500 are coupled together by a bus system 505. It is understood that the bus system 505 is used to enable connection and communication between these components. In addition to a data bus, the bus system 505 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as bus system 505 in fig. 5.
The user interface 503 may include a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, or a touch screen, among others.
It will be appreciated that the memory 502 can be either volatile or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disk, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory can be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 502 described in the embodiments of the invention is intended to comprise these and any other suitable types of memory.
The memory 502 in embodiments of the present invention is used to store various types of data to support the operation of the server 500. Examples of such data include: any computer program for operating on the server 500, such as an operating system 5021 and application programs 5022, as well as image data, text data, and image generation programs. The operating system 5021 includes various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs 5022 may include various applications, such as a client or application with an image generation function, for implementing various application services. The program implementing the image generation method of the embodiment of the present invention may be included in the application programs 5022.
The method disclosed by the above embodiments of the present invention may be applied to the processor 501 or implemented by the processor 501. The processor 501 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the method may be carried out by integrated logic circuits of hardware in the processor 501 or by instructions in the form of software. The processor 501 may be a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The processor 501 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of the invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium in the memory 502; the processor 501 reads the information in the memory 502 and performs the steps of the aforementioned method in conjunction with its hardware.
In an exemplary embodiment, the server 500 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, Micro-Controller Units (MCUs), microprocessors, or other electronic components for performing the image generation method.
In an exemplary embodiment, the present invention further provides a computer-readable storage medium, such as the memory 502 comprising a computer program, which is executable by the processor 501 of the server 500 to perform the steps of the aforementioned method. The computer-readable storage medium can be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD-ROM, or may be one of various devices including one or any combination of the above memories, such as a mobile phone, computer, tablet device, or personal digital assistant.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs:
monitoring whether the target information in the target area of the page changes or not to obtain a monitoring result;
if the monitoring result shows that the target information changes, information to be displayed is obtained;
determining a target image based on the acquired information to be displayed, wherein the target image is used for representing the information to be displayed;
and outputting the target image so as to display the target image in the target area.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, embodiments of the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including magnetic disk storage, optical storage, and the like) having computer-usable program code embodied in the medium.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing apparatus produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above description is only exemplary of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents, improvements, etc. that are within the spirit and principle of the present invention should be included in the present invention.

Claims (10)

1. An image generation method, characterized in that the method comprises:
monitoring whether the target information in the target area of the page changes or not to obtain a monitoring result;
if the monitoring result shows that the target information changes, information to be displayed is obtained;
determining constituent elements corresponding to the information to be displayed based on acquired keywords of the information to be displayed; determining a target image based on the constituent elements, wherein the target image is used for representing the information to be displayed and corresponds to a display style of the information to be displayed;
and outputting the target image so as to display the target image in the target area.
2. The method of claim 1, further comprising:
when the number of the information to be displayed is at least two,
determining a display style according to an association relation of the at least two pieces of information to be displayed;
based on the determined display style, a target image corresponding to the respective information to be displayed is determined.
3. The method according to claim 1, wherein the determining a target image based on the acquired information to be displayed comprises:
and decomposing the complete text of the information to be displayed through an algorithm matched with a deconvolution neural network model, and determining a display style corresponding to the information to be displayed.
4. The method according to claim 1, wherein the determining a target image based on the acquired information to be displayed comprises:
processing the complete text of the information to be displayed through a deconvolution neural network model, and confirming corresponding target information;
and carrying out graphical processing on the determined target information to form a target image.
5. The method of claim 4, wherein the processing the complete text of the information to be displayed through the deconvolution neural network model to confirm the corresponding target information comprises:
decomposing the target information through a first decoder model for sentence-level decoding in the deconvolution neural network model;
decoding the processing result of the first decoder model through a second decoder model for word level decoding in the deconvolution neural network model, and determining the keywords in the target information.
6. The method of claim 4, wherein the graphically processing the determined target information to form a target image comprises:
performing cross processing on the constituent elements of the image corresponding to the target information through a deconvolution layer and an inverse pooling layer of a deconvolution neural network model to obtain a down-sampling result of the constituent elements of the target image;
and processing the down-sampling result through an inverse pooling layer of the deconvolution neural network model to form a target image to be output.
7. The method of claim 1, further comprising:
training a deconvolution neural network model for generating a target image based on information based on an image sample and classification label information of the image sample.
8. The method of claim 1, further comprising:
and determining pixels of the target image through a deconvolution layer of a deconvolution neural network model based on the target area characteristic information so as to realize that the target image to be output is matched with the target area.
9. A server, characterized in that the server comprises:
the information acquisition module is used for monitoring whether the target information in the target area of the page changes or not and acquiring a monitoring result;
the information acquisition module is used for acquiring information to be displayed;
the information processing module is used for determining constituent elements corresponding to the information to be displayed based on acquired keywords of the information to be displayed; and determining a target image based on the constituent elements, wherein the target image is used for representing the information to be displayed and corresponds to a display style of the information to be displayed;
and the information output module is used for outputting the target image so as to display the target image in the target area.
10. A server, characterized in that the server comprises:
a memory for storing executable instructions;
a processor for executing the executable instructions stored by the memory to perform the image generation method of any one of claims 1 to 8.
CN201811629005.5A 2018-12-28 2018-12-28 Image generation method and server Active CN109933320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811629005.5A CN109933320B (en) 2018-12-28 2018-12-28 Image generation method and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811629005.5A CN109933320B (en) 2018-12-28 2018-12-28 Image generation method and server

Publications (2)

Publication Number Publication Date
CN109933320A CN109933320A (en) 2019-06-25
CN109933320B true CN109933320B (en) 2021-05-18

Family

ID=66984888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811629005.5A Active CN109933320B (en) 2018-12-28 2018-12-28 Image generation method and server

Country Status (1)

Country Link
CN (1) CN109933320B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014045314A1 (en) * 2012-09-18 2014-03-27 株式会社ソニー・コンピュータエンタテインメント Information processing device and information processing method
CN104615764A (en) * 2015-02-13 2015-05-13 北京搜狗科技发展有限公司 Display method and electronic equipment
CN108182016A (en) * 2016-12-08 2018-06-19 Lg电子株式会社 Mobile terminal and its control method
CN108549850A (en) * 2018-03-27 2018-09-18 联想(北京)有限公司 A kind of image-recognizing method and electronic equipment
CN108959322A (en) * 2017-05-25 2018-12-07 富士通株式会社 Information processing method and device based on text generation image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106847294B (en) * 2017-01-17 2018-11-30 百度在线网络技术(北京)有限公司 Audio-frequency processing method and device based on artificial intelligence


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Convolutional neural network features based change detection in satellite images; Mohammed El Amin Larabi et al.; First International Workshop on Pattern Recognition; July 2016 *
Deep convolutional neural network learning based on deconvolution feature extraction; Lyu Enhui; Control and Decision; March 2018; vol. 33, no. 3, pp. 448-454 *

Also Published As

Publication number Publication date
CN109933320A (en) 2019-06-25

Similar Documents

Publication Publication Date Title
CN108549850B (en) Image identification method and electronic equipment
US20200142917A1 (en) Deep Reinforced Model for Abstractive Summarization
CN109583952B (en) Advertisement case processing method, device, equipment and computer readable storage medium
CN108228766B (en) Page generation method and device and storage medium
US9043300B2 (en) Input method editor integration
CN109087380B (en) Cartoon drawing generation method, device and storage medium
US7599838B2 (en) Speech animation with behavioral contexts for application scenarios
US20200327413A1 (en) Neural programming
CN110929094A (en) Video title processing method and device
CN107479868B (en) Interface loading method, device and equipment
CN109040767B (en) Live broadcast room loading method, system, server and storage medium
US9946712B2 (en) Techniques for user identification of and translation of media
CN112799658B (en) Model training method, model training platform, electronic device, and storage medium
CN114356479B (en) Page rendering method and device
CN109933320B (en) Image generation method and server
US20160342284A1 (en) Electronic device and note reminder method
CN112905944A (en) Page online dynamic generation method and device, electronic equipment and readable storage medium
CN108920241B (en) Display state adjusting method, device and equipment
CN112085103A (en) Data enhancement method, device and equipment based on historical behaviors and storage medium
CN111241274A (en) Criminal law document processing method and device, storage medium and electronic device
CN116862595A (en) Advertisement landing page generation method and device, electronic equipment and storage medium
US20240037896A1 (en) Method of constructing transformer model for answering questions about video story and computing apparatus for performing the same
CN112241453B (en) Emotion attribute determining method and device and electronic equipment
CN116303937A (en) Reply method, reply device, electronic equipment and readable storage medium
CN116129210A (en) Training method of feature extraction model, feature extraction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant