CN116501217A - Visual data processing method, visual data processing device, computer equipment and readable storage medium

Info

Publication number
CN116501217A
CN116501217A (application CN202310758944.4A)
Authority
CN
China
Prior art keywords
visual data
style
user
candidate
predictive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310758944.4A
Other languages
Chinese (zh)
Other versions
CN116501217B (en)
Inventor
高熙和
张�浩
张磊
Current Assignee
Hanbo Semiconductor Shanghai Co ltd
Original Assignee
Hanbo Semiconductor Shanghai Co ltd
Priority date
Filing date
Publication date
Application filed by Hanbo Semiconductor Shanghai Co ltd filed Critical Hanbo Semiconductor Shanghai Co ltd
Priority to CN202310758944.4A priority Critical patent/CN116501217B/en
Publication of CN116501217A publication Critical patent/CN116501217A/en
Application granted granted Critical
Publication of CN116501217B publication Critical patent/CN116501217B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F 3/0484: Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a visual data processing method, apparatus, computer device, and readable storage medium. The method comprises the following steps: acquiring visual data to be processed; stylizing the visual data to be processed to obtain base style visual data; performing style adjustment on the base style visual data to generate a plurality of candidate style visual data for display on an interactive interface; determining, based on the user's selection among the plurality of candidate style visual data, predictive stylization parameters associated with the style of interest to the user; and performing at least one cycle of: adding correction terms to the predictive stylization parameters to generate multiple sets of corrected predictive stylization parameters; generating corrected candidate style visual data for display on the interactive interface based on the multiple sets of corrected predictive stylization parameters; in response to the user indicating the end of the interactive operation, obtaining visual data conforming to the user's style of interest based on the user's current selection; and in response to the user not indicating the end of the interactive operation, executing the next cycle.

Description

Visual data processing method, visual data processing device, computer equipment and readable storage medium
Technical Field
The present disclosure relates to the field of visual data processing technology, and in particular, to a visual data processing method, a visual data processing device, a computer device, and a computer readable storage medium.
Background
When a picture or video is processed, image style transfer techniques can impart the artistic style of a reference style image to it, turning an ordinary picture or video into one with a specific style. However, the same style does not suit all pictures or videos. As image processing technology develops, users place higher demands on the stylization of pictures and videos. How to optimize the style of a picture or video and achieve a better image processing effect remains one of the research hotspots and difficulties in the industry.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides a visual data processing method, apparatus, computer device, and computer-readable storage medium.
According to an aspect of the present disclosure, there is provided a visual data processing method comprising: acquiring visual data to be processed; stylizing the visual data to be processed to obtain at least one base style visual data having at least one base style, each base style visual data having a set of stylization parameters characterizing the corresponding base style; performing style adjustment on the at least one base style visual data to generate a plurality of candidate style visual data, each candidate style visual data having a corresponding candidate style; displaying the plurality of candidate style visual data on an interactive interface; determining, based on the user's selection among the candidate style visual data presented on the interactive interface, at least one set of predictive stylization parameters associated with the style of interest to the user; and performing at least one cycle of the following operations to obtain visual data conforming to that style: adding correction terms to the at least one set of predictive stylization parameters to generate a plurality of sets of corrected predictive stylization parameters; generating a plurality of corrected candidate style visual data based on the plurality of sets of corrected predictive stylization parameters; displaying the plurality of corrected candidate style visual data on the interactive interface for the user's current selection; determining whether the user indicates the end of the interactive operation after making the current selection; in response to determining that the user so indicates, obtaining visual data conforming to the style of interest based on the user's current selection; and in response to determining that the user does not so indicate, obtaining, from among the plurality of sets of corrected predictive stylization parameters, at least one set of predictive stylization parameters determined based on the current selection, and performing the next cycle.
According to another aspect of the present disclosure, there is provided a visual data processing apparatus comprising: a visual data acquisition module configured to acquire visual data to be processed; a visual data stylization module configured to stylize the visual data to be processed to obtain at least one base style visual data having at least one base style, each base style visual data having a set of stylization parameters characterizing the corresponding base style; a visual data style adjustment module configured to perform style adjustment on the at least one base style visual data to generate a plurality of candidate style visual data, each candidate style visual data having a corresponding candidate style; a first visual data display module configured to display the plurality of candidate style visual data on an interactive interface; a predictive stylization parameter determination module configured to determine, based on the user's selection among the candidate style visual data presented on the interactive interface, at least one set of predictive stylization parameters associated with the style of interest to the user; and a first loop execution module configured to execute at least one loop of operations to obtain visual data conforming to that style, wherein the first loop execution module includes: a correction term adding module configured to add correction terms to the at least one set of predictive stylization parameters to generate a plurality of sets of corrected predictive stylization parameters; a candidate style visual data generation module configured to generate a plurality of corrected candidate style visual data based on the plurality of sets of corrected predictive stylization parameters; a second visual data presentation module configured to present the plurality of corrected candidate style visual data on the interactive interface for the user's current selection; a determining module configured to determine whether the user indicates the end of the interactive operation after making the current selection; an obtaining module configured to obtain, in response to determining that the user so indicates, visual data conforming to the style of interest based on the user's current selection; and a second loop execution module configured to obtain, in response to determining that the user does not so indicate, at least one set of predictive stylization parameters determined based on the current selection from among the plurality of sets of corrected predictive stylization parameters, and to execute the next loop.
According to another aspect of the present disclosure, there is provided a computer apparatus comprising: at least one processor; and a memory having stored thereon a computer program which, when executed by the processor, causes the processor to perform the method of the present disclosure as provided above.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the method of the present disclosure as provided above.
According to one or more embodiments of the present disclosure, visual data conforming to the user's style of interest can be obtained, thereby meeting the user's need for personalized customization.
These and other aspects of the disclosure will be apparent from and elucidated with reference to the embodiments described hereinafter.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for example only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 is a schematic diagram illustrating an example system in which various methods described herein may be implemented, according to an example embodiment;
FIG. 2 is a flowchart illustrating a visual data processing method according to an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating a visual data processing apparatus according to an example embodiment;
FIG. 4 is a table illustrating example parameters of a stylized parameter according to an example embodiment;
FIG. 5 is a schematic block diagram illustrating a visual data processing apparatus according to an example embodiment;
FIG. 6 is a block diagram illustrating an exemplary computer device that can be applied to exemplary embodiments.
Detailed Description
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
In the related art, when a picture or video is processed, an artistic style of a style picture can be obtained through an image style migration technology, so that a common picture is changed into a picture or video with a specific style. But the same style is not suitable for all pictures or videos.
One conventional approach is to stylize a picture or video with artificial intelligence (AI). Stylizing visual data this way requires a pre-trained neural network model; the number and types of styles available to the user are tied to that model, and one pre-trained neural network model yields only one style mode. Consequently, every user sees the same limited set of style modes, multiple style modes cannot be generated in real time for the user to choose from, and fixed style modes can hardly satisfy every user. The conventional approach thus does not give users enough freedom of operation to meet the demand for personalized customization of picture or video styles.
In order to meet the personalized customization needs of users for pictures or video styles, the present disclosure provides a visual data processing method.
Exemplary embodiments of the present disclosure are described in detail below with reference to the attached drawings. Before describing in detail the visual data processing method according to embodiments of the present disclosure, an example system in which the present method may be implemented is first described.
FIG. 1 is a schematic diagram illustrating an example system 100 in which various methods described herein may be implemented, according to an example embodiment.
Referring to fig. 1, the system 100 includes a client device 110, a server 120, and a network 130 communicatively coupling the client device 110 with the server 120.
The client device 110 includes a display 114 and a client application (APP) 112 displayable via the display 114. The client application 112 may be an application program that must be downloaded and installed before running, or an applet (lite app), i.e., a lightweight application program. In the former case, the client application 112 may be pre-installed on the client device 110 and activated. In the latter case, the user 102 may run the client application 112 directly on the client device 110, without installing it, by searching for it in a host application (e.g., by its name) or by scanning its graphical code (e.g., a barcode or QR code). In some embodiments, the client device 110 may be any type of mobile computer device, including a mobile computer, a mobile phone, a wearable computer device (e.g., a smart watch or a head-mounted device such as smart glasses), or another type of mobile device. In some embodiments, the client device 110 may alternatively be a stationary computer device, such as a desktop computer, a server computer, or another type of stationary computer device.
Server 120 is typically a server deployed by an Internet Service Provider (ISP) or Internet Content Provider (ICP). Server 120 may represent a single server, a cluster of multiple servers, a distributed system, or a cloud server providing basic cloud services (such as cloud databases, cloud computing, cloud storage, cloud communication). It will be appreciated that although server 120 is shown in fig. 1 as communicating with only one client device 110, server 120 may provide background services for multiple client devices simultaneously.
Examples of network 130 include a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), and/or a combination of communication networks such as the Internet. The network 130 may be a wired or wireless network. In some embodiments, the data exchanged over the network 130 is processed using techniques and/or formats including HyperText Markup Language (HTML), eXtensible Markup Language (XML), and the like. In addition, all or some of the links may also be encrypted using encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), Internet Protocol Security (IPsec), and the like. In some embodiments, custom and/or dedicated data communication techniques may also be used in place of, or in addition to, the data communication techniques described above.
For purposes of embodiments of the present disclosure, in the example of fig. 1, the client application 112 may be a visual data processing application. Correspondingly, the server 120 may be a server used with the visual data processing application. The server 120 may provide visual data to the client device 110, with visual data processing services provided by the client application 112 running on the client device 110.
Fig. 2 is a flowchart illustrating a visual data processing method 200 according to an exemplary embodiment. In some embodiments, the visual data processing method 200 may be performed at a client device (e.g., the client device 110 shown in fig. 1). In some embodiments, the visual data processing method 200 may be performed at a server (e.g., the server 120 shown in fig. 1). In some embodiments, the visual data processing method 200 may be performed by a client device (e.g., client device 110) and a server (e.g., server 120) in combination.
Referring to fig. 2, at step S210, visual data to be processed is acquired.
In an example, the visual data may be visual content displayable on a display device (e.g., a television, monitor, tablet, smart phone, etc.). By way of example, visual data may include pictures, video or game streams, and the like.
In an example, the visual data to be processed may be a picture or video taken by the client device 110 shown in fig. 1, a picture, video or game stream acquired by the client device 110 via the network 130, or a picture, video or game stream stored in a local storage device of the client device 110.
At step S220, the visual data to be processed is stylized to obtain at least one base style visual data having at least one base style, each base style visual data having a set of stylized parameters to characterize a corresponding base style.
In an example, the base style is a preset visual data style. For example, the base style may include one or more of antique, pictorial, literature, and the like.
In an example, the visual data to be processed is stylized according to the base style, that is, the style of the visual data to be processed is converted to be the same as the base style, thereby obtaining the base style visual data having the base style. The number of base style visual data may correspond to the number of base styles.
In an example, the stylized parameters may include contrast parameters, brightness parameters, and color saturation parameters, among others. Different base style visual data may have different numbers of stylized parameters in their sets of stylized parameters, as well as different values for each stylized parameter.
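A set of stylization parameters like those named above (contrast, brightness, color saturation) can be represented as a simple record. This is a minimal Python sketch for illustration only; the class name, field names, and all numeric values are assumptions, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class StyleParams:
    """One set of stylization parameters characterizing a base style."""
    contrast: float = 1.0     # 1.0 = unchanged
    brightness: float = 0.0   # 0.0 = unchanged
    saturation: float = 1.0   # 1.0 = unchanged

# A hypothetical "antique" base style: lower contrast, slightly darker, desaturated.
antique = StyleParams(contrast=0.9, brightness=-0.05, saturation=0.6)
print(antique.saturation)  # 0.6
```

Each base style would carry one such parameter set; rendering the style then amounts to applying these values to the visual data.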
In step S230, the at least one base style visual data is adjusted in style to generate a plurality of candidate style visual data, where each candidate style visual data has a corresponding candidate style.
In an example, when there is one base style visual data, a plurality of candidate style visual data may be generated by adjusting different stylized parameters of that base style visual data. When there are multiple base style visual data, the stylized parameters of each may be adjusted separately to generate multiple candidate style visual data; the base style visual data may also be blended with one another to generate candidates; or they may be blended and the stylized parameters of the blended result then adjusted. In other words, an unlimited number of candidate style visual data can be generated by applying style adjustments to a limited number of base style visual data.
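The two adjustment strategies just described, perturbing the parameters of a single base style and blending two base styles, can be sketched as follows. The function names, parameter fields, and numeric values are all hypothetical:

```python
import random
from dataclasses import dataclass, replace

@dataclass
class StyleParams:
    contrast: float = 1.0
    brightness: float = 0.0
    saturation: float = 1.0

def perturb(base: StyleParams, scale: float, rng: random.Random) -> StyleParams:
    """Derive one candidate style by randomly adjusting each parameter of a base style."""
    return replace(
        base,
        contrast=base.contrast + rng.uniform(-scale, scale),
        brightness=base.brightness + rng.uniform(-scale, scale),
        saturation=base.saturation + rng.uniform(-scale, scale),
    )

def blend(a: StyleParams, b: StyleParams, w: float) -> StyleParams:
    """Derive one candidate style by mixing two base styles with weight w."""
    return StyleParams(
        contrast=w * a.contrast + (1 - w) * b.contrast,
        brightness=w * a.brightness + (1 - w) * b.brightness,
        saturation=w * a.saturation + (1 - w) * b.saturation,
    )

rng = random.Random(0)
antique = StyleParams(0.9, -0.05, 0.6)
pictorial = StyleParams(1.2, 0.1, 1.4)
# A few candidates from one base style, plus a few blends of two base styles.
candidates = [perturb(antique, 0.2, rng) for _ in range(4)]
candidates += [blend(antique, pictorial, w) for w in (0.25, 0.5, 0.75)]
print(len(candidates))  # 7
```

Because `perturb` can be called arbitrarily many times and `blend` accepts any weight, a limited number of base styles yields an unlimited supply of candidates, which is the point the paragraph above makes.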
In an example, the candidate style corresponding to each candidate style visual data is distinct from the others.
In step S240, the plurality of candidate style visual data are displayed on the interactive interface.
In an example, the plurality of candidate style visual data may be presented on the interactive interface in any order. For example, when the number of candidate style visual data is nine, the candidate style visual data may be arranged in three rows and three columns. In the present disclosure, the specific location of each candidate style visual data on the interactive interface is not limited.
In an example, the candidate style visual data displayed on the interactive interface may be pictures or videos; each candidate may be displayed in picture form or video form depending on the acquired visual data to be processed. Presenting each candidate clearly allows the user to compare the differences between candidates, and thus to conveniently and efficiently select the candidate style visual data of interest from those displayed on the interactive interface.
At step S250, at least one set of predictive stylized parameters associated with a style of interest to the user is determined based on the user' S selection of the plurality of candidate style visual data presented on the interactive interface.
In an example, each of the plurality of candidate style visual data presented on the interactive interface has a corresponding candidate style. The user can select candidate style visual data according to his or her own interests from among those displayed. The number of candidate style visual data selected by the user is less than the total number of candidates, and may specifically be one or more.
In an example, each candidate style visual data selected by the user corresponds to a set of predictive stylized parameters. Thus, based on the candidate style visual data selected by the user, predictive stylization parameters associated with the style of interest to the user may be determined. The user can select at least one candidate style visual data from the plurality of candidate style visual data displayed according to his own interests. Accordingly, at least one set of predictive stylized parameters associated with the style of interest to the user may be determined.
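As described, step S250 reduces to mapping the user's selections back to the parameter sets of the selected candidates. A minimal sketch, with hypothetical names and values:

```python
def predictive_params(candidates, selected_indices):
    """Each selected candidate's parameter set becomes one set of
    predictive stylization parameters (a sketch of step S250)."""
    return [candidates[i] for i in selected_indices]

# Hypothetical candidates, each reduced to a single-parameter set for brevity.
candidates = [{"contrast": c} for c in (0.8, 1.0, 1.2)]
print(predictive_params(candidates, [0, 2]))  # [{'contrast': 0.8}, {'contrast': 1.2}]
```

Selecting one candidate yields one set of predictive parameters; selecting several yields several, matching the "at least one set" wording above.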
At step S260, at least one cycle including the following operations is performed to obtain visual data conforming to a style of interest to the user:
in step S261, a correction term is added to the at least one set of predictive stylized parameters to generate a plurality of sets of corrected predictive stylized parameters.
In an example, the correction term is added to at least one set of predictive stylized parameters, and all parameters in the at least one set of predictive stylized parameters may be corrected, or some parameters in the at least one set of predictive stylized parameters may be corrected. That is, by adding correction terms, more sets of predictive stylized parameters, i.e., corrected predictive stylized parameters, may be derived from the current set of predictive stylized parameters to enable a gradual approximation of the style of interest to the user through at least one cycle, thereby enabling visual data style customization.
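Step S261 as described can be sketched as follows. The uniform random correction, the shrinking scale, and all names are assumptions about one plausible realization, not the patent's prescribed method:

```python
import random

def add_correction_terms(param_sets, n_out, scale, rng):
    """Expand the selected parameter sets into n_out corrected sets by
    adding a small random correction term to each parameter (step S261).
    Shrinking `scale` on successive cycles lets the candidates gradually
    approximate the user's style of interest."""
    corrected = []
    for i in range(n_out):
        base = param_sets[i % len(param_sets)]  # cycle through the selected sets
        corrected.append({k: v + rng.uniform(-scale, scale) for k, v in base.items()})
    return corrected

rng = random.Random(42)
sets = add_correction_terms([{"contrast": 1.0, "saturation": 0.8}], 9, 0.1, rng)
print(len(sets))  # 9
```

Note that every parameter is corrected here; as the paragraph above says, a realization could equally correct only a subset of the parameters.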
In step S262, a plurality of modified candidate style visual data is generated based on the plurality of sets of modified predictive stylized parameters.
In an example, the number of sets of modified predictive stylized parameters may be the same as the number of candidate style visual data presented on the interactive interface, and based on the set of modified predictive stylized parameters, one modified candidate style visual data may be generated. I.e., based on the plurality of sets of modified predictive stylized parameters, a plurality of modified candidate style visual data may be generated.
In step S263, the plurality of modified candidate style visual data are displayed on the interactive interface for the user to make a current selection.
In an example, the user may select among the plurality of corrected candidate style visual data presented on the interactive interface. If the user selects visual data that matches the style of interest, the user may indicate the end of the interactive operation, and this selection becomes the user's last.
In step S264, it is determined whether the user instructs to end the interactive operation after making the current selection.
In an example, the user may indicate ending the interactive operation by an end button displayed on the interactive interface, or may indicate ending the interactive operation by not selecting on the interactive interface for a certain period of time.
In step S265, in response to determining that the user indicates to end the interactive operation after making the current selection, visual data conforming to the style of interest to the user is obtained based on the current selection of the user.
In an example, if the user indicates to end the interactive operation after making the current selection, the visual data currently selected by the user among the plurality of revised candidate style visual data is the visual data of the style of interest to the user.
In step S266, at least one set of predictive stylized parameters determined based on the current selection of the user among the plurality of sets of corrected predictive stylized parameters is acquired and the next cycle is performed in response to determining that the user does not instruct the end of the interactive operation after the current selection is made.
In an example, if the user does not indicate the end of the interactive operation after making the current selection, at least one set of predictive stylized parameters among the plurality of sets of corrected predictive stylized parameters is determined based on the visual data currently selected by the user, and steps S261 to S264 are performed again, until the process proceeds to step S265, i.e., until the user indicates the end of the interactive operation after making a current selection.
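The loop of steps S261 to S266 can be sketched end to end. The `pick` and `is_done` callbacks stand in for the interactive interface; the candidate count, correction scale, decay factor, and cycle cap are all assumed values for illustration:

```python
import random

def refine_style(initial_sets, pick, is_done, n_candidates=9,
                 scale=0.2, decay=0.5, max_cycles=10, seed=0):
    """Sketch of the refinement loop: correct the predictive parameter
    sets (S261), offer candidates for selection (S262/S263), and repeat
    until the user indicates the end of the interaction (S264-S266)."""
    rng = random.Random(seed)
    current = list(initial_sets)
    for _ in range(max_cycles):
        # S261: expand the current sets with random correction terms.
        corrected = []
        for i in range(n_candidates):
            base = current[i % len(current)]
            corrected.append({k: v + rng.uniform(-scale, scale)
                              for k, v in base.items()})
        # S262/S263: the user picks among the corrected candidates.
        chosen = [corrected[i] for i in pick(corrected)]
        # S264/S265: if the user ends the interaction, return the picks.
        if is_done():
            return chosen
        # S266: otherwise refine around the picks, with a smaller correction.
        current, scale = chosen, scale * decay
    return current

# Simulated user: always picks candidate 0, ends after the second cycle.
clicks = iter([False, True])
result = refine_style([{"contrast": 1.0}], pick=lambda c: [0],
                      is_done=lambda: next(clicks))
print(len(result))  # 1
```

Shrinking `scale` each cycle is one way to make the candidates converge on the user's style of interest; the patent itself does not specify how the correction terms evolve between cycles.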
According to the embodiments of the present disclosure, a limited number of base style visual data, generated by stylizing the visual data to be processed, are style-adjusted to produce a plurality of candidate style visual data displayed on the interactive interface. The user selects candidates according to his or her own interests, and a cyclic operation then repeatedly corrects the selected candidate style visual data until the user selects visual data conforming to the style of interest. Because style adjustment turns a limited number of base styles into an unlimited number of candidates, and each cycle adds correction terms to at least one set of predictive stylized parameters associated with the user's selections, the resulting style visual data better matches the user's own requirements and meets the need for personalized customization. At the same time, the user obtains the style visual data of interest simply through interactive selection, making the visual data processing workflow simpler.
According to embodiments of the present disclosure, the visual data processing method of the present disclosure may be applied to interactive deployment of video data enhancement and stylization. It can be appreciated that the visual data processing method of the present disclosure can provide a visual data style customization service for a user: once the user's preference (the visual data of the style of interest to the user) has been efficiently acquired, it can be used for video data enhancement and stylization, so that the method can be applied to service scenarios such as video or games. For example, before video processing or game start, visual data in the video or game can be processed, at the cloud or at the terminal, on the basis of the user preference to obtain visual data conforming to the style of interest to the user; subsequent video processing and game processing can then be performed on the basis of the visual data in the style of interest to the user. The visual data style can thus be customized through a single interactive operation process, and the customized visual data style can be reused for multiple subsequent visual data enhancement and stylization tasks.
According to some embodiments, in step S220, the stylizing the visual data to be processed to obtain at least one base style visual data having at least one base style may include: the visual data to be processed is stylized by a static processing module for providing at least one base style to obtain at least one base style visual data having at least one base style, wherein the static processing module may comprise at least one pre-trained neural network model corresponding to the at least one base style, respectively.
In an example, the neural network model may be any type of neural network model capable of stylizing visual data, for example, a Deep Neural Network (DNN) model, such as a Convolutional Neural Network (CNN) model, a Recurrent Neural Network (RNN) model, and so forth, to which the present disclosure is not limited.
In an example, the static processing module may include a plurality of pre-trained neural network models, each of which may stylize visual data to be processed to obtain a base style visual data having a base style. For example, the static processing module may include three pre-trained neural network models, so that base style visual data having three different base styles may be obtained by the static processing module.
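As a concrete illustration, the static processing module described above could be sketched as follows. This is a minimal, hypothetical sketch: the class name and the stand-in "models" are assumptions for illustration only and are not part of the disclosure; in practice each entry would be a pre-trained neural network model corresponding to one base style.

```python
class StaticProcessingModule:
    """Holds one pre-trained stylization model per base style."""

    def __init__(self, models):
        # models: list of callables, one pre-trained model per base style
        self.models = models

    def stylize(self, visual_data):
        # Apply every base-style model to the same input, yielding one
        # base-style output per model.
        return [model(visual_data) for model in self.models]


# Example with three toy "models" standing in for pre-trained networks,
# mirroring the three-base-style example in the text.
toy_models = [
    lambda x: ("style_a", x),
    lambda x: ("style_b", x),
    lambda x: ("style_c", x),
]
module = StaticProcessingModule(toy_models)
outputs = module.stylize("frame_0")  # one output per base style
```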
According to the embodiment of the disclosure, basic stylization of the visual data to be processed can be realized quickly and simply by using the static processing module, so that candidate style visual data approximating the style of interest to the user can be derived from the obtained base style visual data.
According to some embodiments, in step S260, the at least one cycle may be performed by a dynamic processing module coupled to the static processing module, which may adaptively adjust the at least one set of predictive stylized parameters based on a current selection by a user, and wherein the dynamic processing module may be further configured to perform the step S230 described above.
In an example, a pre-trained AI model is used as the reference model of the static processing module for stylizing the visual data to be processed to obtain at least one base style visual data having at least one base style. The dynamic processing module is coupled to the static processing module and can perform style adjustment on the at least one base style visual data having the at least one base style, so that a plurality of candidate styles can be generated for the user to select from. Meanwhile, the dynamic processing module can also adaptively adjust the at least one set of predictive stylized parameters, which is determined based on the user's selection of the plurality of candidate style visual data displayed on the interactive interface and is associated with the style of interest to the user, thereby realizing customization of the visual data style.
In an example, the dynamic processing module may execute at least one visual data processing algorithm.
In an example, the dynamic processing module may be coupled to the static processing module; the static processing module may input the at least one base style visual data having the at least one base style, obtained by stylizing the visual data to be processed, to the dynamic processing module, and the dynamic processing module may then perform style adjustment on the at least one base style visual data.
In an example, the dynamic processing module may add a correction term to at least one set of predictive stylized parameters, determined by the user's current selection and associated with a style of interest to the user, to generate a plurality of sets of corrected predictive stylized parameters.
According to embodiments of the present disclosure, the user's selection may be used as feedback to gradually approximate the style of interest to the user by means of adaptive adjustment of the stylized parameters by the dynamic processing module.
Fig. 3 is a schematic diagram illustrating a visual data processing apparatus 300 according to an exemplary embodiment. As shown in fig. 3, the visual data processing apparatus 300 may include a static processing module 301, a dynamic processing module 302, and an interactive interface 303. The static processing module 301 may include at least one pre-trained neural network model corresponding to at least one base style, respectively, and the dynamic processing module 302 may adaptively adjust at least one set of predictive stylized parameters based on the candidate styles currently selected by the user, and may present a plurality of candidate style visual data on the interactive interface 303.
The overall process of the visual data processing apparatus 300 may be as follows: the static processing module 301 stylizes the visual data to be processed to obtain at least one base style visual data (e.g., three base style visual data) having at least one base style; the dynamic processing module 302 performs style adjustment on the at least one base style visual data to generate a plurality of candidate style visual data; and the plurality of candidate style visual data is presented on the interactive interface 303 (e.g., the nine candidate style visual data shown in fig. 3). The user may select a style of interest from the plurality of candidate style visual data presented on the interactive interface 303, whereby at least one set of predictive stylized parameters associated with the style of interest to the user is determined (e.g., the user selects one of the nine candidate style visual data by clicking, so that a set of predictive stylized parameters is determined based on that candidate style visual data). The dynamic processing module 302 may add correction terms to the at least one set of predictive stylized parameters to generate multiple sets of corrected predictive stylized parameters (e.g., nine sets of corrected predictive stylized parameters); a plurality of modified candidate style visual data (e.g., nine) generated based on the multiple sets of corrected predictive stylized parameters is then presented on the interactive interface 303 for current selection by the user. If the user selects visual data that conforms to his or her style of interest, the user can instruct to end the interactive operation after making the current selection; if the user does not select such visual data, i.e., does not indicate to end the interactive operation, then after the user's current selection the dynamic processing module 302 may perform the next cycle, until the user selects visual data that conforms to the user's style of interest.
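The overall interactive process just described can be sketched in simplified form. All names here are hypothetical assumptions for illustration: parameter sets are plain lists of floats, candidate rendering is omitted, and the user is simulated by a callable that returns the index of the chosen candidate together with a stop flag.

```python
import random


def interactive_loop(initial_params, choose, n_candidates=9,
                     step=0.2, decay=0.8):
    """Iteratively refine a parameter set from simulated user choices."""
    params = initial_params
    while True:
        # Generate candidates by perturbing the currently selected
        # parameter set with a step-scaled standard-normal correction.
        candidates = [
            [p + step * random.gauss(0.0, 1.0) for p in params]
            for _ in range(n_candidates)
        ]
        index, done = choose(candidates)
        params = candidates[index]
        if done:               # user indicated the end of the interaction
            return params
        step *= decay          # correction step shrinks each cycle


# Simulated "user": always picks candidate 0 and stops after 3 cycles.
state = {"cycles": 0}

def choose(cands):
    state["cycles"] += 1
    return 0, state["cycles"] >= 3


final = interactive_loop([0.5] * 8, choose)
```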
According to some embodiments, in steps S220 to S266, the stylized parameters may include at least one base style parameter and additional style parameters, and the additional style parameters may include at least one of a brightness parameter, a contrast parameter, a detail enhancement strength parameter, a color saturation parameter, and a gamma correction parameter, wherein the at least one base style parameter and the additional style parameters have respective weights that are determined based on the corresponding base style.
Fig. 4 is a table illustrating example parameters of a stylized parameter according to an example embodiment.
As shown in fig. 4, the stylized parameters may include three base style parameters and additional style parameters. The base style parameters may correspond to the base style visual data in the foregoing embodiment, and may include a base style parameter 1, a base style parameter 2, and a base style parameter 3, where the respective weights of the base style parameter 1, the base style parameter 2, and the base style parameter 3 may each be, for example, 1/3. The additional style parameters may include a brightness parameter, a contrast parameter, a detail enhancement strength parameter, a color saturation parameter, and a gamma correction parameter, and the respective weights of the brightness parameter, the contrast parameter, the detail enhancement strength parameter, the color saturation parameter, and the gamma correction parameter may each be, for example, 0.5.
In the example, the values of the base style parameter 1, the base style parameter 2, the base style parameter 3, the brightness parameter, the contrast parameter, the detail enhancement strength parameter, the color saturation parameter, and the gamma correction parameter all range from 0 to 1.
In an example, the weights corresponding to the base style parameters and the additional style parameters are determined based on the corresponding base styles. The values of these parameters are shown in fig. 4 by way of example only, but those skilled in the art will appreciate that the present disclosure is not limited to these specific exemplary values.
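As a concrete illustration, the example parameter table of Fig. 4 could be encoded as follows. This is a minimal sketch; the key names and container layout are hypothetical, not part of the disclosure. It reflects the example above: three base style parameters weighted 1/3 each, five additional style parameters weighted 0.5 each, and all values constrained to [0, 1].

```python
BASE_WEIGHT = 1.0 / 3.0   # example weight for each base style parameter
EXTRA_WEIGHT = 0.5        # example weight for each additional parameter

stylized_params = {
    "base_style_1": {"value": 1.0, "weight": BASE_WEIGHT},
    "base_style_2": {"value": 0.0, "weight": BASE_WEIGHT},
    "base_style_3": {"value": 0.0, "weight": BASE_WEIGHT},
    "brightness":   {"value": 0.5, "weight": EXTRA_WEIGHT},
    "contrast":     {"value": 0.5, "weight": EXTRA_WEIGHT},
    "detail":       {"value": 0.5, "weight": EXTRA_WEIGHT},
    "saturation":   {"value": 0.5, "weight": EXTRA_WEIGHT},
    "gamma":        {"value": 0.5, "weight": EXTRA_WEIGHT},
}


def is_valid(params):
    # All parameter values must lie in [0, 1], per the example above.
    return all(0.0 <= p["value"] <= 1.0 for p in params.values())
```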
According to some embodiments, in step S230, performing a style adjustment on the at least one base style visual data includes performing at least one of the following: changing the weights of the at least one base style parameter and the additional style parameters; and blending the at least one base style visual data.
In an example, the stylistic adjustment to the base style visual data may be a change in the weight of any one or more of the base style parameters and the additional style parameters for each base style visual data, where the change may include an increase or a decrease.
In an example, when there are multiple base style visual data, the style adjustment may be performed by blending the base style visual data according to their weights.
In an example, when there are multiple base style visual data, the style adjustment may be performed by blending the base style visual data according to their weights and then changing the weights of any one or more of the base style parameters and the additional style parameters of the blended visual data. For example, multiple base style visual data may be blended by weight; the blending operation may be performed in a perceptually balanced color space, such as CIELAB. The blended visual data may then undergo a series of post-processing steps, which may include, for example, at least one of brightness adjustment, contrast adjustment, detail enhancement, color adjustment, and gamma correction.
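A simplified sketch of the weighted blending step might look as follows. For brevity the blend is performed directly on per-pixel RGB triples; the disclosure suggests performing the mixing in a perceptually balanced color space such as CIELAB, which would require converting to and from that space around the weighted sum. All function and variable names are illustrative assumptions.

```python
def blend(images, weights):
    """Weighted per-pixel blend of equal-sized images.

    images: list of images, each a list of (r, g, b) tuples in [0, 1]
    weights: one non-negative weight per image; normalized here
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    out = []
    for i in range(len(images[0])):
        # Weighted sum of each color channel across all input images.
        px = tuple(
            sum(w * img[i][c] for w, img in zip(norm, images))
            for c in range(3)
        )
        out.append(px)
    return out


a = [(1.0, 0.0, 0.0)]            # one red pixel
b = [(0.0, 0.0, 1.0)]            # one blue pixel
mixed = blend([a, b], [1.0, 1.0])  # equal weights
```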
According to embodiments of the present disclosure, the base style visual data may be adjusted from a variety of different dimensions in order to derive candidate style visual data that approximates the style of interest to the user.
According to some embodiments, in step S261, the correction term may include a product of a correction step size and a set of random numbers, wherein the correction step size may be decremented with each cycle, and the number of random numbers in the set of random numbers may be the same as the number of stylized parameters in the set of stylized parameters, and wherein the number of sets of corrected predictive stylized parameters may be the same as the number of the plurality of candidate style visual data presented on the interactive interface.
Illustratively, the correction term may be S×Rj, where S is the correction step size and Rj is a set of random numbers. The initial value of the correction step S may be 0.2 at the first cycle, and the correction step S may gradually decrease as the number of cycles increases; for example, the correction step may be updated as S = S×0.8 after each cycle.
Illustratively, Rj is a set of randomly generated random numbers conforming to a standard normal distribution, the number of random numbers in Rj being the same as the number of stylized parameters in the set of stylized parameters described above. For example, when the set of stylized parameters includes eight stylized parameters, Rj may be a randomly generated array containing eight random numbers.
For example, based on a user's selection among the plurality of candidate style visual data presented on the interactive interface, a set of predictive stylized parameters associated with the style of interest to the user may be determined as Xi. Adding a correction term S×Rj to the set of predictive stylized parameters Xi generates a set of corrected predictive stylized parameters Xj, i.e., Xj = Xi + S×Rj. Assuming that the number of candidate style visual data presented on the interactive interface is nine, nine sets of random numbers may be generated at this point to obtain nine sets of corrected predictive stylized parameters.
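The construction just described — Xj = Xi + S×Rj, with S initialized to 0.2 and decayed by a factor of 0.8 after each cycle, and Rj a standard-normal vector with one entry per stylized parameter — might be sketched as follows; all names are illustrative.

```python
import random


def corrected_parameter_sets(xi, step, n_sets=9):
    """Produce n_sets corrected sets, one per candidate on the interface."""
    sets = []
    for _ in range(n_sets):
        # Rj: one standard-normal random number per stylized parameter.
        rj = [random.gauss(0.0, 1.0) for _ in xi]
        # Xj = Xi + S * Rj, element-wise.
        sets.append([x + step * r for x, r in zip(xi, rj)])
    return sets


xi = [0.5] * 8          # a selected set of eight stylized parameters
step = 0.2              # initial correction step at the first cycle
candidates = corrected_parameter_sets(xi, step)
step *= 0.8             # step decays after each cycle
```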
According to the embodiment of the disclosure, by constructing the correction term as the product of the correction step size and a set of random numbers, the corrected predictive stylized parameters obtained by adding the correction term in each cycle can gradually approximate the style of interest to the user, so that candidate style visual data including the style of interest to the user is finally generated.
According to another aspect of the present disclosure, there is also provided a visual data processing apparatus.
Fig. 5 is a schematic block diagram illustrating a visual data processing apparatus 500 according to an exemplary embodiment.
As shown in fig. 5, the visual data processing apparatus 500 includes: a visual data acquisition module 510 configured to acquire visual data to be processed; a visual data stylization module 520 configured to stylize the visual data to be processed to obtain at least one base style visual data having at least one base style, each base style visual data having a set of stylized parameters to characterize a corresponding base style; a visual data style adjustment module 530 configured to perform style adjustment on the at least one base style visual data to generate a plurality of candidate style visual data, wherein each candidate style visual data has a corresponding candidate style; a first visual data presentation module 540 configured to present the plurality of candidate style visual data on an interactive interface; a stylized parameter determination module 550 configured to determine at least one set of predictive stylized parameters associated with a style of interest to the user based on the user's selection of the plurality of candidate style visual data presented on the interactive interface; and a first loop execution module 560 configured to execute at least one cycle comprising the following operations to obtain visual data conforming to the style of interest to the user, the first loop execution module 560 comprising: a correction term addition module 561 configured to add a correction term to the at least one set of predictive stylized parameters to generate a plurality of sets of modified predictive stylized parameters; a candidate style visual data generation module 562 configured to generate a plurality of modified candidate style visual data based on the plurality of sets of modified predictive stylized parameters; a second visual data presentation module 563 configured to present the plurality of modified candidate style visual data on the interactive interface for current selection by the user; an indication module 564 configured to determine whether the user indicates to end the interactive operation after making the current selection; an obtaining module 565 configured to, in response to determining that the user indicates to end the interactive operation after making the current selection, obtain visual data that conforms to the style of interest to the user based on the user's current selection; and a second loop execution module 566 configured to, in response to determining that the user does not indicate to end the interactive operation after making the current selection, obtain at least one set of predictive stylized parameters, determined based on the user's current selection, among the plurality of sets of modified predictive stylized parameters, and execute the next cycle.
According to the embodiment of the disclosure, the limited amount of base style visual data generated by stylizing the visual data to be processed is subjected to style adjustment to generate a plurality of candidate style visual data for display on the interactive interface, from which the user can select according to his or her own interests. After each selection by the user, a cyclic operation is executed that corrects the candidate style visual data selected by the user, until the user selects visual data conforming to the style of interest to the user. In this way, style adjustment of a limited number of base style visual data can yield a virtually unlimited number of candidate style visual data, and the style visual data of interest to the user is identified through interactive operation between the user and the interactive interface: each selection indicates what the user is interested in, and at the same time a correction term is added to the at least one set of predictive stylized parameters associated with the style of interest to the user. The resulting style visual data thus better matches the user's own requirements and satisfies the user's personalized customization needs, while the user, by merely making selections through the interactive operation, obtains style visual data matching his or her interest, so that the operation process of visual data processing is kept simple.
It should be appreciated that the various modules of the apparatus 500 shown in fig. 5 may correspond to the various steps in the method 200 described with reference to fig. 2. Thus, the operations, features and advantages described above with respect to method 200 are equally applicable to apparatus 500 and the modules comprising it. For brevity, certain operations, features and advantages are not described in detail herein.
Although specific functions are discussed above with reference to specific modules, it should be noted that the functions of the various modules discussed herein may be divided into multiple modules and/or at least some of the functions of the multiple modules may be combined into a single module. The particular module performing the actions discussed herein includes the particular module itself performing the actions, or alternatively the particular module invoking or otherwise accessing another component or module that performs the actions (or performs the actions in conjunction with the particular module). Thus, a particular module that performs an action may include that particular module itself that performs the action and/or another module that the particular module invokes or otherwise accesses that performs the action.
It should also be appreciated that various techniques may be described herein in the general context of software or program modules. The various modules described above with respect to fig. 5 may be implemented in hardware or in hardware in combination with software and/or firmware. For example, the modules may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer-readable storage medium. Alternatively, these modules may be implemented as hardware logic/circuitry. For example, in some embodiments, one or more of the visual data acquisition module 510, the visual data stylization module 520, the visual data style adjustment module 530, the first visual data presentation module 540, the stylization parameter determination module 550, the first loop execution module 560, the correction term addition module 561, the candidate style visual data generation module 562, the second visual data presentation module 563, the indication module 564, the obtaining module 565, and the second loop execution module 566 may be implemented together in a System on Chip (SoC), as shown in fig. 5. The SoC may include an integrated circuit chip including one or more components of a processor (e.g., a Central Processing Unit (CPU), microcontroller, microprocessor, Digital Signal Processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry, and may optionally execute received program code and/or include embedded firmware to perform functions.
According to an aspect of the present disclosure, a computer device is provided that includes a memory, a processor, and a computer program stored on the memory. The processor is configured to execute a computer program to implement the steps of any of the method embodiments described above.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the method embodiments described above.
An illustrative example of such a computer device and computer readable storage medium is described below in connection with FIG. 6.
Fig. 6 illustrates an example configuration of a computer device 600 that may be used to implement the methods described herein. For example, the server 120 and/or client device 110 shown in fig. 1 may include an architecture similar to that of the computer device 600. The visual data processing apparatus described above may also be implemented, in whole or at least in part, by computer device 600 or a similar device or system.
The computer device 600 may be a variety of different types of devices. Examples of computer device 600 include, but are not limited to: a desktop, server, notebook, or netbook computer, a mobile device (e.g., a tablet, a cellular or other wireless telephone (e.g., a smart phone), a notepad computer, a mobile station), a wearable device (e.g., glasses, a watch), an entertainment appliance (e.g., a set-top box communicatively coupled to a display device, a gaming machine), a television or other display device, an automotive computer, and so forth.
Computer device 600 may include at least one processor 602, memory 604, communication interface(s) 606, display device 608, other input/output (I/O) devices 610, and one or more mass storage devices 612, capable of communicating with each other, such as via a system bus 614 or other suitable connection.
The processor 602 may be a single processing unit or multiple processing units, all of which may include a single or multiple computing units or multiple cores. The processor 602 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. The processor 602 may be configured to, among other capabilities, obtain and execute computer-readable instructions stored in the memory 604, mass storage device 612, or other computer-readable medium, such as program code for the operating system 616, program code for the application programs 618, program code for the other programs 620, and so forth.
Memory 604 and mass storage device 612 are examples of computer-readable storage media for storing instructions that are executed by processor 602 to implement the various functions as previously described. For example, memory 604 may generally include both volatile memory and nonvolatile memory (e.g., RAM, ROM, etc.). In addition, mass storage device 612 may generally include hard disk drives, solid state drives, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CD, DVD), storage arrays, network attached storage, storage area networks, and the like. Memory 604 and mass storage device 612 may both be referred to herein collectively as memory or computer-readable storage media, and may be non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code that may be executed by processor 602 as a particular machine configured to implement the operations and functions described in the examples herein.
A number of programs may be stored on the mass storage device 612. These programs include an operating system 616, one or more application programs 618, other programs 620, and program data 622, and may be loaded into the memory 604 for execution. Examples of such application programs or program modules may include, for example, computer program logic (e.g., computer program code or instructions) for implementing the client application 112, the method 200, and/or additional embodiments described herein.
Although illustrated in fig. 6 as being stored in memory 604 of computer device 600, modules 616, 618, 620, and 622, or portions thereof, may be implemented using any form of computer-readable media accessible by computer device 600. As used herein, "computer-readable medium" includes at least two types of computer-readable media, namely computer-readable storage media and communication media.
Computer-readable storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer-readable storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information for access by a computer device. In contrast, communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism. Computer-readable storage media as defined herein do not include communication media.
One or more communication interfaces 606 are used to exchange data with other devices, such as via a network, direct connection, or the like. Such communication interfaces may be one or more of the following: any type of network interface (e.g., a Network Interface Card (NIC)), a wired or wireless (such as IEEE 802.11 Wireless LAN (WLAN)) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth™ interface, a Near Field Communication (NFC) interface, etc. Communication interface 606 may facilitate communication within a variety of network and protocol types, including wired networks (e.g., LAN, cable, etc.) and wireless networks (e.g., WLAN, cellular, satellite, etc.), the Internet, and so forth. The communication interface 606 may also provide for communication with external storage devices (not shown), such as in a storage array, network attached storage, storage area network, or the like.
In some examples, a display device 608, such as a monitor, may be included for displaying information and visual data to a user. Other I/O devices 610 may be devices that receive various inputs from a user and provide various outputs to the user, and may include touch input devices, gesture input devices, cameras, keyboards, remote controls, mice, printers, audio input/output devices, and so on.
The techniques described herein may be supported by these various configurations of computer device 600 and are not limited to the specific examples of techniques described herein. For example, this functionality may also be implemented in whole or in part on a "cloud" using a distributed system. The cloud includes and/or represents a platform for the resource. The platform abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud. Resources may include applications and/or data that may be used when performing computing processes on servers remote from computer device 600. Resources may also include services provided over the internet and/or over subscriber networks such as cellular or Wi-Fi networks. The platform may abstract resources and functions to connect the computer device 600 with other computer devices. Thus, implementations of the functionality described herein may be distributed throughout the cloud. For example, the functionality may be implemented in part on computer device 600 and in part by a platform that abstracts the functionality of the cloud.
While the disclosure has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative and schematic and not restrictive; the present disclosure is not limited to the disclosed embodiments. Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed subject matter, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps than those listed, the indefinite article "a" or "an" does not exclude a plurality, the term "plurality" means two or more, and the term "based on" is to be interpreted as "based at least in part on". The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims (9)

1. A method of visual data processing comprising:
acquiring visual data to be processed;
stylizing the visual data to be processed to obtain at least one base style visual data having at least one base style, each base style visual data having a set of stylized parameters to characterize a corresponding base style;
performing style adjustment on the at least one base style visual data to generate a plurality of candidate style visual data, wherein each candidate style visual data has a corresponding candidate style;
displaying the plurality of candidate style visual data on an interactive interface;
determining at least one set of predictive stylized parameters associated with a style of interest to a user based on user selection of the plurality of candidate style visual data presented on the interactive interface; and
performing at least one cycle comprising the following operations to obtain visual data conforming to a style of interest to the user:
adding correction terms to the at least one set of predictive stylized parameters to generate a plurality of sets of corrected predictive stylized parameters;
generating a plurality of corrected candidate style visual data based on the plurality of sets of corrected predictive stylized parameters;
displaying the plurality of corrected candidate style visual data on the interactive interface for current selection by the user;
determining whether the user indicates to end the interactive operation after making the current selection;
in response to determining that the user indicates to end the interactive operation after making the current selection, obtaining visual data conforming to a style of interest to the user based on the current selection of the user;
in response to determining that the user does not indicate to end the interactive operation after making the current selection, obtaining at least one set of predictive stylized parameters determined based on the current selection of the user from among the plurality of sets of corrected predictive stylized parameters, and performing the next cycle.
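As an illustrative sketch only (the function names, the uniform noise model, and the mocked rendering and selection callbacks are assumptions, not taken from the claims), the cycle of claim 1 can be read as a user-in-the-loop search: perturb the predicted parameters, render and display candidates, and repeat from the user's pick until the user ends the interaction:

```python
import random

def refine_style(base_params, render, ask_user, num_candidates=4, step=0.2, decay=0.5):
    """Hypothetical sketch of the claim-1 cycle.

    render(params)   -> visual data for one parameter set (assumed callback)
    ask_user(images) -> (index of chosen candidate, end-of-interaction flag)
    """
    params = list(base_params)
    while True:
        # Add a correction term (step size times fresh random numbers) to the
        # predicted parameters to form several corrected parameter sets.
        candidates = [
            [p + step * random.uniform(-1.0, 1.0) for p in params]
            for _ in range(num_candidates)
        ]
        # Render the corrected candidate style visual data and display them.
        images = [render(c) for c in candidates]
        choice, done = ask_user(images)
        params = candidates[choice]
        if done:
            # Output visual data conforming to the user's current selection.
            return render(params), params
        step *= decay  # optionally shrink the search radius each cycle (cf. claim 6)
```

The decaying `step` is one plausible reading of how successive cycles converge on the style of interest; the claims themselves only require that the parameters be adjusted based on the current selection.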
2. The method of claim 1, wherein the stylizing the visual data to be processed to obtain at least one base style visual data having at least one base style comprises:
the visual data to be processed is stylized by a static processing module for providing the at least one base style, so as to obtain the at least one base style visual data with the at least one base style, wherein the static processing module comprises at least one pre-trained neural network model corresponding to the at least one base style respectively.
3. The method of claim 2, wherein the at least one cycle is performed by a dynamic processing module coupled with the static processing module, the dynamic processing module adaptively adjusting the at least one set of predictive stylized parameters based on the current selection of the user, and
wherein the dynamic processing module is further configured to perform the step of stylistically adjusting the at least one base style visual data.
4. The method of any one of claims 1 to 3, wherein the stylized parameters include at least one base style parameter and additional style parameters, the additional style parameters including at least one of a brightness parameter, a contrast parameter, a detail enhancement strength parameter, a color saturation parameter, and a gamma correction parameter,
wherein the at least one base style parameter and the additional style parameter have respective weights that are determined based on the corresponding base style.
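Purely for illustration (every parameter name, value, and weight below is an assumption; the claims do not specify a concrete scheme), a per-style weighted parameter set as described in claim 4 might look like:

```python
def weighted_parameter_set(base_style):
    """Hypothetical stylized parameter set: each (value, weight) pair scales a
    base style parameter or additional style parameter, with weights chosen
    per base style. Names and numbers are illustrative only."""
    presets = {
        "oil_painting": {
            "base_style_strength": (0.8, 1.0),
            "brightness": (0.1, 0.5),
            "contrast": (0.2, 0.7),
            "detail_enhancement_strength": (0.6, 0.9),
            "color_saturation": (0.3, 0.6),
            "gamma_correction": (1.1, 0.4),
        },
    }
    # Effective parameter = raw value scaled by its style-dependent weight.
    return {name: value * weight for name, (value, weight) in presets[base_style].items()}
```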
5. The method of claim 4, wherein the stylistically adjusting the at least one base style visual data comprises performing at least one of:
changing the weights of the at least one base style parameter and the additional style parameter;
performing visual data mixing on the at least one base style visual data.
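A minimal sketch of the "visual data mixing" branch of claim 5, under the assumption that mixing is a per-pixel linear blend of two base style images (the claim does not commit to a particular mixing operation, and images are represented here as nested lists of floats for simplicity):

```python
def blend_styles(img_a, img_b, alpha):
    """Blend two base-style images pixel by pixel.

    alpha is a hypothetical mixing weight in [0, 1]: 1.0 keeps style A,
    0.0 keeps style B, intermediate values produce a candidate style
    between the two base styles.
    """
    return [
        [alpha * pa + (1.0 - alpha) * pb for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]
```

Varying `alpha` (or, per the first branch of the claim, the weights of the stylized parameters themselves) is one way to spread a small set of base styles into the larger set of candidate styles shown on the interactive interface.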
6. The method of any one of claims 1 to 3, wherein the correction term comprises a product of a correction step size and a set of random numbers, wherein the correction step size decreases with each of the cycles, and the number of random numbers in the set of random numbers is the same as the number of stylized parameters in the set of stylized parameters, and
wherein the number of sets of the plurality of sets of corrected predictive stylized parameters is the same as the number of the plurality of candidate style visual data presented on the interactive interface.
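The structure of claim 6 can be sketched directly (a hypothetical reading: the noise distribution and step schedule below are assumptions, only the counts and the step-times-random-vector form come from the claim):

```python
import random

def corrected_parameter_sets(params, num_candidates, step):
    """One cycle's correction terms per claim 6: each corrected set adds
    step * (a fresh random vector) to the predicted stylized parameters.
    The random vector has one entry per stylized parameter, and one
    corrected set is produced per candidate shown on the interface."""
    return [
        [p + step * random.uniform(-1.0, 1.0) for p in params]
        for _ in range(num_candidates)
    ]

# Decreasing `step` on each successive cycle (e.g. halving it) shrinks the
# search radius around the user's current choice, so the displayed candidates
# cluster ever more tightly around the style of interest.
```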
7. A visual data processing apparatus comprising:
a visual data acquisition module configured to acquire visual data to be processed;
a visual data stylization module configured to stylize the visual data to be processed to obtain at least one base style visual data having at least one base style, each base style visual data having a set of stylization parameters to characterize a corresponding base style;
a visual data style adjustment module configured to stylistically adjust the at least one base style visual data to generate a plurality of candidate style visual data, wherein each of the candidate style visual data has a corresponding candidate style;
a first visual data display module configured to display the plurality of candidate style visual data on an interactive interface;
a stylized parameter determination module configured to determine at least one set of predictive stylized parameters associated with a style of interest to a user based on the user's selection of the plurality of candidate style visual data presented on the interactive interface; and
a first loop execution module configured to perform at least one loop including the following operations to obtain visual data that conforms to a style of interest of the user, wherein the first loop execution module includes:
a correction term adding module configured to add a correction term to the at least one set of predictive stylized parameters to generate a plurality of sets of corrected predictive stylized parameters;
a candidate style visual data generation module configured to generate a plurality of corrected candidate style visual data based on the plurality of sets of corrected predictive stylized parameters;
a second visual data display module configured to display the plurality of corrected candidate style visual data on the interactive interface for current selection by the user;
an indication module configured to determine whether the user indicates to end the interactive operation after making the current selection;
an obtaining module configured to, in response to determining that the user indicates to end the interactive operation after making the current selection, obtain visual data conforming to a style of interest of the user based on the current selection of the user; and
a second loop execution module configured to, in response to determining that the user does not indicate to end the interactive operation after making the current selection, obtain at least one set of predictive stylized parameters determined based on the current selection of the user from among the plurality of sets of corrected predictive stylized parameters and execute the next loop.
8. A computer device, comprising:
at least one processor; and
a memory on which a computer program is stored,
wherein the computer program, when executed by the at least one processor, causes the at least one processor to perform the method of any one of claims 1 to 6.
9. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the method of any one of claims 1 to 6.
CN202310758944.4A 2023-06-26 2023-06-26 Visual data processing method, visual data processing device, computer equipment and readable storage medium Active CN116501217B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310758944.4A CN116501217B (en) 2023-06-26 2023-06-26 Visual data processing method, visual data processing device, computer equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN116501217A true CN116501217A (en) 2023-07-28
CN116501217B CN116501217B (en) 2023-09-05

Family

ID=87328694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310758944.4A Active CN116501217B (en) 2023-06-26 2023-06-26 Visual data processing method, visual data processing device, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116501217B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150098646A1 (en) * 2013-10-07 2015-04-09 Adobe Systems Incorporated Learning user preferences for photo adjustments
US20180082407A1 (en) * 2016-09-22 2018-03-22 Apple Inc. Style transfer-based image content correction
CN111951345A (en) * 2020-08-10 2020-11-17 杭州趣维科技有限公司 GPU-based real-time image video oil painting stylization method
CN113610989A (en) * 2021-08-04 2021-11-05 北京百度网讯科技有限公司 Method and device for training style migration model and method and device for style migration
CN114897670A (en) * 2022-05-11 2022-08-12 咪咕文化科技有限公司 Stylized picture generation method, stylized picture generation device, stylized picture generation equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Min: "Application of Deep-Learning-Based Convolutional Neural Networks in Image Stylization", 计算机产品与流通 (Computer Products and Circulation), no. 04 *


Similar Documents

Publication Publication Date Title
CN102640103B (en) Method and apparatus for providing access to social content
US9659032B1 (en) Building a palette of colors from a plurality of colors based on human color preferences
KR102614263B1 (en) Interaction methods and apparatus, electronic devices and computer-readable storage media
CN109845250B (en) Method and system for sharing effect of image
CN111294663A (en) Bullet screen processing method and device, electronic equipment and computer readable storage medium
WO2016150386A1 (en) Interface processing method, apparatus, and system
CN111476708A (en) Model generation method, model acquisition method, model generation device, model acquisition device, model generation equipment and storage medium
CN110992256B (en) Image processing method, device, equipment and storage medium
US20140009470A1 (en) System, method, and computer program product for calculating settings for a device, utilizing one or more constraints
CN115917512A (en) Artificial intelligence request and suggestion card
US20220321974A1 (en) Display method and apparatus
US20210398336A1 (en) Image generation method, image generation apparatus, and image generation system
CN116320429B (en) Video encoding method, apparatus, computer device, and computer-readable storage medium
CN116501217B (en) Visual data processing method, visual data processing device, computer equipment and readable storage medium
CN115362438A (en) Searching and ordering modifiable videos in multimedia messaging applications
CN116962848A (en) Video generation method, device, terminal, storage medium and product
CN113076155A (en) Data processing method and device, electronic equipment and computer storage medium
CN114996484A (en) Data retrieval method and device, data processing method and device, equipment and medium
JP2017123103A (en) Terminal device, information processing method, and program
CN111135580B (en) Game character standby animation generation method and device
CN112527296A (en) User interface customizing method and device, electronic equipment and storage medium
CN116152403B (en) Image generation method and device, storage medium and electronic equipment
CN111080750A (en) Robot animation configuration method, device and system
CN109410128A (en) A kind of image processing method, device and electronic equipment
CN116246014B (en) Image generation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant