CN111539795A - Image processing method, image processing device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN111539795A
Authority
CN
China
Prior art keywords: target, dynamic, target object, virtual resource, dynamic template
Prior art date
Legal status
Pending
Application number
CN202010350853.3A
Other languages
Chinese (zh)
Inventor
顾天鹏
陈威海
邹培君
金晶
陈连格
范怡
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd
Priority to CN202010350853.3A
Publication of CN111539795A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation

Landscapes

  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Theoretical Computer Science (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image processing method and apparatus, an electronic device, and a computer-readable storage medium, belonging to the field of computer technology. The method comprises the following steps: receiving an image processing request for a target object, the request carrying an object identifier of the target object; determining category information of the target object according to the object identifier; determining a target dynamic template matching the category information of the target object; selecting, from the images corresponding to the target object, a target image matching the target dynamic template; obtaining a target dynamic virtual resource based on the combined data of the target image and the target dynamic template; and displaying the target dynamic virtual resource on a display interface corresponding to the target object. Because the target dynamic virtual resource obtained by this method matches the category information of the target object, it can raise the click-through rate of the target object to a certain extent, improving the target object's appeal and its competitiveness among objects of the same kind.

Description

Image processing method, image processing device, electronic equipment and computer readable storage medium
Technical Field
The embodiments of the present application relate to the field of computer technology, and in particular to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
In recent years, with the rapid development of computer technology, more and more merchants are no longer limited to the traditional offline operating model; online operation, with the Internet as its carrier, has become increasingly popular. Operating online requires publishing virtual resources so as to improve the competitiveness of a merchant's store among stores of the same kind.
In the related art, a merchant photographs the goods sold in the store to obtain a plurality of images and imports them into a virtual-resource production tool, obtaining a virtual resource in image-and-text form, which is then published. When a user clicks the virtual resource, the user jumps to the merchant's detail page, so the merchant's click-through rate can be improved.
However, this image processing method is a single, fixed process, and the resulting virtual resource exists only in image-and-text form. Such a virtual resource holds little attraction for users, so the merchant's click-through rate stays low, which affects the merchant's competitiveness among similar merchants.
Disclosure of Invention
The embodiments of the present application provide an image processing method and apparatus, an electronic device, and a computer-readable storage medium, which can solve the problems in the related art. The technical solution is as follows:
in one aspect, an embodiment of the present application provides an image processing method, where the method includes:
receiving an image processing request of a target object, wherein the image processing request carries an object identifier of the target object;
determining the category information of the target object according to the object identifier of the target object;
determining a target dynamic template matched with the category information of the target object;
selecting a target image matched with the target dynamic template from the images corresponding to the target object;
acquiring a target dynamic virtual resource based on the combined data of the target image and the target dynamic template;
and displaying the target dynamic virtual resource on a display interface corresponding to the target object.
In one possible implementation, acquiring a target dynamic virtual resource based on combined data of the target image and the target dynamic template includes:
the combined data of the target image and the target dynamic template is disassembled into pictures according to a frame rate;
recombining the disassembled pictures into a video according to the sequence;
converting the video into an initial dynamic virtual resource;
and in response to the size of the initial dynamic virtual resource not being larger than the target size, taking the initial dynamic virtual resource as the target dynamic virtual resource.
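The four steps of this implementation can be sketched as a minimal pure-Python model. This is an illustration, not the patent's implementation: the `Frame` type, the fixed per-frame size, and the `TARGET_SIZE_KB` limit are all assumptions introduced here.

```python
from dataclasses import dataclass

TARGET_SIZE_KB = 500.0  # assumed upper bound for the dynamic virtual resource

@dataclass
class Frame:
    index: int         # position of this picture in the combined data
    payload_kb: float  # approximate encoded size of this picture

def disassemble(combined_duration_s: float, frame_rate: int) -> list[Frame]:
    """Step 1: split the combined template+image data into pictures at a frame rate."""
    total = int(combined_duration_s * frame_rate)
    return [Frame(index=i, payload_kb=12.0) for i in range(total)]

def assemble_video(frames: list[Frame]) -> list[Frame]:
    """Step 2: recombine the disassembled pictures into a video, in order."""
    return sorted(frames, key=lambda f: f.index)

def to_resource(video: list[Frame]) -> float:
    """Step 3: convert the video into an initial dynamic resource; return its size."""
    return sum(f.payload_kb for f in video)

frames = disassemble(combined_duration_s=3.0, frame_rate=10)
size_kb = to_resource(assemble_video(frames))
# Step 4: accept the initial resource as the target resource only if it is
# not larger than the target size.
is_target = size_kb <= TARGET_SIZE_KB
```

In practice the disassembly and reassembly would be done by a media pipeline; the sketch only fixes the control flow of the four steps.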
In one possible implementation, after converting the video into an initial dynamic virtual resource, the method further includes:
and responding to the fact that the size of the initial dynamic virtual resource is larger than the target size, adjusting the initial dynamic virtual resource to obtain the target dynamic virtual resource of which the size is smaller than the target size.
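The adjustment step can be sketched as an iterative reduction loop. The 20% reduction per pass and the name `shrink_to_target` are assumptions for illustration, standing in for lowering the resolution, frame rate, or quality of the resource.

```python
def shrink_to_target(size_kb: float, target_kb: float, ratio: float = 0.8) -> float:
    """Repeatedly reduce the resource until its size is smaller than the
    target size, as the adjustment step above requires."""
    while size_kb >= target_kb:
        size_kb *= ratio  # each adjustment pass removes ~20% (assumed)
    return size_kb

adjusted = shrink_to_target(size_kb=900.0, target_kb=500.0)
```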
In one possible implementation, determining a target dynamic template matching the category information of the target object includes:
displaying the selectable dynamic template;
and determining a target dynamic template matched with the category information of the target object in the selectable dynamic templates.
In one possible implementation, presenting an optional dynamic template includes:
determining a theme matched with the category information of the target object, and acquiring a dynamic template corresponding to the theme from a dynamic template library;
and displaying the dynamic template corresponding to the theme as an optional dynamic template.
In a possible implementation manner, before obtaining the dynamic template corresponding to the theme from the dynamic template library, the method further includes:
obtaining at least one dynamic template and a theme corresponding to each dynamic template;
classifying the at least one dynamic template based on the theme corresponding to each dynamic template;
and establishing the dynamic template library according to the classified dynamic templates.
In another aspect, an embodiment of the present application provides an image processing apparatus, including:
the receiving module is used for receiving an image processing request of a target object, wherein the image processing request carries an object identifier of the target object;
the first determining module is used for determining the category information of the target object according to the object identifier of the target object;
the second determination module is used for determining a target dynamic template matched with the category information of the target object;
the selection module is used for selecting a target image matched with the target dynamic template from the images corresponding to the target object;
the acquisition module is used for acquiring target dynamic virtual resources based on the combined data of the target image and the target dynamic template;
and the display module is used for displaying the target dynamic virtual resource on a display interface corresponding to the target object.
In a possible implementation manner, the obtaining module is configured to disassemble the combined data of the target image and the target dynamic template into pictures according to a frame rate; recombining the disassembled pictures into a video according to the sequence; converting the video into an initial dynamic virtual resource; and in response to the size of the initial dynamic virtual resource not being larger than the target size, taking the initial dynamic virtual resource as the target dynamic virtual resource.
In one possible implementation, the apparatus further includes:
and the adjusting module is used for adjusting the initial dynamic virtual resource to obtain the target dynamic virtual resource with the size smaller than the target size in response to the fact that the size of the initial dynamic virtual resource is larger than the target size.
In a possible implementation manner, the second determining module is configured to display a selectable dynamic template; and determining a target dynamic template matched with the category information of the target object in the selectable dynamic templates.
In a possible implementation manner, the second determining module determines a theme matched with the category information of the target object, and acquires a dynamic template corresponding to the theme from a dynamic template library; and displaying the dynamic template corresponding to the theme as an optional dynamic template.
In a possible implementation manner, the obtaining module is further configured to obtain at least one dynamic template and a theme corresponding to each dynamic template;
the device also includes:
the classification module is used for classifying the at least one dynamic template based on the theme corresponding to each dynamic template;
and the establishing module is used for establishing the dynamic template library according to the classified dynamic templates.
In another aspect, an electronic device is provided, which includes a processor and a memory, where at least one program code is stored in the memory, and the at least one program code is loaded and executed by the processor to implement any of the above-mentioned image processing methods.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, and the at least one program code is loaded and executed by a processor to implement any of the above-mentioned image processing methods.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
the technical scheme provided by the embodiment of the application obtains the target dynamic virtual resource based on the target image and the target dynamic template of the target object. Because the target image and the target dynamic template have strong correlation, the obtained target dynamic virtual resource is a dynamic virtual resource matched with the target object. The content of the dynamic virtual resource is more vivid and rich, and the attraction of the target object can be improved to a certain extent, so that the click rate of the target object is improved, and the competitiveness of the target object in the similar object is improved.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of an image processing method according to an embodiment of the present application;
fig. 2 is a flowchart of an image processing method provided in an embodiment of the present application;
FIG. 3 is a schematic illustration of a display interface of an alternative dynamic template provided in an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating determination of a target dynamic template according to an embodiment of the present disclosure;
fig. 5 is a schematic illustration of a display interface corresponding to a target object according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment of an image processing method provided in an embodiment of the present application, and referring to fig. 1, the implementation environment includes: an electronic device 101.
The electronic device 101 may be at least one of a smartphone, a game console, a desktop computer, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and a laptop computer. The electronic device 101 may receive an image processing request for a target object and determine the category information of the target object based on the object identifier carried in the request. The electronic device 101 may determine a target dynamic template matching the category information of the target object, and select, from the images corresponding to the target object, a target image matching that template. The electronic device 101 may also obtain the target dynamic virtual resource based on the combined data of the target image and the target dynamic template, and display it on a display interface corresponding to the target object.
The electronic device 101 may be generally referred to as one of a plurality of electronic devices, and the embodiment is only illustrated by the electronic device 101. Those skilled in the art will appreciate that the number of electronic devices 101 described above may be greater or fewer. For example, the number of the electronic devices 101 may be only one, or the number of the electronic devices 101 may be tens or hundreds, or more, and the number of the electronic devices and the device types are not limited in the embodiment of the present application.
Based on the above implementation environment, the embodiment of the present application provides an image processing method, which can be executed by the electronic device 101 in fig. 1, taking the flowchart of the image processing method provided by the embodiment of the present application as an example shown in fig. 2. As shown in fig. 2, the method comprises the steps of:
in step 201, an image processing request of a target object is received, where the image processing request carries an object identifier of the target object.
In the embodiments of the present application, an application client supporting image processing is installed and runs on the electronic device; the type of the application client is not limited in the embodiments of the present application.
In one possible implementation, a user of the target object (for example, a merchant) may click an image processing button in the application client; in response to the click operation, an image processing request is generated. This is the image processing request for the target object, that is, the image processing request that the electronic device receives for the target object. The request carries an object identifier of the target object, which may be an object number corresponding to the target object or the object name of the target object.
Illustratively, the target object may be any object with image processing requirements. For example, recommendation-class applications present recommendation information on a recommendation page, and the target object may be a store, a product, or the like to be recommended on that page. For instance, if the target object is a target store, the merchant of the target store clicks the image processing button in the application client to generate an image processing request for the store, which may carry the store's number or the store's name.
In step 202, the category information of the target object is determined according to the object identifier of the target object.
In an exemplary embodiment of the present application, a correspondence between object identifiers and category information is recorded in the electronic device. The electronic device parses the image processing request of the target object to obtain the object identifier carried in the request, and determines the category information of the target object based on that identifier.
In a possible implementation manner, taking the object identifier of the target object as an object number as an example, the correspondence between the object number and the category information recorded in the electronic device is shown in table 1 below:
TABLE 1

Object number | Category information
0001-0050     | First-class object
0051-0100     | Second-class object
0101-0200     | Third-class object
As can be seen from Table 1, objects numbered 0001-0050 correspond to the first-class object category, objects numbered 0051-0100 correspond to the second-class object category, and objects numbered 0101-0200 correspond to the third-class object category.
For example, the image processing request of the target object carries the object number of the target object. Parsing the request yields the object number, here 0020; based on this number, the category information of the target object is determined to be the first-class object.
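The range lookup implied by Table 1 can be sketched as follows; `CATEGORY_RANGES` and `category_for` are hypothetical names introduced for illustration.

```python
# Range lookup corresponding to Table 1 in the description.
CATEGORY_RANGES = [
    (1, 50, "first-class object"),
    (51, 100, "second-class object"),
    (101, 200, "third-class object"),
]

def category_for(object_number: str) -> str:
    """Map an object number such as '0020' to its category information."""
    n = int(object_number)
    for low, high, category in CATEGORY_RANGES:
        if low <= n <= high:
            return category
    raise ValueError(f"no category registered for object number {object_number!r}")
```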
In step 203, a target dynamic template matching the category information of the target object is determined.
In an exemplary embodiment of the present application, determining a target dynamic template matching the category information of the target object may include the following steps:
step 2031, displaying the optional dynamic template.
In one possible implementation, presenting the selectable dynamic template includes the following steps:
step one, determining a theme matched with the category information of the target object, and acquiring a dynamic template corresponding to the theme from a dynamic template library.
In a possible implementation manner, the dynamic template library includes dynamic templates of various topics, the dynamic template of each topic may include a plurality of dynamic templates, and the process of establishing the dynamic template library may be as follows:
step 1, obtaining at least one dynamic template and a theme corresponding to each dynamic template.
In one possible implementation, the electronic device stores a plurality of dynamic templates and the theme corresponding to each dynamic template, and obtains at least one dynamic template and its corresponding theme from its own storage space. Illustratively, a dynamic template stored in the electronic device may be an animation template entered by designers according to category and theme; the template editing page supports inputting pictures, text, and images, and supports path animations and preset animations. A video file can also be imported with one click, and the animation can be exported as a JSON (JavaScript Object Notation) file; after the JSON file is imported into the dynamic template library, it can be rendered in a page using Lottie (an animation rendering library).
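The one-click JSON export and import described above might look roughly like the sketch below. Real Lottie files use a much richer schema (layers, keyframes, assets), so every field name here is an illustrative assumption, not the actual format.

```python
import json

# Heavily simplified model of exporting a designed template as JSON and
# reading it back for the template library.
template = {
    "theme": "second-class theme",
    "frame_rate": 30,
    "layers": [
        {"type": "image", "slot": "product_photo"},       # filled from merchant images
        {"type": "text", "slot": "store_name"},
        {"type": "path_animation", "preset": "slide_in"}, # a preset animation
    ],
}

exported = json.dumps(template)  # one-click export as a JSON string/file
loaded = json.loads(exported)    # what the template library would import
```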
For example, the electronic device stores eight dynamic templates, dynamic template one through dynamic template eight. Dynamic templates one and eight correspond to the first-class theme; dynamic templates two, four, and five correspond to the second-class theme; and dynamic templates three, six, and seven correspond to the third-class theme.
And 2, classifying at least one dynamic template based on the theme corresponding to each dynamic template.
In a possible implementation manner, the at least one dynamic template is classified based on the at least one dynamic template obtained in the step 1 and the theme corresponding to each dynamic template, so as to obtain the classified dynamic templates.
Continuing the example, the eight dynamic templates obtained in step 1 are classified according to their corresponding themes, yielding dynamic templates under three classes of theme: the first-class theme, the second-class theme, and the third-class theme. The first-class theme comprises dynamic templates one and eight, the second-class theme comprises dynamic templates two, four, and five, and the third-class theme comprises dynamic templates three, six, and seven.
It should be noted that the dynamic template included in the first-class theme is a dynamic template matched with the category information of the first-class object, the dynamic template included in the second-class theme is a dynamic template matched with the category information of the second-class object, and the dynamic template included in the third-class theme is a dynamic template matched with the category information of the third-class object.
And 3, establishing a dynamic template library according to the classified dynamic templates.
In a possible implementation manner, based on the classified dynamic templates obtained in step 2, a dynamic template library may be established, and the dynamic template library may store the corresponding relationship among the dynamic templates, the topics, and the category information in a table form. The target dynamic template matched with the category information of the target object can be determined more conveniently and rapidly by establishing the dynamic template library. The correspondence of the dynamic template, the topic, and the category information may be as shown in table 2 below.
TABLE 2

Theme              | Category information corresponding to theme | Dynamic templates
First-class theme  | First-class object                          | Dynamic templates one and eight
Second-class theme | Second-class object                         | Dynamic templates two, four and five
Third-class theme  | Third-class object                          | Dynamic templates three, six and seven
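Steps 1 to 3 of building the dynamic template library can be sketched as a grouping of templates by theme. The template and theme labels mirror the example in the description; the helper name `build_library` is hypothetical.

```python
from collections import defaultdict

# Classify templates by their themes and store the result as the library.
templates = {
    "template 1": "theme 1", "template 2": "theme 2",
    "template 3": "theme 3", "template 4": "theme 2",
    "template 5": "theme 2", "template 6": "theme 3",
    "template 7": "theme 3", "template 8": "theme 1",
}

def build_library(template_themes: dict[str, str]) -> dict[str, list[str]]:
    """Group templates under their themes, as the table above does."""
    library: dict[str, list[str]] = defaultdict(list)
    for template, theme in template_themes.items():
        library[theme].append(template)
    return dict(library)

library = build_library(templates)
```

A real library would also record the category information each theme matches, so that a lookup by category can return the selectable templates directly.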
In one possible implementation, the storage space of the electronic device may be further divided into a target number of first storage spaces, each used to store the dynamic templates of one class of theme. For example, the storage space is divided into three first storage spaces: the first stores the dynamic templates of the first-class theme (dynamic templates one and eight), the second stores those of the second-class theme (dynamic templates two, four, and five), and the third stores those of the third-class theme (dynamic templates three, six, and seven).
The target number of first storage spaces may be set based on experience or adjusted according to the number of themes; the value of the target number is not limited in the embodiments of the present application.
In the embodiments of the present application, after the dynamic template library is established, not all of the dynamic templates in the library match the category information of the target object. The dynamic templates matching that category information therefore need to be identified in the library and displayed as the selectable dynamic templates.
In a possible implementation manner, the electronic device determines a topic matched with the category information based on the category information of the target object, and then obtains a dynamic template corresponding to the topic from a dynamic template library based on the topic.
For example, the category information of the target object is a second-class object, the topic corresponding to the second-class object is a second-class topic, and the dynamic templates included in the second-class topic, that is, the dynamic template two, the dynamic template four, and the dynamic template five, are determined in the dynamic template library.
And step two, displaying the dynamic template corresponding to the theme as a selectable dynamic template.
In a possible implementation manner, the dynamic template obtained in the step one is determined as a selectable dynamic template, that is, the dynamic template two, the dynamic template four, and the dynamic template five are determined as selectable dynamic templates, and the selectable dynamic template is displayed on the electronic device. Fig. 3 is a schematic view of a display interface of an optional dynamic template provided in an embodiment of the present application, and in fig. 3, a second dynamic template, a fourth dynamic template, and a fifth dynamic template are displayed on the display interface of the optional dynamic template.
Step 2032, determine the target dynamic template matching with the category information of the target object in the selectable dynamic templates.
In the embodiment of the present application, any one of the following implementations may be used to determine, from the presented selectable dynamic templates, a target dynamic template that matches the category information of the target object.
In a first implementation, in response to a selection operation among the selectable dynamic templates, the selected dynamic template is determined as the target dynamic template matching the category information of the target object.
In a possible implementation manner, the target merchant may view the displayed selectable dynamic templates, if there is a dynamic template that meets the requirements of the target merchant in the displayed selectable dynamic templates, the target merchant may click a selection button behind the dynamic template, and the electronic device determines, in response to a selection operation of the target merchant in the selectable dynamic templates, the selected dynamic template as the dynamic template that matches the category information of the target object. Fig. 4 is a schematic diagram illustrating determination of a target dynamic template according to an embodiment of the present application, where in fig. 4, in response to an operation that a target merchant selects a dynamic template four from selectable dynamic templates, the dynamic template four is determined as a target dynamic template that matches category information of a target object.
In a second implementation, the electronic device automatically determines a target dynamic template matching the category information of the target object from the selectable dynamic templates.
In one possible implementation, the electronic device may determine the target dynamic template matching the category information of the target object in any one of the following manners.
In the first implementation manner, the electronic device randomly determines one dynamic template from the selectable dynamic templates based on the selectable dynamic templates displayed in step 2031 as a target dynamic template matched with the category information of the target object.
For example, the selectable dynamic templates displayed in step 2031 include dynamic templates two, four, and five, and the electronic device may randomly determine any one of them as the target dynamic template matching the category information of the target object. It may pick dynamic template two, or equally dynamic template four or dynamic template five; the embodiments of the present application do not limit this.
In the second implementation manner, the electronic device calculates the matching degree between the selectable dynamic template and the category information of the target object, and determines the dynamic template with the matching degree meeting the target requirement as the target dynamic template matched with the category information of the target object.
In a possible implementation manner, the matching degree between a selectable dynamic template and the category information of the target object is calculated as follows: the electronic device obtains at least one historical dynamic virtual resource and trains an initial matching degree calculation model based on the dynamic templates of the historical dynamic virtual resources and the category information of the objects in those resources, thereby obtaining a target matching degree calculation model with high calculation precision. The electronic device may input a selectable dynamic template and the category information of the target object into the target matching degree calculation model, and obtain the matching degree between them based on the output result of the model.
The initial matching degree calculation model may be any type of neural network model, which is not limited in the embodiment of the present application. For example, the initial matching degree calculation model may be a propensity score matching (PSM) model.
In a possible implementation manner, the dynamic template whose matching degree meets the target requirement may be the dynamic template with the highest matching degree; the target requirement itself is not limited in the embodiment of the present application.
For example, the selectable dynamic templates are dynamic template two, dynamic template four, and dynamic template five, and the category information of the target object is a second-class object. Each selectable dynamic template and the category information of the target object are input into the target matching degree calculation model to obtain the matching degree between them. Suppose the matching degree between dynamic template two and the category information of the target object is 80%, that of dynamic template four is 85%, and that of dynamic template five is 95%. The dynamic template with the highest matching degree is then determined as the target dynamic template matching the category information of the target object, that is, dynamic template five.
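A minimal sketch of the second implementation manner, assuming the matching degrees have already been produced by the target matching degree calculation model (the scores below mirror the example above; the dictionary of scores is an illustrative assumption):

```python
def pick_best_template(matching_degrees):
    """Determine the dynamic template whose matching degree is the highest
    (the target requirement used in the example above)."""
    return max(matching_degrees, key=matching_degrees.get)

# Hypothetical matching degrees, as if output by the target matching
# degree calculation model.
degrees = {
    "dynamic template two": 0.80,
    "dynamic template four": 0.85,
    "dynamic template five": 0.95,
}
target = pick_best_template(degrees)
# → "dynamic template five"
```

If the target requirement were instead a threshold (e.g. all templates above 90%), only the selection rule inside `pick_best_template` would change.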
It should be noted that, in the first implementation manner and the second implementation manner, the electronic device automatically determines, in the selectable dynamic templates, the target dynamic template matched with the category information of the target object, and a manual operation of the target merchant is not required, so that a workload of the target merchant can be reduced.
It should be further noted that any one of the first implementation manner and the second implementation manner may be selected to determine the target dynamic template matched with the category information of the target object, which is not limited in the embodiment of the present application.
In step 204, a target image matching the target dynamic template is selected from the images corresponding to the target object.
In the embodiment of the present application, any one of the following implementation manners may be used to select a target image matching the target dynamic template from the images corresponding to the target object.
In the first implementation manner, a plurality of images corresponding to the target object are stored in the storage space of the electronic device, and the target image corresponding to the target dynamic template is selected in the storage space based on the target dynamic template determined in the step 203.
In a possible implementation manner, based on the target dynamic template determined in step 203, an image format of an image matched with the target dynamic template is determined, where the image format may include size information of the image and may also include other information of the image, and the information included in the image format is not limited in this embodiment of the application. And extracting an image with the format consistent with the image format in the storage space of the electronic equipment based on the image format of the image matched with the target dynamic template, and taking the extracted image as a target image matched with the target dynamic template.
For example, if the image format matching the target dynamic template requires that the file size of the image be less than 200 KB (kilobytes), images whose file size is less than 200 KB are extracted from the storage space of the electronic device, and the extracted images are determined as target images matching the target dynamic template.
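The size-based extraction in this example can be sketched as follows; the stored image list and the 200 KB limit are illustrative assumptions:

```python
def select_target_images(stored_images, max_size_bytes):
    """Extract the images whose file size satisfies the image format
    required by the target dynamic template (here: size below a limit)."""
    return [name for name, size in stored_images if size < max_size_bytes]

# Hypothetical images in the electronic device's storage space,
# as (name, file size in bytes) pairs.
stored = [("dish.jpg", 150_000), ("storefront.jpg", 250_000), ("menu.jpg", 180_000)]
targets = select_target_images(stored, max_size_bytes=200_000)
# → ["dish.jpg", "menu.jpg"]
```

If the image format also constrained other properties (for example, pixel dimensions), the filter predicate would simply gain additional conditions.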
And in the second implementation mode, the electronic equipment receives the image uploaded by the target merchant and determines the image uploaded by the target merchant as a target image matched with the target dynamic template.
In a possible implementation manner, the target merchant may perform image shooting on the target object in advance, upload the shot image to the electronic device, receive the image uploaded by the target merchant by the electronic device, and determine the image uploaded by the target merchant as a target image matched with the target dynamic template.
The number of the target images may be one or more, and the number of the target images is not limited in the embodiment of the present application.
It should be further noted that any implementation manner described above may be selected to select a target image matched with the target dynamic template from the images corresponding to the target object, which is not limited in the embodiment of the present application.
In step 205, a target dynamic virtual resource is obtained based on the combined data of the target image and the target dynamic template.
In this embodiment of the application, based on the target dynamic template determined in step 203 and the target image matched with the target dynamic template acquired in step 204, metadata of the target dynamic template and metadata of the target image are acquired, the metadata of the target dynamic template and the metadata of the target image are combined to obtain combined data of the target image and the target dynamic template, and a target dynamic virtual resource is acquired based on the combined data. The process of obtaining the target dynamic virtual resource based on the combined data comprises the following steps:
Step 2051: disassemble the combined data of the target image and the target dynamic template into pictures according to the frame rate.
In one possible implementation, the combined data is decomposed into a plurality of pictures according to the frame rate, based on the duration of the combined data and the number of frames included in each basic duration.
For example, if the duration of the combined data is 3 seconds, 1 second is taken as the basic duration, and each second includes 20 frames with each frame corresponding to one picture (that is, 20 pictures per second), then the combined data is disassembled into 60 pictures according to the frame rate.
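The frame arithmetic of step 2051 can be sketched as follows (3 s of combined data at 20 frames per 1 s basic duration yields 60 pictures); attaching a timestamp to each picture is an assumption made here for illustration:

```python
def disassemble_into_pictures(duration_s, frames_per_second):
    """Disassemble combined data into one picture per frame, tagging each
    picture with its timestamp (step 2051, frame arithmetic only)."""
    total_frames = duration_s * frames_per_second
    return [{"index": i, "time_s": i / frames_per_second} for i in range(total_frames)]

pictures = disassemble_into_pictures(duration_s=3, frames_per_second=20)
# len(pictures) → 60, matching the example above.
```

A real implementation would additionally decode the pixel data of each frame; the sketch only demonstrates how the picture count and timestamps follow from the duration and frame rate.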
Step 2052: recombine the disassembled pictures into a video in sequence.
In a possible implementation manner, the multiple pictures acquired in step 2051 are recombined into a video according to a time sequence of the pictures.
For example, based on the 60 pictures acquired in step 2051, the 60 pictures are recombined in time sequence according to the time of each picture, so as to obtain a video corresponding to the 60 pictures.
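A minimal sketch of the reordering in step 2052, assuming each disassembled picture carries a timestamp (the picture names and times below are hypothetical):

```python
def recombine_in_order(pictures):
    """Recombine the disassembled pictures into a video frame sequence,
    ordered by each picture's timestamp (step 2052)."""
    return [p["name"] for p in sorted(pictures, key=lambda p: p["time_s"])]

# Hypothetical pictures that arrived out of order.
pictures = [
    {"name": "frame_02.png", "time_s": 0.10},
    {"name": "frame_00.png", "time_s": 0.00},
    {"name": "frame_01.png", "time_s": 0.05},
]
sequence = recombine_in_order(pictures)
# → ["frame_00.png", "frame_01.png", "frame_02.png"]
```

Encoding the ordered frames into an actual video container is left out; only the time-ordered sequencing that the step describes is shown.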
Step 2053, convert the video to initial dynamic virtual resources.
In a possible implementation manner, the video obtained through the recombination in step 2052 is converted into an initial dynamic virtual resource, where the initial dynamic virtual resource is a virtual resource in the form of an animated image.
Step 2054: in response to the size of the initial dynamic virtual resource being not larger than the target size, take the initial dynamic virtual resource as the target dynamic virtual resource.
In a possible implementation manner, based on the initial dynamic virtual resource obtained in step 2053, the attribute information of the initial dynamic virtual resource is checked, where the attribute information includes information such as size information of the initial dynamic virtual resource. And acquiring the size information of the initial dynamic virtual resource from the attribute information of the initial dynamic virtual resource. And if the size of the initial dynamic virtual resource is not larger than the target size, taking the initial dynamic virtual resource as the target dynamic virtual resource.
For example, the size of the initial dynamic virtual resource obtained in step 2053 is 290 KB and the target size is 300 KB; since the size of the initial dynamic virtual resource is not larger than the target size, the initial dynamic virtual resource is taken as the target dynamic virtual resource.
Step 2055: in response to the size of the initial dynamic virtual resource being larger than the target size, adjust the initial dynamic virtual resource to obtain a target dynamic virtual resource whose size is smaller than the target size.
In a possible implementation manner, the size information of the initial dynamic virtual resource is obtained from the attribute information of the initial dynamic virtual resource, and in response to that the size of the initial dynamic virtual resource is larger than the target size, the initial dynamic virtual resource is adjusted, where the adjustment process includes, but is not limited to, compression. And obtaining the target dynamic virtual resource with the size smaller than the target size based on the adjusting process.
For example, the size of the initial dynamic virtual resource is 320 KB and the target size is 300 KB. Since the size of the initial dynamic virtual resource is larger than the target size, the initial dynamic virtual resource is adjusted, for example compressed, to obtain a target dynamic virtual resource whose size is smaller than the target size.
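Steps 2054 and 2055 can be sketched together as a size check followed by repeated compression; the fixed 80% compression ratio is an illustrative assumption, not part of the embodiment:

```python
def fit_to_target_size(size_bytes, target_bytes, compression_ratio=0.8):
    """Use the resource as-is if it is not larger than the target size
    (step 2054); otherwise compress until it fits (step 2055)."""
    while size_bytes > target_bytes:
        size_bytes = int(size_bytes * compression_ratio)  # simulated compression
    return size_bytes

fit_to_target_size(290_000, 300_000)  # → 290000 (already fits, step 2054)
fit_to_target_size(320_000, 300_000)  # → 256000 (one compression pass, step 2055)
```

In practice the compression would re-encode the animated image (e.g. reducing quality or resolution), and the achieved ratio would vary per pass; the loop structure is what the two steps jointly describe.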
In step 206, the target dynamic virtual resource is displayed on the display interface corresponding to the target object.
In this embodiment of the application, based on the target dynamic virtual resource obtained in step 205, the target dynamic virtual resource is displayed on the display interface corresponding to the target object. For example, still taking the case that the application program of the recommendation class displays some recommendation information on a recommendation page, the display interface corresponding to the target object may be the recommendation page, and then the target dynamic virtual resource of the target object is displayed at a position on the recommendation page corresponding to the target object.
In addition, the image of the target object may be an advertisement picture corresponding to the target merchant. Combining the advertisement picture with the dynamic template to obtain the target dynamic virtual resource allows the content of the advertisement picture to be displayed normally while adding a dynamic effect, making the content of the dynamic virtual resource more vivid, which can improve its attractiveness and enlarge the publicity effect.
In a possible implementation manner, a target merchant corresponding to a target object may view the target dynamic virtual resource of the target object on the display interface of the target object. If the target dynamic virtual resource meets the display requirement of the target merchant, the target merchant may click the release button in the interface, and the electronic device releases the target dynamic virtual resource in response to the operation of the target merchant clicking the release button. Fig. 5 is a schematic view of a display interface corresponding to a target object according to an embodiment of the present application. In fig. 5, the target dynamic virtual resource is displayed on the display interface corresponding to the target object, and a release button and a to-be-determined button are arranged below the target dynamic virtual resource.
If the target dynamic virtual resource does not meet the display requirement of the target merchant, the target merchant may click the to-be-determined button in the interface, and the electronic device acquires a new dynamic virtual resource in response to an operation of the target merchant clicking the to-be-determined button, where an acquisition process of the new dynamic virtual resource is consistent with an acquisition process of the target dynamic virtual resource, and details are not repeated here. After the electronic equipment acquires the new dynamic virtual resource, the new dynamic virtual resource is displayed on a display interface corresponding to the target object, and the target merchant views the new dynamic virtual resource. And if the new dynamic virtual resource meets the display requirement of the target merchant, the target merchant clicks the release button of the display interface, and the electronic equipment responds to the operation of clicking the release button by the target merchant to release the new dynamic virtual resource.
It should be noted that the duration for which the dynamic virtual resource is displayed may be determined based on the content of the dynamic virtual resource and the playing speed; the display may stop after the content has been shown, or the display may loop. The embodiment of the present application does not limit the display duration, which may be, for example, 3 seconds (s).
The method provided by the embodiment of the application can be applied to scenarios with recommendation requirements, such as merchants placing advertisements. The merchant uploads one or more material images, and through understanding of the material images a strongly themed dynamic virtual resource with a duration of 3 seconds (s) is automatically generated, such as a dynamic virtual resource with a theme like 'beauty guide-beauty category' or 'splendid moment-family photography/wedding photography category'. The dynamic virtual resources are displayed on the recommendation page as waterfall-flow card animations; the generated 3 s waterfall-flow card animation is used for advertisement placement and is played in a loop when an end user browses the information stream. After clicking, the user jumps to the advertisement display page, thereby improving the conversion rate.
According to the method, the target dynamic virtual resource is obtained based on the target image and the target dynamic template of the target object, and the target image and the target dynamic template have strong correlation, so that the obtained target dynamic virtual resource is a dynamic virtual resource matched with the target object. The content of the dynamic virtual resource is more vivid and rich, and the attraction of the target object can be improved to a certain extent, so that the click rate of the target object is improved, and the competitiveness of the target object in the similar object is improved.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application, and as shown in fig. 6, the apparatus includes:
a receiving module 601, configured to receive an image processing request of a target object, where the image processing request carries an object identifier of the target object;
a first determining module 602, configured to determine category information of the target object according to the object identifier of the target object;
a second determining module 603, configured to determine a target dynamic template matching the category information of the target object;
a selecting module 604, configured to select a target image matching the target dynamic template from the images corresponding to the target object;
an obtaining module 605, configured to obtain a target dynamic virtual resource based on the combined data of the target image and the target dynamic template;
and a display module 606, configured to display the target dynamic virtual resource on a display interface corresponding to the target object.
In a possible implementation manner, the obtaining module 605 is configured to disassemble the combined data of the target image and the target dynamic template into pictures according to a frame rate; recombining the disassembled pictures into a video according to the sequence; converting the video into an initial dynamic virtual resource; and in response to the size of the initial dynamic virtual resource not being larger than the target size, taking the initial dynamic virtual resource as the target dynamic virtual resource.
In one possible implementation, the apparatus further includes:
and the adjusting module is used for adjusting the initial dynamic virtual resource to obtain the target dynamic virtual resource with the size smaller than the target size in response to the fact that the size of the initial dynamic virtual resource is larger than the target size.
In a possible implementation manner, the second determining module 603 is configured to display an optional dynamic template; and determining a target dynamic template matched with the category information of the target object in the selectable dynamic templates.
In a possible implementation manner, the second determining module 603 is configured to determine a theme matching the category information of the target object, and obtain a dynamic template corresponding to the theme from a dynamic template library; and display the dynamic template corresponding to the theme as a selectable dynamic template.
In a possible implementation manner, the obtaining module 605 is further configured to obtain at least one dynamic template and a theme corresponding to each dynamic template;
the device also includes:
the classification module is used for classifying the at least one dynamic template based on the theme corresponding to each dynamic template;
and the establishing module is used for establishing the dynamic template library according to the classified dynamic templates.
The device acquires the target dynamic virtual resource based on the target image and the target dynamic template of the target object, and the target image and the target dynamic template have strong correlation, so that the acquired target dynamic virtual resource is a dynamic virtual resource matched with the target object. The content of the dynamic virtual resource is more vivid and rich, and the attraction of the target object can be improved to a certain extent, so that the click rate of the target object is improved, and the competitiveness of the target object in the similar object is improved.
It should be noted that: in the image processing apparatus provided in the above embodiment, only the division of the above functional modules is taken as an example when performing image processing, and in practical applications, the above functions may be distributed by different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. In addition, the image processing apparatus and the image processing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 700 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The electronic device 700 may also be referred to by other names such as user equipment, portable electronic device, laptop electronic device, desktop electronic device, and so on.
In general, the electronic device 700 includes: one or more processors 701 and one or more memories 702.
The processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 702 is used to store at least one program code for execution by the processor 701 to implement the image processing method provided by the method embodiments herein.
In some embodiments, the electronic device 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, display 705, camera 706, audio circuitry 707, positioning components 708, and power source 709.
The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 704 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 704 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 704 may communicate with other electronic devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, the display screen 705 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 701 as a control signal for processing. At this point, the display 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 705, providing the front panel of the electronic device 700; in other embodiments, there may be at least two displays 705, respectively disposed on different surfaces of the electronic device 700 or in a folding design; in still other embodiments, the display 705 may be a flexible display disposed on a curved surface or on a folded surface of the electronic device 700. The display 705 may even be arranged in a non-rectangular irregular pattern, that is, an irregularly shaped screen. The display 705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 706 is used to capture images or video. Optionally, camera assembly 706 includes a front camera and a rear camera. Generally, a front camera is disposed on a front panel of an electronic apparatus, and a rear camera is disposed on a rear surface of the electronic apparatus. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing or inputting the electric signals to the radio frequency circuit 704 to realize voice communication. For stereo capture or noise reduction purposes, the microphones may be multiple and disposed at different locations of the electronic device 700. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic location of the electronic device 700 to implement navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 709 is used to supply power to various components in the electronic device 700. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When power source 709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the electronic device 700. For example, the acceleration sensor 711 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 701 may control the display screen 705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the electronic device 700, and the gyro sensor 712 may cooperate with the acceleration sensor 711 to acquire a 3D motion of the user with respect to the electronic device 700. From the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 713 may be disposed on a side bezel of electronic device 700 and/or underlying display screen 705. When the pressure sensor 713 is disposed on a side frame of the electronic device 700, a user holding signal of the electronic device 700 may be detected, and the processor 701 may perform left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at a lower layer of the display screen 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 705. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 714 is used to collect a user's fingerprint, and the processor 701 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 itself identifies the identity of the user according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the electronic device 700. When a physical button or vendor logo is provided on the electronic device 700, the fingerprint sensor 714 may be integrated with the physical button or vendor logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the display screen 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the display screen 705 is increased; when the ambient light intensity is low, the display brightness of the display screen 705 is adjusted down. In another embodiment, processor 701 may also dynamically adjust the shooting parameters of camera assembly 706 based on the ambient light intensity collected by optical sensor 715.
The proximity sensor 716, also referred to as a distance sensor, is typically disposed on the front panel of the electronic device 700. The proximity sensor 716 is used to capture the distance between the user and the front of the electronic device 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front surface of the electronic device 700 is gradually decreasing, the processor 701 controls the display screen 705 to switch from the bright screen state to the dark screen state; when the proximity sensor 716 detects that the distance is gradually increasing, the processor 701 controls the display screen 705 to switch from the dark screen state to the bright screen state.
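The distance-driven screen switching described above amounts to a small piece of state logic. The sketch below models it in Python; all names and sample distances are hypothetical illustrations, since the patent does not specify an implementation.

```python
# Hypothetical sketch of the proximity-driven screen switching above.
# Function, state names, and distances are illustrative, not from the patent.

BRIGHT, DARK = "bright", "dark"

def update_screen_state(state, prev_distance, distance):
    """Switch between the bright and dark screen states based on whether
    the user-to-front-panel distance is shrinking or growing."""
    if distance < prev_distance:   # user approaching: darken the screen
        return DARK
    if distance > prev_distance:   # user moving away: light the screen
        return BRIGHT
    return state                   # distance unchanged: keep current state

state = BRIGHT
for prev, cur in [(30, 10), (10, 5), (5, 25)]:
    state = update_screen_state(state, prev, cur)
print(state)  # -> bright
```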
Those skilled in the art will appreciate that the configuration shown in FIG. 7 does not constitute a limitation of the electronic device 700, which may include more or fewer components than those shown, combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, there is also provided a computer-readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor of a computer device to implement any of the image processing methods described above.
Alternatively, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It should be understood that "a plurality" herein means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that both A and B exist, or that B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The serial numbers of the above embodiments of the present application are for description only and do not indicate the relative merits of the embodiments.
The above description is only exemplary of the present application and is not intended to limit the present application, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. An image processing method, characterized in that the method comprises:
receiving an image processing request of a target object, wherein the image processing request carries an object identifier of the target object;
determining the category information of the target object according to the object identifier of the target object;
determining a target dynamic template matched with the category information of the target object;
selecting a target image matched with the target dynamic template from the images corresponding to the target object;
acquiring a target dynamic virtual resource based on the combined data of the target image and the target dynamic template;
and displaying the target dynamic virtual resource on a display interface corresponding to the target object.
2. The method of claim 1, wherein the obtaining a target dynamic virtual resource based on combined data of the target image and the target dynamic template comprises:
decomposing the combined data of the target image and the target dynamic template into pictures according to a frame rate;
recombining the decomposed pictures into a video in order;
converting the video into an initial dynamic virtual resource;
and in response to the size of the initial dynamic virtual resource not being larger than the target size, taking the initial dynamic virtual resource as a target dynamic virtual resource.
3. The method of claim 2, wherein after the converting the video to the initial dynamic virtual resource, the method further comprises:
in response to the size of the initial dynamic virtual resource being larger than the target size, adjusting the initial dynamic virtual resource to obtain a target dynamic virtual resource whose size is smaller than the target size.
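Taken together, claims 2 and 3 describe a decompose-recompose-convert pipeline with a size check. The following Python sketch models those steps under stated assumptions: every function name and data shape is a made-up illustration (the patent prescribes no codec, container, or library), and resource size is mocked as a simple per-frame byte count.

```python
# Illustrative sketch of the claim-2/claim-3 pipeline. All names and the
# size model are hypothetical; the patent does not prescribe them.

def decompose(combined_data, frame_rate):
    """Split the combined template+image data into per-frame pictures."""
    duration = combined_data["duration"]  # seconds (assumed field)
    return [f"frame_{i}" for i in range(int(duration * frame_rate))]

def recompose(frames):
    """Reassemble the pictures, in order, into a video object."""
    return {"video": frames}

def to_dynamic_resource(video):
    """Convert the video into an initial dynamic virtual resource
    (e.g. a GIF); size is mocked as 1 KiB per frame."""
    return {"data": video, "size": len(video["video"]) * 1024}

def build_target_resource(combined_data, frame_rate, target_size):
    frames = decompose(combined_data, frame_rate)
    resource = to_dynamic_resource(recompose(frames))
    if resource["size"] <= target_size:  # claim 2: small enough as-is
        return resource
    # claim 3: otherwise adjust (here: drop every other frame) until it fits
    while resource["size"] > target_size and len(frames) > 1:
        frames = frames[::2]
        resource = to_dynamic_resource(recompose(frames))
    return resource

res = build_target_resource({"duration": 2}, frame_rate=24, target_size=30 * 1024)
print(res["size"] <= 30 * 1024)  # -> True
```

Frame dropping stands in for whatever adjustment (re-encoding, rescaling, etc.) a real implementation of claim 3 might use.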
4. The method according to any one of claims 1-3, wherein the determining a target dynamic template matching the category information of the target object comprises:
displaying the selectable dynamic template;
and determining a target dynamic template matched with the category information of the target object in the selectable dynamic templates.
5. The method of claim 4, wherein the displaying selectable dynamic templates comprises:
determining a theme matched with the category information of the target object, and acquiring a dynamic template corresponding to the theme from a dynamic template library;
and displaying the dynamic template corresponding to the theme as a selectable dynamic template.
6. The method of claim 5, wherein prior to obtaining the dynamic template corresponding to the topic from the dynamic template library, the method further comprises:
obtaining at least one dynamic template and a theme corresponding to each dynamic template;
classifying the at least one dynamic template based on the theme corresponding to each dynamic template;
and establishing the dynamic template library according to the classified dynamic templates.
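Claims 5 and 6 above describe building a theme-keyed dynamic template library and then fetching the selectable templates whose theme matches the target object's category information. A minimal Python sketch, with made-up template and theme names:

```python
# Hypothetical sketch of the claim-5/claim-6 template library.
# Template and theme names are invented for illustration only.

from collections import defaultdict

def build_template_library(templates):
    """Claim 6: classify dynamic templates by theme to form the library."""
    library = defaultdict(list)
    for template, theme in templates:
        library[theme].append(template)
    return dict(library)

def selectable_templates(library, category_theme):
    """Claims 4-5: fetch the templates whose theme matches the
    category information of the target object."""
    return library.get(category_theme, [])

library = build_template_library([
    ("sakura_fall", "spring"),
    ("snow_drift", "winter"),
    ("lantern_glow", "spring"),
])
print(selectable_templates(library, "spring"))  # -> ['sakura_fall', 'lantern_glow']
```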
7. An image processing apparatus, characterized in that the apparatus comprises:
a receiving module, configured to receive an image processing request of a target object, where the image processing request carries an object identifier of the target object;
the first determining module is used for determining the category information of the target object according to the object identifier of the target object;
the second determination module is used for determining a target dynamic template matched with the category information of the target object;
the selection module is used for selecting a target image matched with the target dynamic template from the images corresponding to the target object;
the acquisition module is used for acquiring target dynamic virtual resources based on the combined data of the target image and the target dynamic template;
and the display module is used for displaying the target dynamic virtual resource on a display interface corresponding to the target object.
8. The apparatus according to claim 7, wherein the obtaining module is configured to: decompose the combined data of the target image and the target dynamic template into pictures according to a frame rate; recombine the decomposed pictures into a video in order; convert the video into an initial dynamic virtual resource; and, in response to the size of the initial dynamic virtual resource not being larger than the target size, take the initial dynamic virtual resource as the target dynamic virtual resource.
9. An electronic device, comprising a processor and a memory, wherein at least one program code is stored in the memory, and wherein the at least one program code is loaded and executed by the processor to implement the image processing method according to any one of claims 1 to 6.
10. A computer-readable storage medium having stored therein at least one program code, the at least one program code being loaded and executed by a processor to implement the image processing method according to any one of claims 1 to 6.
CN202010350853.3A 2020-04-28 2020-04-28 Image processing method, image processing device, electronic equipment and computer readable storage medium Pending CN111539795A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010350853.3A CN111539795A (en) 2020-04-28 2020-04-28 Image processing method, image processing device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010350853.3A CN111539795A (en) 2020-04-28 2020-04-28 Image processing method, image processing device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111539795A true CN111539795A (en) 2020-08-14

Family

ID=71967903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010350853.3A Pending CN111539795A (en) 2020-04-28 2020-04-28 Image processing method, image processing device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111539795A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113761281A (en) * 2021-04-26 2021-12-07 腾讯科技(深圳)有限公司 Virtual resource processing method, device, medium and electronic equipment
CN113761281B (en) * 2021-04-26 2024-05-14 腾讯科技(深圳)有限公司 Virtual resource processing method, device, medium and electronic equipment
CN113778547A (en) * 2021-07-30 2021-12-10 北京达佳互联信息技术有限公司 Object processing method and device, computer program product and processor
CN113778547B (en) * 2021-07-30 2024-02-06 北京达佳互联信息技术有限公司 Object processing method, device, computer program product and processor
CN116150413A (en) * 2023-02-07 2023-05-23 北京达佳互联信息技术有限公司 Multimedia resource display method and device
CN116150413B (en) * 2023-02-07 2024-06-04 北京达佳互联信息技术有限公司 Multimedia resource display method and device

Similar Documents

Publication Publication Date Title
CN108270794B (en) Content distribution method, device and readable medium
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
CN110139143B (en) Virtual article display method, device, computer equipment and storage medium
WO2022134632A1 (en) Work processing method and apparatus
CN110769313B (en) Video processing method and device and storage medium
WO2022048398A1 (en) Multimedia data photographing method and terminal
CN111880888B (en) Preview cover generation method and device, electronic equipment and storage medium
CN112363660B (en) Method and device for determining cover image, electronic equipment and storage medium
CN111539795A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111031391A (en) Video dubbing method, device, server, terminal and storage medium
CN110837300B (en) Virtual interaction method and device, electronic equipment and storage medium
CN112257006A (en) Page information configuration method, device, equipment and computer readable storage medium
CN110853124B (en) Method, device, electronic equipment and medium for generating GIF dynamic diagram
CN110675473B (en) Method, device, electronic equipment and medium for generating GIF dynamic diagram
CN111857793B (en) Training method, device, equipment and storage medium of network model
CN111105474A (en) Font drawing method and device, computer equipment and computer readable storage medium
CN111327819A (en) Method, device, electronic equipment and medium for selecting image
CN110929159A (en) Resource delivery method, device, equipment and medium
CN113032590A (en) Special effect display method and device, computer equipment and computer readable storage medium
CN112100437A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111988664B (en) Video processing method, video processing device, computer equipment and computer-readable storage medium
CN112560903A (en) Method, device and equipment for determining image aesthetic information and storage medium
CN113535039A (en) Method and device for updating page, electronic equipment and computer readable storage medium
CN112399080A (en) Video processing method, device, terminal and computer readable storage medium
CN111539794A (en) Voucher information acquisition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200814