CN111046198A - Information processing method, device, equipment and storage medium - Google Patents

Information processing method, device, equipment and storage medium

Info

Publication number
CN111046198A
CN111046198A (application CN201911197126.1A)
Authority
CN
China
Prior art keywords
target
information
animation
multimedia information
time period
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911197126.1A
Other languages
Chinese (zh)
Other versions
CN111046198B (en)
Inventor
罗飞虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201911197126.1A
Publication of CN111046198A
Application granted
Publication of CN111046198B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/44 Browsing; Visualisation therefor
    • G06F 16/447 Temporal browsing, e.g. timeline
    • G06F 16/43 Querying
    • G06F 16/435 Filtering based on additional data, e.g. user or group profiles
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an information processing method, an information processing device, information processing equipment and a storage medium, wherein the method comprises the following steps: responding to an attribute information selection operation on a display interface to obtain target attribute information; acquiring target multimedia information corresponding to the target attribute information; acquiring a target animation model corresponding to the target multimedia information, wherein the target animation model comprises at least one piece of animation information, and the at least one piece of animation information corresponds to different time periods of the target multimedia information; and playing the target multimedia information and, when playback reaches a target time period, rendering in the target multimedia information the target animation information corresponding to that time period in the target animation model. The invention relates to artificial intelligence computer vision technology and can embed user attributes into multimedia information to generate direct communication and interaction with the user, improving user experience and reducing resource occupancy.

Description

Information processing method, device, equipment and storage medium
Technical Field
The invention belongs to the technical field of internet, and particularly relates to an information processing method, an information processing device, information processing equipment and a storage medium.
Background
With the development of data processing technology, viewing multimedia information (such as video) has become a common form of entertainment. To enrich the playing effect of multimedia information, it is often processed to obtain multimedia information with special interaction effects; this processing involves the computer vision technology of artificial intelligence.
In the prior art, interaction with a user is generally carried out either on the basis of playing fixed multimedia information or by running a 3D model scene in a browser. When interacting on the basis of playing fixed multimedia information, because the content of the multimedia information is fixed, the user's attributes cannot be embedded in the elements of the multimedia information, so no direct communication and interaction with the user arises. Running a 3D model scene in a browser, on the other hand, requires producing a large number of 3D model scenes, so the models are complex to make and occupy considerable resources, preventing smooth interaction with the user.
Disclosure of Invention
In order to implant user attributes into multimedia information, generate direct and smooth communication interaction with a user, improve user experience and reduce resource occupancy rate, the invention provides an information processing method, an information processing device, information processing equipment and a storage medium.
In one aspect, the present invention provides an information processing method, including:
responding to the attribute information selection operation on the display interface to obtain target attribute information;
acquiring target multimedia information corresponding to the target attribute information;
acquiring a target animation model corresponding to the target multimedia information, wherein the target animation model comprises at least one piece of animation information, and the at least one piece of animation information corresponds to different time periods of the target multimedia information;
and playing the target multimedia information, and when the target multimedia information is played to a target time period, rendering target animation information corresponding to the target time period in the target animation model in the target multimedia information.
In another aspect, the present invention provides an information processing apparatus, including:
the response module is used for responding to the attribute information selection operation on the display interface to obtain target attribute information;
the target multimedia information acquisition module is used for acquiring target multimedia information corresponding to the target attribute information;
the target animation model acquisition module is used for acquiring a target animation model corresponding to the target multimedia information, wherein the target animation model comprises at least one piece of animation information, and the at least one piece of animation information corresponds to different time periods of the target multimedia information;
and the rendering module is used for playing the target multimedia information, and when the target multimedia information is played to a target time period, rendering the target animation information corresponding to the target time period in the target animation model in the target multimedia information.
In another aspect, the present invention provides an apparatus, which includes a processor and a memory, where at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded by the processor and executed to implement the information processing method as described above.
In another aspect, the present invention provides a computer-readable storage medium, in which at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the information processing method as described above.
The invention provides an information processing method, a device, equipment and a storage medium that determine target multimedia information and a target animation model according to the target attribute information selected by a user on a display interface. When the target multimedia information is played to a target time period, the target animation model is rendered and displayed, the target animation information corresponding to that time period in the target animation model is played, and the target animation information and the target multimedia information are fused within the target time period. User attributes such as hobbies are thereby embedded in the multimedia information, so that the user and the multimedia information engage in direct communication and interaction, achieving an immersive interactive experience that can be widely applied to entertainment video interaction. In addition, because the target animation model appears only in the target time period and only the corresponding target animation is played, running a large-scale 3D model scene is avoided, which effectively reduces resource occupancy and ensures smooth communication and interaction with the user.
Drawings
To illustrate the technical solutions of the embodiments of the present invention or the prior art more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of an information processing method according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating an information processing method according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a display interface according to an embodiment of the present invention.
Fig. 4 is a flowchart illustrating another information processing method according to an embodiment of the present invention.
Fig. 5 is a flowchart illustrating another information processing method according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of the principle of rendering, in the target multimedia information, the target animation information corresponding to a target time period when playback of the target multimedia information reaches that period, according to an embodiment of the present invention.
FIG. 7 is a schematic structural diagram of a synchronous presentation of a target animation model and target multimedia information according to an embodiment of the present invention.
Fig. 8 is an alternative structure diagram of the blockchain system according to the embodiment of the present invention.
Fig. 9 is an alternative schematic diagram of a block structure according to an embodiment of the present invention.
FIG. 10 is a schematic diagram of a promotional picture composed of a target animation model and a plan view according to an embodiment of the present invention.
Fig. 11 is a flowchart illustrating another information processing method according to an embodiment of the present invention.
Fig. 12 is a schematic structural diagram of another display interface provided in the embodiment of the present invention.
Fig. 13 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention.
Fig. 14 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain the best results. AI is a comprehensive discipline spanning a wide range of fields and involving both hardware-level and software-level techniques. Basic AI technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. AI software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision (CV) technology is a science that studies how to make a machine "see"; more specifically, it uses cameras and computers in place of human eyes to identify, track and measure targets, and further performs image processing so that the result becomes an image more suitable for human observation or for transmission to an instrument for detection. CV generally includes technologies such as image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
Specifically, the scheme provided by the embodiment of the invention relates to the technologies of video processing, video semantic understanding, three-dimensional object reconstruction, face recognition and the like in CV.
Specifically, analyzing the multimedia information involves video processing technology; obtaining the target position of a target object in the target multimedia information involves target detection and positioning in video semantic understanding; drawing the animation model corresponding to the multimedia information involves three-dimensional object reconstruction; collecting a face image involves face detection in face recognition; and so on.
Specifically, the technical solutions provided by the embodiments of the present invention are illustrated by the following embodiments.
To make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments that can be derived by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, system, article, or device.
Fig. 1 is a schematic diagram of an implementation environment of an information processing method according to an embodiment of the present invention. As shown in fig. 1, the implementation environment may include at least a terminal 01 and a server 02, where the terminal 01 establishes a wired or wireless connection with the server 02 to transmit data over the network. For example, the server 02 may respond to a user attribute information selection operation from the terminal 01 to obtain a corresponding animation model, send the animation model to the terminal 01 over the network, and have the terminal 01 render and display it.
Specifically, the server 02 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, a big data and artificial intelligence platform, and the like.
Specifically, the terminal 01 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal 01 and the server 02 may be directly or indirectly connected through wired or wireless communication, and the present invention is not limited thereto.
It should be noted that fig. 1 is only an example.
Fig. 2 is a flowchart of an information processing method provided by an embodiment of the present invention. This specification provides the method operation steps as described in the embodiments or flowcharts, but more or fewer operation steps may be included based on routine or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution. In practice, the system or server product may execute the steps sequentially or in parallel (e.g., in a parallel-processor or multi-threaded environment) according to the methods shown in the embodiments or drawings. Specifically, as shown in fig. 2, the method may include:
and S101, responding to the attribute information selection operation on the display interface to obtain target attribute information.
In the embodiment of the invention, when a user wants to experience an animation model corresponding to the user's attribute information being embedded into multimedia information, the user can enter the display interface shown in fig. 3 by clicking a corresponding website link or scanning a corresponding QR code; the display interface can provide a rich and varied user experience.
In one possible embodiment, the content displayed in the display interface may include a selection portion of the inherent attribute information of the user, a selection portion of the attribute information (i.e., the entries in fig. 3) of interest to the user, and the like.
Specifically, the user's own inherent attribute information may include a user name, gender, age, birthday, occupation, and the like. For attribute information that cannot be displayed to the user in the form of a selection box, the user may be prompted to fill it in.
In particular, the attribute information of interest to the user (e.g., the entries in fig. 3) may include sports, news, entertainment, science and technology, fashion, games, and the like. Taking sports as an example, selection icons related to sports can be displayed on the display interface, including but not limited to table tennis, tennis, the 100-meter sprint, diving, badminton, weightlifting, and the like. The attribute information of interest is preferably displayed in the form of "icon + text" so that the user can click the icon or text to select the corresponding attribute information; of course, it may also be displayed in other forms, which is not limited in the embodiment of the present invention.
It should be noted that fig. 3 is only an example.
In this embodiment of the present invention, as shown in fig. 4, before obtaining the target attribute information in response to the attribute information selecting operation on the display interface, the method may further include:
s100, establishing an animation model.
Specifically, S100 may include the steps of:
s10001, at least one piece of multimedia information is obtained.
S10003, drawing an animation model corresponding to the at least one multimedia message to obtain at least one animation model.
S10005, analyzing the at least one multimedia message to obtain at least one key time period corresponding to the at least one multimedia message.
S10007, using the action information of the object in the at least one piece of multimedia information in the corresponding at least one key time period as the animation information of the animation model corresponding to the at least one piece of multimedia information.
S10009, storing the at least one piece of multimedia information, the at least one animation model and the mapping relation between the at least one piece of multimedia information and the corresponding animation model in a preset information model library; wherein the at least one multimedia message comprises the target multimedia message, the at least one animation model comprises the target animation model, and the at least one critical time period comprises the target time period.
In the embodiment of the invention, the animation model can be established in advance, and the established animation model is stored. When creating an animated model, the number and type of models may be determined based on the designed display interface (e.g., the interface shown in FIG. 3) that needs to be presented to the user.
In practical application, assuming that the attribute information of interest shown on the display interface concerns table tennis, the 100-meter sprint, diving and the like, the multimedia information may be related sports competition videos; likewise, if the attribute information of interest concerns a game such as an e-sports title, the multimedia information may be a related game video. For example, in S10001 the server may obtain in advance a table tennis match video, a tennis match video, a 100-meter sprint video, a diving competition video, and an e-sports video. It then draws animation models associated with the different competition videos according to the characteristics of the objects (i.e., the athletes) in those videos, thereby obtaining at least one animation model in S10003.
After model drawing is completed, each competition video may be analyzed in S10005. Taking a diving competition video as an example of the multimedia information, the server may analyze it to determine at which second the athlete mounts the diving platform to prepare to dive, at which second the dive begins, and at which second the athlete mounts the award podium, and record the key time point at which the athlete mounts the platform, the key time period of the preparation, the key time point at which the dive begins, the key time period of the dive, the key time point of mounting the podium, and the key time period of the award ceremony, thereby obtaining at least one key time point and at least one key time period corresponding to the diving competition video. Key time points and key time periods of other competition videos are obtained similarly, and are not repeated here.
After the key time points and key time periods are obtained, the motion information of the athletes in each competition video during each key time period may be acquired in S10007 and used as the animation information of the animation model corresponding to that video; because each video has at least one key time period, each video yields at least one piece of animation information. Taking the diving competition video as an example, assuming three key time periods were obtained for it in S10005, information such as the athlete's limb movements and facial expressions in those three periods is acquired separately, yielding three pieces of animation information.
After the multimedia information, the animation models, and the corresponding animation information are obtained, in S10009 each piece of multimedia information, each animation model, and the mapping relationship between them may be stored in a preset information model library, so that the background service can subsequently look up and invoke, from the preset information model library, the animation model and animation information corresponding to the multimedia information that matches the attribute information selected by the user.
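As a minimal sketch of the preset information model library of S10009 (a hypothetical structure; the embodiment does not prescribe a schema, and all names and time values below are illustrative), the library can be thought of as a key-value mapping from multimedia information to its animation model and per-period animation information:

```javascript
// Hypothetical sketch of the preset information model library (S10009).
// Each multimedia entry maps to one animation model whose animation
// clips correspond to the key time periods found in S10005.
const modelLibrary = new Map();

function storeEntry(library, multimediaId, animationModel) {
  library.set(multimediaId, animationModel);
}

function getAnimationModel(library, multimediaId) {
  return library.get(multimediaId) || null;
}

// Example entry: a diving competition video with three key time periods
// (prepare to dive, dive, award ceremony); times are in seconds.
storeEntry(modelLibrary, 'diving_video_01', {
  modelId: 'diver_model',
  animations: [
    { start: 70, end: 80, name: 'prepare_dive' }, // 01:10-01:20
    { start: 95, end: 100, name: 'dive' },
    { start: 180, end: 200, name: 'award_ceremony' },
  ],
});
```

A background service can then resolve the animation model for a given video with a single lookup, mirroring the mapping relationship stored in S10009.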
In a possible embodiment, when the "gender" selection box exists in the user inherent attribute information in the display interface, taking the multimedia information as the diving game video as an example, the multimedia information may include a woman diving game video and a man diving game video, and accordingly, the drawn animation model in S10003 may include an animation model corresponding to the woman diving game video and an animation model corresponding to the man diving game video.
In one possible embodiment, the animated model may be a 2D model, a 3D model, or a model of other dimensions.
In practical applications, when the animation model is a 3D animation model, the corresponding model may be made with modeling software or rendered through a graphics API such as the Web Graphics Library (WebGL). For example, a 3D animation model may be created from materials (music, animation, images, characters, etc.) in an existing 3D resource file, and maps, materials and a 3D animation skeleton may be configured for it.
In one possible embodiment, the drawn animation model and its corresponding animation information may be run on an HTML5 Canvas tag for subsequent rendering and display. HTML is an abbreviation for HyperText Markup Language; the canvas element is part of HTML5 and allows scripting languages to dynamically render bitmap images. When the animation model and its corresponding animation information are run on an HTML5 Canvas tag, terminal 01 may be a terminal that supports HTML5.
And S103, acquiring target multimedia information corresponding to the target attribute information.
As shown in fig. 4, in the embodiment of the present invention, since the user selects the attribute information of interest on the display interface, after receiving the target attribute information selected by the user, the server may analyze the attribute information of interest contained in it and then obtain the target multimedia information corresponding to the target attribute information from the at least one piece of multimedia information. For example, if the attribute information of interest selected by the user is diving, the server may obtain a diving-related competition video from the at least one piece of multimedia information stored in the preset information model library. For another example, if the attribute information of interest selected by the user is diving and the selected gender is male, the server may obtain a men's diving competition video from the preset information model library.
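A minimal sketch of this lookup follows; the catalog entries and tag names are hypothetical illustrations, not part of the patent:

```javascript
// Hypothetical sketch of S103: choose the target multimedia information
// whose tags match every attribute the user selected.
const catalog = [
  { id: 'diving_female', interest: 'diving', gender: 'female' },
  { id: 'diving_male', interest: 'diving', gender: 'male' },
  { id: 'sprint_male', interest: 'sprint', gender: 'male' },
];

function findTargetMultimedia(videos, targetAttributes) {
  return (
    videos.find((v) =>
      Object.entries(targetAttributes).every(([key, value]) => v[key] === value)
    ) || null
  );
}
```

With only an interest selected, the first matching video is returned; adding the gender attribute narrows the match, as in the men's diving example above.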
And S105, acquiring a target animation model corresponding to the target multimedia information, wherein the target animation model comprises at least one piece of animation information, and the at least one piece of animation information corresponds to different time periods of the target multimedia information.
As shown in fig. 4, in the embodiment of the present invention, after determining the target multimedia information corresponding to the target attribute information selected by the user, the server may search the target animation model corresponding to the target multimedia information according to the mapping relationship between the multimedia information stored in the preset information model library and the animation model.
And S107, playing the target multimedia information, and rendering the target animation information corresponding to the target time period in the target animation model in the target multimedia information when the target multimedia information is played to the target time period.
Specifically, as shown in fig. 5, S107 may include:
s10701, the target multimedia information is played, and when the target multimedia information is played to the starting time point of the target time period, the target position of the target object in the target multimedia information is obtained.
S10703, hiding the target object to obtain the target multimedia information after the object is hidden.
S10705, rendering and displaying the target animation model at the target position, and taking animation information corresponding to the target time period in the target animation model as the target animation information.
S10707, according to the first time stamp information of the target animation information and the second time stamp information of the target multimedia information after the object is hidden, aligning the target animation information and the target multimedia information after the object is hidden.
S10709, under the same playing frequency, the target multimedia information and the target animation information after the alignment processing are synchronously played.
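The alignment of S10707 can be sketched as offsetting the animation's frame timestamps so that its first frame coincides with the start of the target time period; the integer-millisecond frame timestamps below are illustrative assumptions:

```javascript
// Hypothetical sketch of S10707: shift animation frame timestamps so the
// animation's first frame lands on the start of the target time period in
// the (object-hidden) video, enabling synchronous playback in S10709.
function alignTimestamps(animationTimestamps, videoPeriodStart) {
  if (animationTimestamps.length === 0) return [];
  const offset = videoPeriodStart - animationTimestamps[0];
  return animationTimestamps.map((t) => t + offset);
}
```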
Fig. 6 is a schematic diagram of the principle of S107, and fig. 7 is a schematic structural diagram of synchronously displaying a target animation model and target multimedia information. The specific process of S107 is described below, taking the target multimedia information as a diving competition video as an example:
After the target animation model corresponding to the diving game video is found in the preset information model library, the diving game video can be played on the video playing interface of the terminal, and the currentTime attribute of the diving game video is monitored on a fixed-frequency time sequence, so as to obtain the target time point and the target time period of the diving game video; the currentTime attribute is a video object attribute used for setting or returning the current playback position of the video. When it is sensed that the diving game video has been played to a target time point (for example, 01:10 in fig. 6) at which the athlete steps onto the diving platform to prepare for diving (as shown in A in fig. 7), the target position of the target object (i.e., the athlete) in the video is acquired and the target object is hidden. An x coordinate and a y coordinate are determined from the target position, and the target animation model 1 is rendered and displayed at those coordinates through WebGL on a canvas label (as shown in B in fig. 7), so that the target animation model is overlaid on the diving game video. At the same time, the target animation information 1 corresponding to the target time period (for example, 01:10-01:20 in fig. 6) is played, and the playing times of the target animation information and the video are adjusted so that the target animation information 1 and the video are played synchronously and fused together (as shown in C in fig. 7).
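The currentTime-monitoring step described above can be sketched as follows. The period table, helper names, and polling wiring are illustrative assumptions; only the use of the HTML5 video `currentTime` attribute comes from the patent text:

```javascript
// Hypothetical table of target time periods for the diving video,
// in seconds (01:10-01:20 and 02:15-02:20 from fig. 6).
const targetPeriods = [
  { start: 70, end: 80, animation: 'dive' },
  { start: 135, end: 140, animation: 'award' },
];

// Return the target period containing the current playback position,
// or null when the video is outside every target period.
function findActivePeriod(currentTime, periods) {
  return periods.find(p => currentTime >= p.start && currentTime < p.end) || null;
}

// Wiring against an HTML5 <video> element would poll currentTime at
// a fixed frequency, e.g.:
//   setInterval(() => {
//     const period = findActivePeriod(video.currentTime, targetPeriods);
//     if (period) { /* hide target object, render model, play animation */ }
//   }, 100);
```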
In this embodiment of the present invention, after the target multimedia information is played and the target animation information corresponding to the target time period in the target animation model is rendered in the target multimedia information when the target multimedia information is played to the target time period, the method may further include:
S109, after the target time period, hiding the target animation model.
S1011, displaying the target object.
And S1013, playing the target multimedia information with the target object.
In the embodiment of the present invention, taking the target multimedia information as a diving game video as an example, after the playing of the target animation information corresponding to a certain target time period is completed (i.e., after the target time period), the target animation model is hidden, the athlete is displayed (as shown in D in fig. 7), and the diving game video on which the athlete is displayed continues to be played while waiting for the next target time point. For example, when it is sensed that the diving game video has been played to a target time point (e.g., 02:15 in fig. 6) at which the athlete is ready to step onto the award platform to receive a prize (as shown in E in fig. 7), the corresponding target animation information 2 may be acquired and played in the target time period (e.g., 02:15-02:20 in fig. 6), so that the target animation information 2 and the video are played synchronously and fused together (as shown in F in fig. 7).
In a possible embodiment, the at least one piece of multimedia information, the at least one animation model, and the mapping relationship between the at least one piece of multimedia information and the corresponding animation model in S10009 may also be stored in a blockchain system. Referring to fig. 8, fig. 8 is an optional structural diagram of the blockchain system according to the embodiment of the present invention: a peer-to-peer (P2P) network is formed among a plurality of nodes, and the P2P protocol is an application-layer protocol running on top of the Transmission Control Protocol (TCP). In the blockchain system, any machine, such as a server or a terminal, can join and become a node; a node comprises a hardware layer, an intermediate layer, an operating system layer, and an application layer.
Referring to the functions of each node in the blockchain system shown in fig. 8, the functions involved include:
1) routing, a basic function that a node has, is used to support communication between nodes.
Besides the routing function, the node may also have the following functions:
2) The application, which is deployed in a blockchain to realize specific services according to actual service requirements, records data related to the realized functions to form recorded data, carries a digital signature in the recorded data to indicate the source of the task data, and sends the recorded data to other nodes in the blockchain system, so that the other nodes add the recorded data to a temporary block when the source and integrity of the recorded data are verified successfully.
3) The blockchain, comprising a series of blocks (Blocks) connected to one another in the chronological order of their generation; once added to the blockchain, a new block cannot be removed, and the blocks record the data submitted by nodes in the blockchain system.
Referring to fig. 9, fig. 9 is an optional schematic diagram of a Block Structure according to an embodiment of the present invention. Each block includes a hash value of the transaction records stored in the block (the hash value of the block) and the hash value of the previous block, and the blocks are connected by these hash values to form a blockchain. A block may also include information such as a time stamp at the time of block generation. A blockchain, which is essentially a decentralized database, is a string of data blocks associated by cryptography, and each data block contains related information for verifying the validity (anti-counterfeiting) of its information and generating the next block.
In one possible embodiment, in order to enrich the user experience and achieve more direct interaction and communication with the user, the promotional pictures may be generated after the target multimedia information is played. Taking the target multimedia information as the diving game video as an example, the target animation model and the plan view can be synthesized to generate a synthesized propaganda picture as shown in fig. 10.
In a possible embodiment, to implement an interactive experience with richer substitution feeling, as shown in fig. 11, before the obtaining of the target multimedia information corresponding to the target attribute information, the method may further include:
and S102, collecting face image information.
Correspondingly, after the obtaining of the target animation model corresponding to the target multimedia information, the method may further include:
S106, responding to a face replacement request, acquiring model face information corresponding to the target animation model, and replacing the model face information with the face image information to obtain a replaced target animation model, wherein the replaced target animation model comprises the at least one piece of animation information.
Accordingly, S107 may further include:
and playing the target multimedia information, and when the target multimedia information is played to a target time period, rendering target animation information corresponding to the target time period in the replaced target animation model in the target multimedia information.
In practical application, as shown in A in fig. 12, the content displayed on the display interface may include a face information collection part in addition to the selection part for the user's inherent attribute information and the selection part for the attribute information the user is interested in, and the face information collection part may collect a face by having the user upload a photo or take a picture. After the user clicks the upload/take-picture control shown in A in fig. 12, the server can recognize the collected face, as shown in B in fig. 12: when face collection succeeds, the user can click confirm to proceed to the next step, and when face collection fails, the user can click upload again to re-upload a picture. When detecting that the face image information exists, the server can actively respond to the face replacement request, acquire the model face information corresponding to the target animation model, and replace the model face information with the face image information to obtain the replaced target animation model. In addition, besides the server actively replacing the face of the target animation model when it detects that face image information exists, the user can actively trigger a face replacement request through a face replacement request control arranged on the display interface after uploading a picture or taking a picture.
It should be noted that fig. 12 is merely an example.
In practical applications, the manner of replacing the model face information with the face image information in S106 may be as follows: in WebGL, the face image information is used to replace the corresponding part of the map and texture data in the target animation model.
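A minimal sketch of that replacement step, assuming the animation model keeps its textures in a keyed map; the structure and names are illustrative, not from the patent:

```javascript
// Hypothetical face-texture replacement: swap only the face entry of
// the model's texture data for the collected face image, leaving the
// rest of the model untouched.
function replaceFaceTexture(model, faceImageData) {
  return {
    ...model,
    textures: { ...model.textures, face: faceImageData },
  };
}

// In an actual WebGL pipeline the swapped image would then be
// re-uploaded (e.g., via gl.texImage2D) to the texture bound to the
// face material before the next render.
```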
The information processing method, apparatus, device, and storage medium provided by the embodiments of the present invention run a pre-designed 3D model and its corresponding model animation on an HTML5 Canvas label. The video is played through an HTML5 video playing interface, and the currentTime attribute of the video playing interface is intercepted on a fixed-frequency time sequence to acquire the target time point on the video timeline. In the target time period corresponding to the target time point, the required 3D target animation model is superimposed at the preset target position, and the target animation information corresponding to the target time period in the 3D target animation model is played at the same time, so that the model and the video present their effects synchronously. User attributes are thus implanted into the video elements, and direct communication and interaction are generated between the user and the video, realizing an experience in which a fixed video is enriched with various changes. This can be widely applied to entertainment video interaction, for example to user-made videos, realizing a combined experience of various model images; models can also be made and experienced from materials (music, animation, graphics, characters, and the like) prepared by a project party. Moreover, on the basis of the user selecting attribute information, the user uploads a photo or takes a picture, so that a 3D character image is generated, and according to the time node of the video, the 3D model is implanted at the corresponding coordinate position according to the scene to synchronously present and play the model action, achieving an interactive experience with a richer sense of substitution.
In addition, because the target animation model appears only in the target time period and only the corresponding target animation is played, running a large-scale 3D model scene is avoided, the resource occupancy rate is effectively reduced, and the fluency of the communication and interaction between the user and the video is ensured.
As shown in fig. 13, an embodiment of the present invention provides an information processing apparatus, which may include:
the response module 201 may be configured to obtain the target attribute information in response to an attribute information selection operation on the display interface.
In the embodiment of the present invention, the apparatus may further include: animation model creation module 200.
Specifically, the animation model creation module 200 may include:
the multimedia information acquisition unit can be used for acquiring at least one piece of multimedia information.
And the animation model drawing unit can be used for drawing the animation model corresponding to the at least one piece of multimedia information to obtain at least one animation model.
The analysis unit may be configured to analyze the at least one piece of multimedia information to obtain at least one key time period corresponding to the at least one piece of multimedia information.
And the animation information determining unit may be configured to use motion information of an object in the at least one piece of multimedia information in the corresponding at least one key time period as animation information of an animation model corresponding to the at least one piece of multimedia information.
The storage unit may be configured to store the at least one piece of multimedia information, the at least one animation model, and a mapping relationship between the at least one piece of multimedia information and a corresponding animation model in a preset information model library; wherein the at least one multimedia message comprises the target multimedia message, the at least one animation model comprises the target animation model, and the at least one critical time period comprises the target time period.
The target multimedia information obtaining module 203 may be configured to obtain target multimedia information corresponding to the target attribute information.
Specifically, the target multimedia information obtaining module 203 may be configured to obtain target multimedia information corresponding to the target attribute information from the at least one piece of multimedia information.
The target animation model obtaining module 205 may be configured to obtain a target animation model corresponding to the target multimedia information, where the target animation model includes at least one piece of animation information, and the at least one piece of animation information corresponds to different time periods of the target multimedia information.
Specifically, the target animation model obtaining module 205 may be configured to obtain, according to the mapping relationship, a target animation model corresponding to the target multimedia information from the preset information model library, where the target animation model includes at least one piece of animation information, and the at least one piece of animation information corresponds to different time periods of the target multimedia information.
The rendering module 207 may be configured to play the target multimedia information, and render the target animation information corresponding to the target time period in the target animation model in the target multimedia information when the target multimedia information is played to the target time period.
In this embodiment of the present invention, the rendering module 207 may include:
and the target position acquiring unit may be configured to play the target multimedia information, and acquire a target position of a target object in the target multimedia information when the target multimedia information is played to a starting time point of the target time period.
And the hiding unit can be used for hiding the target object to obtain the target multimedia information after the object is hidden.
And the display unit can be used for rendering and displaying the target animation model at the target position, and taking animation information corresponding to the target time period in the target animation model as the target animation information.
And the alignment unit can be used for aligning the target animation information and the target multimedia information after the object hiding according to the first time stamp information of the target animation information and the second time stamp information of the target multimedia information after the object hiding.
And the synchronous playing unit can be used for synchronously playing the target multimedia information and the target animation information after the alignment processing under the same playing frequency.
In the embodiment of the present invention, the apparatus may further include:
and the acquisition module can be used for acquiring the face image information.
And the model face information acquisition module can be used for responding to the face replacement request and acquiring the model face information corresponding to the target animation model.
And the replacing module can be used for replacing the model face information with the face image information to obtain a replaced target animation model, and the replaced target animation model comprises the at least one piece of animation information.
Correspondingly, the rendering module may be configured to play the target multimedia information, and when the target multimedia information is played to a target time period, render, in the target multimedia information, target animation information corresponding to the target time period in the replaced target animation model.
It should be noted that the device embodiments in the embodiments of the present invention and the method embodiments in the embodiments of the present invention are based on the same inventive concept.
The embodiment of the present invention further provides an information processing apparatus, where the apparatus includes a processor and a memory, where the memory stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the information processing method provided in the above method embodiment.
The embodiment of the present invention further provides a computer-readable storage medium, where at least one instruction or at least one program is stored in the computer-readable storage medium, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the information processing method provided by the above method embodiment.
Optionally, in this specification embodiment, the storage medium may be located in at least one network server among a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing program code, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The memory according to the embodiments of the present disclosure may be used to store software programs and modules, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory. The memory may mainly comprise a program storage area and a data storage area, where the program storage area may store an operating system, application programs needed for functions, and the like, and the data storage area may store data created according to the use of the apparatus, and the like. Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide the processor with access to the memory.
The embodiment of the information processing method provided by the embodiment of the present invention may be executed in a mobile terminal, a computer terminal, a server, or a similar computing device. Taking execution on a server as an example, fig. 14 is a hardware configuration block diagram of the server for the information processing method according to the embodiment of the present invention. As shown in fig. 14, the server 300 may vary considerably depending on configuration or performance, and may include one or more Central Processing Units (CPUs) 310 (the processor 310 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 330 for storing data, and one or more storage media 320 (e.g., one or more mass storage devices) for storing applications 323 or data 322. The memory 330 and the storage medium 320 may be transient or persistent storage. The program stored in the storage medium 320 may include one or more modules, each of which may include a series of instruction operations for the server. Still further, the central processor 310 may be configured to communicate with the storage medium 320 to execute the series of instruction operations in the storage medium 320 on the server 300. The server 300 may also include one or more power supplies 360, one or more wired or wireless network interfaces 350, one or more input/output interfaces 340, and/or one or more operating systems 321, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.
The input/output interface 340 may be used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the server 300. In one example, the input/output interface 340 includes a network adapter (NIC) that can be connected to other network devices through a base station so as to communicate with the internet. In one example, the input/output interface 340 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
It will be understood by those skilled in the art that the structure shown in fig. 14 is only an illustration and is not intended to limit the structure of the electronic device. For example, server 300 may also include more or fewer components than shown in FIG. 14, or have a different configuration than shown in FIG. 14.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the device and server embodiments, since they are substantially similar to the method embodiments, the description is simple, and the relevant points can be referred to the partial description of the method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. An information processing method, characterized in that the method comprises the steps of:
responding to the attribute information selection operation on the display interface to obtain target attribute information;
acquiring target multimedia information corresponding to the target attribute information;
acquiring a target animation model corresponding to the target multimedia information, wherein the target animation model comprises at least one piece of animation information, and the at least one piece of animation information corresponds to different time periods of the target multimedia information;
and playing the target multimedia information, and when the target multimedia information is played to a target time period, rendering target animation information corresponding to the target time period in the target animation model in the target multimedia information.
2. The method of claim 1, wherein prior to obtaining the target attribute information in response to the attribute information selection operation at the display interface, the method further comprises:
acquiring at least one piece of multimedia information;
drawing an animation model corresponding to the at least one piece of multimedia information to obtain at least one animation model;
analyzing the at least one multimedia message to obtain at least one key time period corresponding to the at least one multimedia message;
taking the action information of the object in the at least one piece of multimedia information in the corresponding at least one key time period as the animation information of the animation model corresponding to the at least one piece of multimedia information;
storing the at least one multimedia message, the at least one animation model and the mapping relation between the at least one multimedia message and the corresponding animation model in a preset information model library;
wherein the at least one multimedia message comprises the target multimedia message, the at least one animation model comprises the target animation model, and the at least one critical time period comprises the target time period.
3. The method according to claim 2, wherein the obtaining of the target multimedia information corresponding to the target attribute information comprises:
acquiring target multimedia information corresponding to the target attribute information from the at least one piece of multimedia information;
correspondingly, the obtaining of the target animation model corresponding to the target multimedia information includes:
and acquiring a target animation model corresponding to the target multimedia information from the preset information model library according to the mapping relation.
4. The method of claim 1, wherein the playing the target multimedia information, and when the target multimedia information is played to a target time period, rendering target animation information corresponding to the target time period in the target animation model in the target multimedia information comprises:
playing the target multimedia information, and acquiring a target position of a target object in the target multimedia information when the target multimedia information is played to the starting time point of the target time period;
hiding the target object to obtain target multimedia information after the object is hidden;
rendering and displaying the target animation model at the target position, and taking animation information corresponding to the target time period in the target animation model as the target animation information;
aligning the target animation information and the target multimedia information after the object is hidden according to the first time stamp information of the target animation information and the second time stamp information of the target multimedia information after the object is hidden;
and synchronously playing the target multimedia information and the target animation information after the alignment processing at the same playing frequency.
5. The method of claim 4, wherein the target multimedia information is played, and when the target multimedia information is played to a target time period, after target animation information corresponding to the target time period in the target animation model is rendered in the target multimedia information, the method further comprises:
after the target time period, hiding the target animation model;
displaying the target object;
and playing the target multimedia information with the target object.
6. The method of claim 1,
before the obtaining of the target multimedia information corresponding to the target attribute information, the method further includes:
collecting face image information;
correspondingly, after the target animation model corresponding to the target multimedia information is obtained, the method further includes:
responding to the face replacement request, and acquiring model face information corresponding to the target animation model;
replacing the model face information with the face image information to obtain a replaced target animation model, wherein the replaced target animation model comprises the at least one piece of animation information;
correspondingly, the playing the target multimedia information, and when the target multimedia information is played to a target time period, rendering animation information corresponding to the target time period in the target animation model in the target multimedia information includes:
and playing the target multimedia information, and when the target multimedia information is played to a target time period, rendering target animation information corresponding to the target time period in the replaced target animation model in the target multimedia information.
7. An information processing apparatus characterized in that the apparatus comprises:
the response module is used for responding to the attribute information selection operation on the display interface to obtain target attribute information;
the target multimedia information acquisition module is used for acquiring target multimedia information corresponding to the target attribute information;
the target animation model acquisition module is used for acquiring a target animation model corresponding to the target multimedia information, wherein the target animation model comprises at least one piece of animation information, and the at least one piece of animation information corresponds to different time periods of the target multimedia information;
and the rendering module is used for playing the target multimedia information, and when the target multimedia information is played to a target time period, rendering the target animation information corresponding to the target time period in the target animation model in the target multimedia information.
8. The apparatus of claim 7, wherein the rendering module comprises:
the target position acquisition unit is used for playing the target multimedia information, and acquiring the target position of a target object in the target multimedia information when the target multimedia information is played to the starting time point of the target time period;
the hiding unit is used for hiding the target object to obtain target multimedia information after the object is hidden;
the display unit is used for rendering and displaying the target animation model at the target position and taking animation information corresponding to the target time period in the target animation model as the target animation information;
the alignment unit is used for aligning the target animation information and the target multimedia information after the object hiding according to the first time stamp information of the target animation information and the second time stamp information of the target multimedia information after the object hiding;
and the synchronous playing unit is used for synchronously playing the target multimedia information and the target animation information after the alignment processing under the same playing frequency.
9. An apparatus comprising a processor and a memory, wherein the memory stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the information processing method according to any one of claims 1 to 6.
10. A computer-readable storage medium storing at least one instruction or at least one program, the at least one instruction or the at least one program being loaded and executed by a processor to implement the information processing method according to any one of claims 1 to 6.
CN201911197126.1A 2019-11-29 2019-11-29 Information processing method, device, equipment and storage medium Active CN111046198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911197126.1A CN111046198B (en) 2019-11-29 2019-11-29 Information processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111046198A (en) 2020-04-21
CN111046198B (en) 2022-03-29

Family

ID=70234033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911197126.1A Active CN111046198B (en) 2019-11-29 2019-11-29 Information processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111046198B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130346851A1 (en) * 2012-06-25 2013-12-26 Microsoft Corporation Declarative show and hide animations in html5
CN105069829A (en) * 2015-07-24 2015-11-18 中国电子科技集团公司第二十八研究所 Human body animation generation method based on multi-objective video
CN105338370A (en) * 2015-10-28 2016-02-17 北京七维视觉科技有限公司 Method and apparatus for synthetizing animations in videos in real time
CN109003106A (en) * 2017-06-06 2018-12-14 腾讯科技(北京)有限公司 Information processing method and information processing unit
CN109976632A (en) * 2019-03-15 2019-07-05 广州视源电子科技股份有限公司 Text animation control methods and device, storage medium and processor
CN110427499A (en) * 2018-04-26 2019-11-08 腾讯科技(深圳)有限公司 Processing method, device and the storage medium and electronic device of multimedia resource

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111796846A (en) * 2020-07-06 2020-10-20 成都艾乐橙文化传播有限公司 Information updating method and device, terminal equipment and readable storage medium
CN111796846B (en) * 2020-07-06 2023-12-12 广州一起精彩艺术教育科技有限公司 Information updating method, device, terminal equipment and readable storage medium
CN112435313A (en) * 2020-11-10 2021-03-02 北京百度网讯科技有限公司 Method and device for playing frame animation, electronic equipment and readable storage medium
WO2022143335A1 (en) * 2020-12-31 2022-07-07 华为技术有限公司 Dynamic effect processing method and related apparatus

Also Published As

Publication number Publication date
CN111046198B (en) 2022-03-29

Similar Documents

Publication Publication Date Title
CN111046198B (en) Information processing method, device, equipment and storage medium
US20210299630A1 (en) Generating interactive messages with asynchronous media content
US10828570B2 (en) System and method for visualizing synthetic objects within real-world video clip
CN110809175B (en) Video recommendation method and device
TWI752502B (en) Method for realizing lens splitting effect, electronic equipment and computer readable storage medium thereof
EP3889912B1 (en) Method and apparatus for generating video
CN109242940B (en) Method and device for generating three-dimensional dynamic image
CN110992256B (en) Image processing method, device, equipment and storage medium
CN110162667A (en) Video generation method, device and storage medium
CN111667557A (en) Animation production method and device, storage medium and terminal
CN111405314B (en) Information processing method, device, equipment and storage medium
CN113965773A (en) Live broadcast display method and device, storage medium and electronic equipment
CN109529350A (en) A kind of action data processing method and its device applied in game
Komianos et al. Efficient and realistic cultural heritage representation in large scale virtual environments
CN104572794A (en) Method and system for showing network information in a user-friendly manner
CN113992638B (en) Synchronous playing method and device for multimedia resources, storage position and electronic device
CN114173173A (en) Barrage information display method and device, storage medium and electronic equipment
Ziagkas et al. Greek Traditional Dances Capturing and a Kinematic Analysis Approach of the Greek Traditional Dance "Syrtos" (Terpsichore Project)
Ziagkas et al. Greek Traditional Dances 3D Motion Capturing and a Proposed Method for Identification through Rhythm Pattern Analyses (Terpsichore Project)
CN111652986B (en) Stage effect presentation method and device, electronic equipment and storage medium
CN114554111A (en) Video generation method and device, storage medium and electronic equipment
CN116993872B (en) Labanotation-based human body animation generation system, method, equipment and storage medium
Bibiloni et al. An Augmented Reality and 360-degree video system to access audiovisual content through mobile devices for touristic applications
Han Development of HMD-based 360 VR Content of Korean Heritage
CN117504296A (en) Action generating method, action displaying method, device, equipment, medium and product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40022200
Country of ref document: HK
GR01 Patent grant