CN111935488A - Data processing method, information display method, device, server and terminal equipment - Google Patents

Data processing method, information display method, device, server and terminal equipment Download PDF

Info

Publication number
CN111935488A
CN111935488A (application number CN201910394238.XA; granted as CN111935488B)
Authority
CN
China
Prior art keywords
user
related information
video data
information
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910394238.XA
Other languages
Chinese (zh)
Other versions
CN111935488B (en)
Inventor
郑萌萌
程杭
徐珊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202210551425.6A priority Critical patent/CN115119004B/en
Priority to CN201910394238.XA priority patent/CN111935488B/en
Publication of CN111935488A publication Critical patent/CN111935488A/en
Application granted granted Critical
Publication of CN111935488B publication Critical patent/CN111935488B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
        • H04N 21/2187 Live feed
        • H04N 21/251 Learning process for intelligent management, e.g. learning user preferences for recommending movies
        • H04N 21/2542 Management at additional data server, e.g. shopping server, for selling goods, e.g. TV shopping
        • H04N 21/4312 Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
        • H04N 21/4415 Acquiring end-user identification using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
        • H04N 21/4667 Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
        • H04N 21/4668 Learning process for recommending content, e.g. movies
        • H04N 21/47815 Electronic shopping
        • H04N 21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N 7/00 Television systems
        • H04N 7/15 Conference systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the present application provide a data processing method, an information display method, an apparatus, a server, and a terminal device, and relate to the field of network technology. In these embodiments, video data sent by a first user end is obtained and forwarded to a second user end, so that the second user end can play the video data on a playing interface. At least one user characteristic of the second user end is acquired, and object-related information matching the at least one user characteristic is determined, where the object-related information is data related to an associated object involved in the video data. The object-related information is then sent to the second user end, so that the second user end can output it in the playing interface. In this way, the stickiness between the video content and viewing users is improved.

Description

Data processing method, information display method, device, server and terminal equipment
Technical Field
Embodiments of the present application relate to the field of network technology, and in particular to a data processing method, an information display method, an apparatus, a server, and a terminal device.
Background
At present, when some live-streaming users sell or introduce commodities in an online live broadcast, a lack of professional and systematic training makes it difficult for them to grasp viewing users' needs during the broadcast or to give personalized, in-depth explanations to different viewing users. As a result, the live content fails to attract viewing users, which degrades the live-broadcast effect.
Disclosure of Invention
Embodiments of the present application provide a data processing method, an information display method, an apparatus, a server, and a terminal device, which can display different object-related information to different viewing users, thereby further enhancing user stickiness.
In a first aspect, an embodiment of the present application provides a data processing method, including:
the method comprises the steps of obtaining video data sent by a first user end, and sending the video data to a second user end so that the second user end can play the video data on a playing interface;
acquiring at least one user characteristic of the second user end;
determining object-related information matching the at least one user characteristic; wherein the object related information is related data of an associated object to which the video data relates;
and sending the object related information to the second user end so that the second user end can output the object related information in the playing interface.
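As a hypothetical sketch (not part of the claims), the four steps of the first aspect can be written as a small server-side routine. The catalog contents, feature names, and matching rule below are invented for illustration only:

```python
# Minimal, self-contained sketch of the first-aspect method.
# The catalog, feature names, and matching rule are illustrative
# assumptions, not details taken from the patent.

OBJECT_INFO_CATALOG = {
    # (associated object, user type) -> object-related information
    ("coat", "retail"): "Retail price: single-item discount available",
    ("coat", "wholesale"): "Wholesale price: tiered by order quantity",
}

def match_object_info(associated_object, user_features):
    """Step 3: determine object-related information matching the user features."""
    key = (associated_object, user_features.get("user_type"))
    return OBJECT_INFO_CATALOG.get(key)

def process(video_data, viewers):
    """Steps 1, 2, and 4: forward the video data to each second user end
    and attach the object-related information matched to that viewer."""
    results = {}
    for viewer_id, features in viewers.items():
        info = match_object_info(video_data["associated_object"], features)
        results[viewer_id] = {"video": video_data["stream"], "object_info": info}
    return results
```

Because the matching key includes the viewer's features, two viewers of the same stream receive different object-related information, which is the core effect the first aspect describes.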
In a second aspect, an embodiment of the present application provides a data processing method, including:
the method comprises the steps of obtaining video data uploaded by a first user side, and sending the video data to a second user side so that the second user side can play the video data on a playing interface;
determining a target part of an associated object related to the video data based on the video data;
generating a prompt identifier corresponding to the target part;
and sending the prompt identifier to the second user end so that the second user end can output the prompt identifier in a target display area where the target part is located in the playing interface.
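A minimal illustration of the second aspect might stub out the detection step and build a prompt identifier anchored to the target display area. The part name, bounding box, and field names are assumptions; a real system would replace the stub with actual video analysis (for example, an object-detection model):

```python
# Illustrative sketch of the second-aspect flow: from video data, determine
# the target part of the associated object and generate a prompt identifier
# anchored to its display area. Detection here is stubbed.

def detect_target_part(video_frame):
    """Stub detector: returns the target part name and its bounding box
    (x, y, width, height) within the playing interface."""
    # Hypothetical fixed result standing in for real detection.
    return {"part": "collar", "box": (120, 40, 80, 30)}

def generate_prompt(target):
    """Generate a prompt identifier for the target display area."""
    x, y, w, h = target["box"]
    return {
        "type": "prompt",
        "label": target["part"],
        "anchor": {"x": x, "y": y, "width": w, "height": h},
    }
```

The server would send the returned prompt dictionary to the second user end, which overlays it on the target display area of the playing interface.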
In a third aspect, an embodiment of the present application provides an information display method, including:
receiving video data sent by a server and playing the video data on a playing interface;
receiving object-related information sent by the server; wherein the object-related information is obtained by the server through matching based on at least one user characteristic of the second user end;
and outputting the object related information in the playing interface.
In a fourth aspect, an embodiment of the present application provides an information display method, including:
receiving video data sent by a server and outputting the video data on a playing interface;
receiving a prompt identifier sent by the server; the prompt identification is generated by the server side based on a target part of an associated object related to the video data; a target portion of the associated object is determined based on the video data;
and outputting the prompt identification in a target display area where the target part is located in the playing interface.
In a fifth aspect, an embodiment of the present application provides a data processing apparatus, including:
the first acquisition module is used for acquiring video data uploaded by a first user side;
the first sending module is used for sending the video data to a second user end so that the second user end can play the video data on a playing interface;
a second obtaining module, configured to obtain at least one user characteristic of the second user end;
a first determination module for determining object related information matching the at least one user characteristic; wherein the object related information is related data of an associated object to which the video data relates;
and the second sending module is used for sending the object related information to the second user end so that the second user end can output the object related information in the playing interface.
In a sixth aspect, an embodiment of the present application provides a data processing apparatus, including:
the video data acquisition module is used for acquiring video data uploaded by a first user side;
the video data sending module is used for sending the video data to a second user end so that the second user end can play the video data on a playing interface;
a second determination module, configured to determine, based on the video data, a target region of an associated object to which the video data relates;
the prompt identifier generation module is used for generating a prompt identifier corresponding to the target part;
and the prompt identifier sending module is used for sending the prompt identifier to the second user end so that the second user end can output the prompt identifier in a target display area where the target part is located in the playing interface.
In a seventh aspect, an embodiment of the present application provides an information display device, including:
the first receiving module is used for receiving video data sent by the server;
the first playing module is used for playing the video data on a playing interface;
the second receiving module is used for receiving the object-related information sent by the server; wherein the object-related information is obtained by the server through matching based on at least one user characteristic of a second user end;
and the first output module is used for outputting the object related information in the playing interface.
In an eighth aspect, an embodiment of the present application provides an information display apparatus, including:
the third receiving module is used for receiving the video data sent by the server;
the second playing module is used for outputting the video data on a playing interface;
the fourth receiving module is used for receiving the prompt identifier sent by the server; wherein the prompt identification is generated by the server based on a target part of an associated object related to the video data; a target portion of the associated object is determined based on the video data;
and the second output module is used for outputting the prompt identifier in a target display area where the target part is located in the playing interface.
In a ninth aspect, an embodiment of the present application provides a server, including a processing component and a storage component; the storage component is used for storing one or more computer instructions, wherein the one or more computer instructions are used for the processing component to call and execute;
the processing component is to:
the method comprises the steps of obtaining video data sent by a first user end, and sending the video data to a second user end so that the second user end can play the video data on a playing interface;
acquiring at least one user characteristic of the second user end;
determining object-related information matching the at least one user characteristic; wherein the object related information is related data of an associated object to which the video data relates;
and sending the object related information to the second user end so that the second user end can output the object related information in the playing interface.
In a tenth aspect, an embodiment of the present application provides a server, including a processing component and a storage component; the storage component is used for storing one or more computer instructions, wherein the one or more computer instructions are used for the processing component to call and execute;
the processing component is to:
the method comprises the steps of obtaining video data uploaded by a first user side, and sending the video data to a second user side so that the second user side can play the video data on a playing interface;
determining a target part of an associated object related to the video data based on the video data;
generating a prompt identifier corresponding to the target part;
and sending the prompt identifier to the second user end so that the second user end can output the prompt identifier in a target display area where the target part is located in the playing interface.
In an eleventh aspect, an embodiment of the present application provides a terminal device, including a processing component, a display component, and a storage component; the storage component is used for storing one or more computer instructions, wherein the one or more computer instructions are used for the processing component to call and execute;
the processing component is to:
receiving video data sent by a server and playing the video data on a playing interface of the display component;
receiving object-related information sent by the server; wherein the object-related information is obtained by the server through matching based on at least one user characteristic of a second user end;
and outputting the object related information in a playing interface of the display component.
In a twelfth aspect, an embodiment of the present application provides a terminal device, including a processing component, a display component, and a storage component; the storage component is used for storing one or more computer instructions, wherein the one or more computer instructions are used for the processing component to call and execute;
the processing component is to:
receiving video data sent by a server and outputting the video data on a playing interface of the display component;
receiving a prompt identifier sent by the server; the prompt identification is generated by the server side based on a target part of an associated object related to the video data; a target portion of the associated object is determined based on the video data;
and outputting the prompt identification in a target display area where the target part is located in a playing interface of the display component.
Compared with the prior art, the present application can achieve the following technical effects:
in the embodiments of the present application, the video data uploaded by the first user end is sent to the second user end, so that the second user end can play the video data on a playing interface. By acquiring at least one user characteristic of the second user end and determining the object-related information matching that characteristic, the object-related information best suited to the viewing user's needs can be sent to the second user end for the viewing user to watch. This not only improves the stickiness between the video data and viewing users, but also improves the product conversion rate of the associated object.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram illustrating one embodiment of a data processing method according to the present application;
FIG. 2 is a schematic diagram illustrating information related to a display object in a playback interface according to the present application;
FIG. 3 shows a schematic flow chart diagram of yet another embodiment of a data processing method according to the present application;
FIG. 4 is a schematic diagram illustrating a prompt pattern associated with a target display area of a target portion of an object according to the present application;
FIG. 5 is a flow chart diagram illustrating one embodiment of an information display method according to the present application;
FIG. 6 is a schematic flow chart diagram illustrating another embodiment of an information display method according to the present application;
FIG. 7 is a block diagram illustrating one embodiment of a data processing apparatus according to the present application;
FIG. 8 is a schematic diagram illustrating an architecture of yet another embodiment of a data processing apparatus according to the present application;
FIG. 9 is a schematic diagram illustrating another embodiment of an information display device according to the present application;
FIG. 10 is a schematic diagram illustrating a structure of yet another embodiment of an information display device according to the present application;
FIG. 11 is a schematic diagram illustrating an embodiment of a server provided by the present application;
FIG. 12 is a schematic diagram illustrating an architecture of yet another embodiment of a server provided by the present application;
FIG. 13 is a schematic structural diagram illustrating an embodiment of a terminal device provided by the present application;
FIG. 14 is a schematic structural diagram illustrating another embodiment of a terminal device provided by the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Some of the flows described in the specification, claims, and figures of this application include a number of operations that appear in a particular order. It should be clearly understood, however, that these operations may be performed out of the order in which they appear herein, or in parallel. Operation numbers such as 101 and 102 are merely used to distinguish different operations; the numbers themselves do not imply any order of execution. In addition, the flows may include more or fewer operations, and those operations may be executed sequentially or in parallel. Note also that the descriptors "first", "second", and so on are used herein to distinguish different messages, devices, modules, and the like; they represent neither a sequence nor a requirement that the "first" and "second" items be of different types.
To address the technical problem that live content cannot attract viewing users because live-streaming users fail to grasp viewing users' needs during a network live broadcast, the inventors conducted a series of studies and propose the embodiments of the present application. In these embodiments, the video data uploaded by the first user end is sent to the second user end, so that the second user end can play the video data on a playing interface. By acquiring at least one user characteristic of the second user end and determining the object-related information matching that characteristic, the object-related information best suited to the viewing user's needs can be sent to the second user end for the viewing user to watch, which not only improves the stickiness between the video content and viewing users but also improves the product conversion rate of the associated object.
The embodiments of the present application are applicable to, but not limited to, network live-broadcast scenarios, and are also applicable to scenarios such as video playback, recorded video on demand, video chat, and video conferencing; no specific limitation is made here.
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present application.
Fig. 1 is a flowchart of an embodiment of a data processing method provided in an embodiment of the present application, where a technical solution of the embodiment may be executed by a server, and the method may include the following steps:
101: the method comprises the steps of obtaining video data sent by a first user end, and sending the video data to a second user end so that the second user end can play the video data on a playing interface.
The video data is captured by the first user end at the live-broadcast site through devices such as a camera and a microphone, and may include video image information, voice information, sensing data collected by a sensing component of the first user end, configuration data for video or audio special effects set on the first user end, and the like.
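Under one possible reading of the above, a unit of video data could be modeled as a record carrying the stream plus its side data. The field names are assumptions, since the patent does not define a concrete format:

```python
# Illustrative record for one unit of video data as described above.
# Field names are assumptions; the patent does not define a format.
video_data = {
    "image": b"\x00\x01",                 # video image information (encoded frame)
    "audio": b"\x02\x03",                 # voice information
    "sensor": {"gyro": (0.0, 0.1, 0.0)},  # sensing data from the first user end
    "effects": {"filter": "warm", "reverb": False},  # special-effect settings
}
```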
In practical applications, in a video call, a video conference, or a network live-broadcast scenario, the first user end sends the captured video data to the server, the server forwards the video data to the corresponding second user ends in real time, and each second user end plays the video data in its own playing interface for its viewing user.
For video recording or playback scenarios, for example social platforms, multimedia platforms, video websites, and other video-related scenarios, a first user may record video data through a first user end and upload it to the platform's server, and a second user who logs in to the platform through a second user end can then view the video data uploaded by the first user. It should be understood that "first user end" and "second user end" merely distinguish the capture side from the playback side of the video data. In scenarios such as video conferencing and video calls, both ends may simultaneously present playing interfaces for bidirectional capture and playback of video data; in that case, either end can perform the functions of both the first user end and the second user end. No specific limitation is made here.
102: at least one user characteristic of the second user terminal is obtained.
In practical applications, the at least one user characteristic may be data entered by the viewing user when registering or logging in on the second user end, such as the viewing user's user type, gender, and age, and may further include a main category of interest, a transaction channel, and the like. It should be understood that the at least one user characteristic may also include historical data generated by the viewing user while using the second user end, such as historical purchase data and historical viewing data; this can be set according to the actual situation and is not limited here.
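A user-characteristic record of the kind described above might, under these assumptions, look like the following; all field names and values are illustrative and not taken from the patent:

```python
# Illustrative user-characteristic record for one second user end.
# All field names and values are assumptions; the patent fixes no schema.
user_features = {
    "user_type": "personal self-service purchase",
    "gender": "female",
    "age": 28,
    "main_category": "apparel",           # main category of interest
    "transaction_channel": "A platform",  # channel the user trades on
    "history": {
        "purchases": ["coat-2023"],       # historical purchase data
        "views": ["live-room-17"],        # historical viewing data
    },
}
```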
103: determining object-related information matching the at least one user characteristic.
Wherein the object related information is related data of an associated object to which the video data relates.
In practical applications, the related objects related to the video data may be commodities, exhibits and the like introduced or sold in the video, and are not specifically limited herein.
Taking the network live-broadcast process as an example, a live-streaming user (i.e., a first user) hopes to attract viewing users by grasping their needs and preferences, thereby stimulating the viewing users to interact with the live-streaming user. In particular, live-streaming users engaged in self-operated marketing need to introduce commodity features, show commodity details, and the like, so that viewing users understand the commodities more intuitively and comprehensively and can be developed into potential customers.
However, the entry threshold for live-streaming users is currently low, industry norms are not detailed enough, and live-streaming users can broadcast online without unified training, so their level of professionalism varies widely. As a result, some live-streaming users lack professional and systematic commodity explanations during the broadcast and, because they have not pre-assessed each viewing user's needs, cannot grasp those needs or provide personalized information to the viewing users directly.
To solve the above problem, the server may acquire at least one user characteristic of each viewing user entering the live-broadcast room and determine the object-related information matching that characteristic. Since each viewing user's characteristics differ, the matched object-related information also differs. Because at least one user characteristic of a viewing user can represent that user's needs, personalized matching of object-related information is achieved, solving the technical problem that a live-streaming user finds it difficult to pre-judge viewing users' needs.
In practical application, the object related information may be feature information of the associated object, multi-dimensional prediction information surrounding the associated object, fixed information of the associated object, transaction information of the associated object, and the like, and may be set according to actual requirements without specific limitation.
104: and sending the object related information to the second user end so that the second user end can output the object related information in the playing interface.
Optionally, the second user end may display the object related information in any display form in the playing interface; for example, the object related information may be displayed in a bullet screen form, a message frame form, or a dynamic window form, and the display may be set by the watching user according to their actual viewing habits. Of course, the display form of the object related information includes, but is not limited to, the above display forms, and any display form in the prior art can be applied to the present technical solution.
As an implementable embodiment, the at least one user characteristic may comprise a user type; the object related information may include transaction information of at least one transaction channel to which the associated object relates; and the determining of the object related information matched with the at least one user characteristic may comprise:
and determining transaction information of a transaction channel matched with the user type.
In practical application, the object related information is classified in advance based on different user characteristics. For example, the user characteristics may include user types, and the object related information may include channel transaction information; the user types may include a personal self-service purchase type, an A platform channel transaction type, a B platform channel transaction type, a cross-border channel transaction type, an import and export channel transaction type, an entity channel transaction type, and the like.
The transaction information may include transaction amounts of the commodities in different transaction channels within a preset time period, for example, sales of the commodities in the last 3 months; it may also include transaction prices and profit margins of different transaction channels, as well as the number and proportion of sellers selling the commodity and buyers purchasing the commodity; in addition, it may include the price of the commodity, discount information, and the like, which are not limited herein.
When a self-marketing first user explains a commodity, the watching user is usually most concerned about the sales data of the commodity, namely the transaction information. However, because the sales patterns of different sales channels are different, the sales situations and prices are different, and the resulting profit margins are also different. In the process of explaining the commodity, the first user cannot attend to all the watching users and cannot predict the requirement information of each watching user, so when introducing the commodity, some transaction information may be lost or omitted, and potential customers cannot be mined.
Because purchase quantities and sales channels differ, the corresponding profit margins may differ greatly. A self-marketing first user usually sets different prices and discounts for different sales channels and purchase quantities. For example, a higher price is set for a watching user of the personal self-service purchase type, because the purchase quantity is small and the watching user is a casual buyer; watching users whose user types are the A platform channel transaction type, the B platform channel transaction type, the cross-border channel transaction type, the import and export channel transaction type, the entity channel transaction type, and the like may be long-term cooperation clients or potential long-term cooperation clients, whose purchase quantity is large and purchase frequency is high, so lower prices are set for them, and different discount prices are set according to the profit margins of the different transaction channels.
Based on the foregoing, as shown in fig. 2, the server can acquire the user type of the second user end and send the transaction data matched with that user type to the second user end in bullet screen form, so that the watching user can obtain the transaction information in time.
Furthermore, the at least one user characteristic may include, in addition to the user type, a main category, for example. After the server acquires the main category of the second user end, it can first judge whether the associated object related to the video data falls within the main category of the watching user. For example, when the associated object belongs to the clothing category, if the main category of the watching user is also the clothing category, the user type of the watching user is further determined; if the main category of the watching user differs from the category to which the associated object belongs, the transaction information corresponding to the transaction channel with the best sales situation is preferentially matched, to attract the watching user's attention to the commodity and develop the watching user into a potential customer.
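The category-then-type matching described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the category names, user types, and per-channel figures are all hypothetical assumptions.

```python
# Hypothetical sketch of the main-category / user-type matching described
# above. Channel names, user types, and figures are illustrative only.

TRANSACTION_INFO = {
    # per-channel transaction sub-information for one associated object
    "personal":   {"price": 99.0, "sales_3m": 1200},
    "platform_a": {"price": 79.0, "sales_3m": 8600},
    "platform_b": {"price": 82.0, "sales_3m": 4300},
}

def match_transaction_info(object_category, main_category, user_type):
    """Return the transaction sub-information matched to one watching user."""
    if main_category != object_category:
        # Main category differs: prefer the channel with the best sales
        # situation, to attract the watching user as a potential customer.
        best = max(TRANSACTION_INFO, key=lambda ch: TRANSACTION_INFO[ch]["sales_3m"])
        return TRANSACTION_INFO[best]
    # Main category matches: match by the watching user's own user type.
    return TRANSACTION_INFO.get(user_type, TRANSACTION_INFO["personal"])

print(match_transaction_info("clothing", "clothing", "platform_a"))  # matched by user type
print(match_transaction_info("clothing", "toys", "personal"))        # best-selling channel
```

The same shape generalizes to any user characteristic: an early coarse filter (category) selects a matching strategy, and the fine characteristic (user type) selects the sub-information.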
In the embodiment of the application, object related information corresponding to at least one user characteristic of a watching user of a second user end is matched during video playing. In practical application, the at least one user characteristic can represent the user requirements of the watching user, so object related information meeting those requirements is sent to the second user end according to the different user requirements, which enhances user stickiness, further improves the watching experience of the watching user, develops the watching user into a potential customer, and realizes systematic product conversion.
For a live broadcast scene, the server may acquire at least one user characteristic of a watching user when it monitors that the watching user enters the live broadcast room to watch the video data, and match the corresponding object related information for that watching user based on the at least one user characteristic. It is understood, however, that in order to make the video content more specialized and systematic, the display of different object related information may be triggered according to the progress of the video content.
Optionally, in some embodiments, the determining the object related information matching the at least one user characteristic may include:
and when a first preset event occurs in the video data, determining object related information matched with the at least one user characteristic.
The first preset event may be triggered by the first user, or triggered automatically based on changes in the video data during video playing.
In practical application, the first preset event may be the video data collected at the first user end; or biological or physiological characteristic information generated by the first user during collection of the video data, such as sound, heat radiation, body movement, or expression; or one or more combinations of sensed data obtained by detecting photoelectric, sound wave, or magnetic field information generated by other electronic devices (for example, a remote control device) or by sensing devices (for example, a laser sensor or a touch pressure sensor) arranged by the first user, in combination with the biological or physiological characteristic information output by the first user. It may be set specifically according to actual requirements.
Before determining the object-related information matching with the at least one user characteristic when the first preset event occurs in the video data, the method may further include:
and establishing an incidence relation between the first preset event and the object related information.
In practical application, the server may pre-establish an association relationship between the first preset event and the object related information according to a requirement of the first user. For example, the association relationship between the preset keyword (word) and the object related information may be established in advance, the server performs voice recognition on the voice information in the video data, and when the preset keyword (word) is recognized, the object related information associated with the preset keyword (word) is determined. Of course, it may also be that an association relationship between the associated object and the object related information is established in advance, the server monitors whether the associated object appears in the video data by performing image recognition on the video data, and determines the object related information associated with the associated object when the associated object appears. Optionally, an association relationship between the predetermined sensing data and the object related information may be pre-established, and the associated object related information may be determined based on the collected predetermined sensing data.
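The pre-established keyword association described above might look like the following sketch. The keywords and info keys are illustrative assumptions; in a real deployment the scanned text would come from the server's speech recognition of the voice information.

```python
# Hypothetical sketch: pre-establish associations between preset keywords
# (words) and object related information, then scan recognized speech text
# for those keywords. Keywords and info keys are illustrative assumptions.

ASSOCIATIONS = {
    "sales volume": "transaction_info",
    "material": "material_info",
}

def register_association(keyword, info_key):
    """Pre-establish an association between a preset keyword and object info."""
    ASSOCIATIONS[keyword.lower()] = info_key

def detect_preset_event(recognized_text):
    """Return the info keys whose preset keywords occur in the recognized text."""
    text = recognized_text.lower()
    return [info for kw, info in ASSOCIATIONS.items() if kw in text]

register_association("upper body effect", "style_info")
print(detect_preset_event("The sales volume is good this month"))
```

The same registry pattern works for the image-recognition and sensing-data variants below; only the recognizer producing the lookup key changes.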
In practical application, the object related information may be generated in advance by the first user based on the video content and sent to the server for storage, and the server establishes an association relationship between the object related information and the first preset event. Further, in order to reduce the workload of the first user, the server may determine an associated object related to the video data in advance, for example, collect information such as an object identifier or an object information code of the associated object, and obtain object related information of the associated object from at least one trading platform or other authorized cooperation platforms cooperating with the video recording and playing platform based on the object identifier or the object information code.
Before determining the object-related information matching with the at least one user characteristic when the first preset event occurs in the video data, the method may further include:
classifying the object-related information according to at least one user characteristic to obtain at least one object-related sub-information;
the determining of the object-related information matching the at least one user characteristic may include:
determining object-related sub-information matching the at least one user characteristic.
In practical applications, taking the object-related information as the transaction information as an example, the classifying the object-related information according to at least one user characteristic to obtain at least one object-related sub-information may include:
and classifying the transaction information based on the user type to obtain transaction sub-information corresponding to at least one transaction channel.
When the at least one user characteristic acquired by the server is the user type, the object related information corresponding to the user type can be determined as the transaction information; and when the user type is the A platform channel transaction type, the transaction sub-information corresponding to the A platform channel is obtained by matching.
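The classification-then-matching step can be sketched as below, assuming flat transaction records grouped by channel; the record fields and channel names are hypothetical.

```python
# Hypothetical sketch: classify flat transaction records into per-channel
# transaction sub-information, so that a watching user's user type can be
# matched directly to one channel. Field names are illustrative.
from collections import defaultdict

def classify_by_channel(transactions):
    """Group transaction records into sub-information keyed by channel."""
    sub_info = defaultdict(list)
    for record in transactions:
        sub_info[record["channel"]].append(record)
    return dict(sub_info)

transactions = [
    {"channel": "platform_a", "price": 79.0},
    {"channel": "platform_b", "price": 82.0},
    {"channel": "platform_a", "price": 75.0},
]

sub = classify_by_channel(transactions)
# A watching user whose user type is the A platform channel transaction
# type is matched to the A platform sub-information.
print(sub["platform_a"])
```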
As an implementable embodiment, the video data may include voice information; and the determining, when a first preset event occurs in the video data, of the object related information matched with the at least one user characteristic may include:
recognizing first preset voice information in the voice information;
determining object related information associated with the first predetermined speech information;
and determining object-related sub-information matched with the at least one user characteristic in the object-related information.
For example, if the first user says "the sales volume is good" during video recording and playing, the object related information associated with "sales volume" is determined to be the transaction information by recognizing the voice information. Further, at least one user characteristic of each second user end is determined, and the corresponding transaction sub-information is obtained by matching based on the user type corresponding to each second user end.
As an implementation manner, the video data may include sensing data collected by a sensing component based on a preset gesture output by the first user at the first user end; and the determining, when a first preset event occurs in the video data, of the object related information matched with the at least one user characteristic may include:
identifying first predetermined sensing data in the sensing data;
determining object-related information associated with the first predetermined sensory data;
and determining object-related sub-information matched with the at least one user characteristic in the object-related information.
In practical application, the first user can set the association relationships between the sensing data corresponding to preset gestures and different pieces of object related information according to the first user's own habits. The first predetermined sensing data is not limited to sensing data collected by the sensing component based on a preset gesture of the first user; it may also be sensing data collected from the first user's facial expressions, head movements, or touch and press actions of the hand or foot on the sensing component. The sensing component can be arranged at any position the first user can reach during video recording and playing, or at any position from which user gestures, facial expressions, and head movements can be collected; of course, it can also be arranged in a terminal device such as a mobile phone or computer. The sensing component can be connected directly to the server, or connected to the first user end, which forwards the first predetermined sensing data collected by the sensing component to the server. This can be set according to actual requirements.
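A habit-configured gesture mapping of the kind described above might look like this sketch; the gesture identifiers and the pressure threshold are assumptions, not values from the patent.

```python
# Hypothetical sketch: map predetermined sensing data (here, a gesture id
# plus a touch-pressure reading) to object related information. Gesture ids
# and the pressure threshold are illustrative assumptions.

GESTURE_ASSOCIATIONS = {
    "point_waist": "waist_detail_info",
    "tap_twice": "transaction_info",
}
PRESSURE_THRESHOLD = 0.5  # ignore light accidental touches below this level

def resolve_sensed_event(gesture_id, pressure):
    """Return the associated info key, or None if no preset event occurred."""
    if pressure < PRESSURE_THRESHOLD:
        return None  # not a deliberate first preset event
    return GESTURE_ASSOCIATIONS.get(gesture_id)

print(resolve_sensed_event("point_waist", 0.9))
print(resolve_sensed_event("tap_twice", 0.2))
```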
As an implementation manner, the determining, when the first preset event occurs in the video data, of the object related information matched with the at least one user characteristic may include:
identifying an associated object in the video data;
determining object related information associated with the associated object;
and determining object-related sub-information matched with the at least one user characteristic in the object-related information.
After the first user end sends the video data to the server, the server may obtain the associated object preset by the first user through image recognition. Of course, the object related information of the associated object may also be determined by recognizing only a partial area of the associated object or an object identifier of the associated object; for example, the associated object may be obtained based on feature recognition of a partial area of the associated object, or determined through information such as a two-dimensional code or barcode of the associated object recognized in the video data, which is not specifically limited herein.
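Once a code or identifier has been decoded from a video frame (by whatever recognizer is used), resolving it to the associated object reduces to a catalog lookup, as sketched below; the identifiers and catalog entries are hypothetical.

```python
# Hypothetical sketch: resolve an object identifier decoded from a video
# frame (e.g. from a barcode or two-dimensional code) to the associated
# object and its related-information key. All entries are illustrative.

OBJECT_CATALOG = {
    "6901234567892": {"object": "down jacket", "info": "clothing_detail_info"},
    "6909876543210": {"object": "ceramic vase", "info": "exhibit_detail_info"},
}

def resolve_decoded_identifier(identifier):
    """Return (associated object, related-info key), or None if unknown."""
    entry = OBJECT_CATALOG.get(identifier)
    if entry is None:
        return None
    return entry["object"], entry["info"]

print(resolve_decoded_identifier("6901234567892"))
```

Keeping the catalog server-side also matches the earlier note that the server may fetch object related information from cooperating trading platforms using the object identifier.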
The foregoing exemplary embodiments show manners of triggering, by voice information, image information, and other sensing information such as the first user's gestures and postures in the video data, the matching of object related information based on at least one user characteristic. The first preset event includes, but is not limited to, one or more combinations of the foregoing manners, may be set based on actual requirements, and is not limited herein.
The sending the object related information to the second user side so that the second user side outputs the object related information in the playing interface may include:
and sending the object related sub-information matched with the at least one user characteristic in the object related information to the second user end so that the second user end can output the object related sub-information in a playing interface.
After the sending the object related information to the second user end, the method may further include:
and controlling the second user end to display the object related information in the playing interface according to a preset display form.
Optionally, the object related information is not limited to text information; it may also be picture information such as a three-dimensional space diagram or a dynamic diagram, or information in the form of video, voice, or an address link, without limitation herein. In practical application, the server can control the client to display it in any form, such as a bullet screen, small window, pop-up window, or dynamic diagram, in a preset area of the playing interface. It should be noted that when the object related information is displayed at the second user end in bullet screen form, its display area and display form need to be distinguished to some extent from the ordinary bullet screens in the video data, such as the user message bullet screens of second user ends, so that the watching user does not have difficulty reading it because it cannot be distinguished.
It can be understood that the server may control the second user end to cyclically play the object related information in the playing interface from bottom to top or from right to left in sequence, or to move it among a plurality of random or preset positions of the playing interface, staying for a preset time before disappearing, so that when the amount of object related information is large, the watching experience of the watching user is improved by playing it cyclically several times. Meanwhile, to ensure that the watching user can finish reading the object related information in the effective time and see more information in a shorter time, the server needs to set a suitable playing speed matched to the data size of the object related information; for example, the display duration and display speed in the playing interface can be set according to actual requirements.
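One simple way to derive a display duration from the data size, as the paragraph suggests, is sketched below; the reading rate and the minimum/maximum bounds are assumed values, not part of the patent.

```python
# Hypothetical sketch: choose a bullet-screen display duration matched to
# the data size of the object related information. The reading rate and
# the clamping bounds are illustrative assumptions.

CHARS_PER_SECOND = 8.0          # assumed comfortable reading speed
MIN_SECONDS, MAX_SECONDS = 3.0, 15.0

def display_duration(info_text):
    """Seconds to keep the object related information on screen."""
    raw = len(info_text) / CHARS_PER_SECOND
    return max(MIN_SECONDS, min(MAX_SECONDS, raw))

print(display_duration("Price: 79.0"))   # short text clamps to the minimum
print(display_duration("x" * 200))       # long text clamps to the maximum
```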
In practical applications, in order to further improve the viewing experience of the viewing user, the viewing user may also trigger the server to send the object related information to the second user according to a requirement of the viewing user, and as an implementation manner, before determining the object related information matched with the at least one user characteristic, the method may further include:
receiving a display request aiming at the object related information sent by the second user terminal; the display request is generated based on preset trigger operation of a watching user of the second user side.
It is understood that the viewing user may also control the display form of the object related information through the second user terminal. The viewing user may generate a display request for the object related information by triggering a display area of an associated object in the video data, such as a single click, a double click, or a touch down.
Alternatively, the server may send the object related information to the second user end in list form, and the watching user may choose whether to trigger the list to obtain more detailed or richer object related information. In addition, a display control for the object related information can be set at the second user end; the watching user generates a display request for the object related information by opening the display control, so that the object related information is displayed in the playing interface, and, after reading it or obtaining the effective information, controls the second user end to stop playing the object related information by closing the display control. Furthermore, the watching user can set the display form or display speed of the object related information through the display control, to adapt to the watching requirements of different watching users.
In a network live broadcast scene, some watching users may enter the live broadcast room after the live broadcast has been running for a period of time, so they may miss part of the effective information, or may forget part of it because the live broadcast lasts too long. A watching user can then ask questions via messages or bullet screens. Although the first user, namely the live broadcast user, can answer some questions after seeing the bullet screen, when the volume of bullet screens or messages is large, the live broadcast user may fail to process some questions in time or may ignore them, so some watching users' needs are not captured in time; the live broadcast user may also repeatedly answer the same question, wasting time and increasing workload. To improve the watching users' experience, capture each watching user's requirements in time, and reduce the live broadcast user's workload, the server obtains the actual requirements of each watching user through the bullet screen information sent by the second user end. Optionally, the method may further include:
receiving bullet screen information sent by the second user terminal and identifying preset content in the bullet screen information;
the object-related information comprises at least one object-related sub-information; the determining of the object related information matched with the at least one user feature based on the preset content may include:
determining object related information associated with the preset content;
and determining object-related sub-information matched with the at least one user characteristic in the object-related information.
In practical application, the bullet screen information may be text information, the first user may preset keywords (words) as preset content according to video content, and by setting an association relationship between the preset content and the object related information, when the server acquires the bullet screen information sent by the second user and recognizes that the preset content exists in the bullet screen information, the object related information associated with the preset content may be determined. In order to realize the personalized matching of the object related information, the object related sub-information matched with the watching user in the object related information is further matched and obtained based on at least one user characteristic corresponding to the second user side sending the bullet screen information.
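End to end, the bullet-screen path just described could be sketched like this; the preset keywords, channels, and payload strings are hypothetical illustrations.

```python
# Hypothetical sketch: recognize preset content in bullet screen text, then
# narrow the associated object related information to the sub-information
# matched to the sending user's user type. All names are illustrative.

PRESET_CONTENT = {"sales volume": "transaction_info"}
SUB_INFO = {
    "transaction_info": {
        "platform_a": "A-channel sales: 8600 in 3 months",
        "personal": "Retail price: 99.0",
    }
}

def handle_bullet_screen(text, user_type):
    """Return personalized sub-information, or None if no preset content."""
    lowered = text.lower()
    for keyword, info_key in PRESET_CONTENT.items():
        if keyword in lowered:
            channels = SUB_INFO[info_key]
            # personalized matching: pick the sub-information for this user type
            return channels.get(user_type, channels["personal"])
    return None  # no preset content recognized in the bullet screen

print(handle_bullet_screen("What is the sales volume?", "platform_a"))
```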
In the embodiment of the present application, multiple implementation forms for triggering the obtaining of object related information matched on the basis of at least one user characteristic are provided: the obtaining may be triggered by a first preset event generated by the first user, or by a display request generated by the second user end based on a preset trigger operation of the watching user. This ensures that the watching requirements of different watching users are adapted to more flexibly and conveniently, further improving user stickiness and the watching experience of the watching users.
Fig. 3 is a flowchart of an embodiment of a data processing method provided in an embodiment of the present application, where a technical solution of the embodiment may be executed by a server, and the method may include the following steps:
301: the method comprises the steps of obtaining video data uploaded by a first user side, and sending the video data to a second user side so that the second user side can play the video data on a playing interface.
302: based on the video data, a target region of an associated object to which the video data relates is determined.
In the live broadcast process, the first user may not have made systematic and sufficient preparation in advance, and may omit some important content for various reasons during video recording and explanation; or some watching users may fail to obtain effective information in time because of problems such as an overly fast speaking speed or a heavy accent, which affects the watching experience of the watching users.
303: and generating a prompt identifier corresponding to the target part.
In order to improve the watching experience and further improve user stickiness, when the first user explains any associated object, the target part of the associated object is identified and a prompt identifier for the target part is generated, so that dynamic positioning based on the video content is realized, and the watching user can better understand the live content based on the video data and the prompt information.
For example, when a first user in new retail introduces a clothing commodity, the first user explains the different parts of the commodity one by one, such as the designs of the waist, wrist, shoulder, and neckline, and the fabric, warmth retention, material, and upper-body effect of the clothing. However, a watching user who enters late, misses part of the explanation, or does not hear the first user's explanation clearly may fail to obtain the effective information. Prompting the watching user in real time through the prompt identifier, based on the playing progress of the video content, can therefore satisfy the watching user's need to obtain effective information in time, improve the watching experience, and enhance user stickiness.
304: and sending the prompt identifier to the second user end so that the second user end can output the prompt identifier in a target display area where the target part is located in the playing interface.
In practical application, in order to prompt the watching user effectively and intuitively, the prompt identifier can be output and displayed in the target display area where the target part of the associated object is located. Of course, in the embodiment of the present application, the prompt identifier is not limited to being displayed in the target display area where the target part is located; it may also be displayed at any position in the live interface, with the target part connected to the prompt identifier through a connector or indicator, so as to clearly and intuitively prompt the watching user about the target part of the associated object corresponding to the prompt identifier.
Optionally, in some embodiments, the generating a prompt identifier corresponding to the target site may include:
determining a target display area of the target part in the playing interface;
generating a prompt pattern with the same size as the target display area;
the sending the prompt identifier to the second user end so that the second user end can output the prompt identifier on the playing interface includes:
and sending the prompt pattern to the second user end so that the second user end can output the prompt pattern in a target display area where the target part is located in the playing interface.
As shown in fig. 4, the associated object is a garment tried on by the live broadcast user, and the target part is the waist region of the garment. The prompt pattern is a shadow pattern T generated based on the size of the detected target display area of the waist region in the playing interface, and the shadow pattern T is output in that target display area. Meanwhile, display controls for clothing dimension information can further be provided; when the watching user triggers the display control corresponding to the material dimension, material bullet screen information about the clothing is displayed in the playing interface.
It should be understood that the prompt pattern described in the embodiment of the present application includes, but is not limited to, the shadow pattern shown in fig. 4, and may be any pattern and shape, and may be set according to actual requirements.
In practical applications, the prompt identifier may be not only a prompt pattern, but also an animation, a text message, or other forms of prompt identifiers, and is not limited herein.
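Generating a prompt pattern with the same size as the target display area amounts to copying the detected bounding box into the pattern description, as in the sketch below; the coordinates and the pattern style field are hypothetical.

```python
# Hypothetical sketch: generate a prompt pattern whose size equals the
# detected target display area (a bounding box in playing-interface
# coordinates). Box values and the style field are illustrative.

def make_prompt_pattern(target_box, style="shadow"):
    """target_box = (x, y, width, height) of the target display area."""
    x, y, w, h = target_box
    return {
        "style": style,   # e.g. a shadow pattern like T in fig. 4
        "x": x, "y": y,
        "width": w,       # same size as the target display area
        "height": h,
    }

waist_box = (120, 260, 180, 90)   # detected waist region, assumed values
pattern = make_prompt_pattern(waist_box)
print(pattern["width"], pattern["height"])
```

The pattern description, rather than rendered pixels, is what the server would send to the second user end, leaving the actual drawing to the playing interface.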
As an optional implementation, the determining, based on the video data, a target region of an associated object to which the video data relates may include:
and when a second preset event occurs in the video data, determining a target part related to the second preset event in the related object related to the video data.
In practical applications, the second preset event may be the video data collected at the first user end; or biological or physiological characteristic information output by the first user, such as sound, heat radiation, limb movement, or expression; or one or more combinations of sensing data obtained by detecting photoelectric, sound wave, or magnetic field information generated by other electronic devices (e.g., a remote control device) or by sensing devices (e.g., a laser sensor or a touch pressure sensor) arranged by the first user, in combination with the biological or physiological characteristic information output by the first user. It may be set specifically according to actual requirements, and is not described again herein.
In practical application, the server establishes in advance an association relationship between the second preset event and a target part of the associated object. When the first user triggers the second preset event, the server can determine, according to the association relationship, the target part corresponding to the second preset event triggered by the first user. For example, when the first user says "waist", the server determines the waist of the associated object in the video data as the target part through voice recognition. Of course, the server may also determine the target part based on a gesture of the first user; for example, when the first user points to the first user's own waist or to the waist of the associated object, the position pointed to is determined as the target part of the associated object. The server may also determine a preset target part of the associated object merely by recognizing that the associated object appears in the video data. When there are a plurality of preset target parts, the server can simultaneously generate prompt identifiers respectively corresponding to the plurality of target parts, and output the prompt identifiers in the target display area corresponding to each target part.
In order to further improve the viewing experience of the viewing user, after determining a target portion associated with a second preset event in an associated object related to the video data when the second preset event occurs in the video data, the method may further include:
determining object-related information matching the target site;
and sending the object related information to the second user end so that the second user end can output the object related information in the playing interface.
In practical applications, the target part may be a part of the associated object, or may be the associated object itself; that is, the entire associated object is regarded as the target part. Association relationships of object related information are established for the different target parts. For a clothing commodity, labels of multiple dimensions may include style, upper-body effect, process, material, performance, origin, and the like, so the association relationship of the object related information of each target part can be established for the multiple dimensions of the associated object. Meanwhile, the object related information may further include prediction information for multiple dimensions of the associated object, such as industry evaluation information, professional assessment information, and buyer evaluation information, and may further include fixed information of the associated object provided by the manufacturer or place of origin, such as official commodity detail information, version number, and product series, which are not specifically limited herein.
After sending the prompt identifier to the second user end, the method may further include:
receiving a display request, sent by the second user end, for the object related information associated with the target part; the display request is generated based on a preset trigger operation of a viewing user at the second user end;
determining object-related information associated with the target site;
and sending the object related information to the second user end so that the second user end can output the object related information in a playing interface.
As an optional implementation manner, after the target part of the associated object is determined, the object related information determined to match the target part may be sent directly to the second user end for display; the display may take the form of object related information described in the embodiment of Fig. 1, which is not repeated herein.
Optionally, the viewing user at the second user end may trigger the server to display the object related information of the target part. Specifically, an object related information display instruction may be generated when the viewing user triggers the prompt identifier output in the playing interface; alternatively, different display controls may be set in the playing interface of the second user end for different categories of object related information. For example, the object related information may be classified into categories such as detail information, evaluation information, and transaction information. When the viewing user triggers the display control corresponding to any category, a display request for the object related information of that category matching the target part is generated and sent to the server.
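The categorized display controls described above might generate a display request along these lines; the category names and request fields are assumptions for illustration only:

```python
# Hypothetical categories of object-related information that a viewing user
# can request via display controls in the playing interface.
CATEGORIES = {"detail", "evaluation", "transaction"}

def make_display_request(user_id, target_part, category):
    """Build a display request for one category of object-related info."""
    if category not in CATEGORIES:
        raise ValueError("unknown category: %s" % category)
    return {"user": user_id, "part": target_part, "category": category}
```

The second user end would serialize such a request and send it to the server, which responds with the matching slice of object related information.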
In the embodiment of the application, the target part of the associated object in the video data is identified, and the prompt information corresponding to the target part is generated, so that the viewing user can be prompted dynamically in real time based on the video content of the first user; meanwhile, by displaying the object related information corresponding to the target part, the viewing user can obtain more information in a timely and effective manner. This not only further improves the viewing experience of viewing users, but also helps increase user stickiness, develop viewing users into potential customers, and promote conversion of the associated object into products.
Fig. 5 is a flowchart of an embodiment of an information display method provided in an embodiment of the present application, where a technical solution of this embodiment may be executed by a user side, and the method may include the following steps:
501: and receiving the video data sent by the server.
502: and playing the video data on a playing interface.
503: and receiving the object related information sent by the server.
The object related information is obtained by the server through matching based on at least one user characteristic of the second user end.
504: and outputting the object related information in the playing interface.
As an implementation manner, before receiving the object related information sent by the server, the method may further include:
generating a display request aiming at the object related information based on a preset trigger operation of the watching user;
and sending the display request to the server.
The display interface comprises at least one preset display control; the generating a display request for object related information based on a preset trigger operation of the viewing user may include:
detecting a preset trigger operation of the watching user for any preset display control;
and generating a display request aiming at the object related information associated with any preset display control based on the preset trigger operation.
As an implementation manner, before receiving the object related information sent by the server, the method may further include:
and acquiring bullet screen information input by the viewing user and sending the bullet screen information to the server, so that the server can identify preset content in the bullet screen information and determine object related information matched with at least one user characteristic based on the preset content.
In practical applications, the object related information includes at least one object related sub-information, and the receiving the object related information sent by the server may include:
receiving object related sub-information sent by the server; and the object related sub-information is obtained by the server side from the object related information based on at least one user characteristic matching of a second user side.
The outputting the object related information in the play interface may include:
and outputting the object related sub information in the playing interface.
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
In the embodiment of the application, during video playing, object related information corresponding to at least one user characteristic of the viewing user at the second user end is matched based on that characteristic. In practical application, the at least one user characteristic can represent the user requirements of the viewing user, so object related information meeting those requirements is sent to the second user end according to the different requirements. This enhances user stickiness, further improves the viewing experience of the viewing user, helps develop the viewing user into a potential customer, and promotes systematic product conversion.
Fig. 6 is a flowchart of an embodiment of an information display method provided in an embodiment of the present application, where a technical solution of the embodiment may be executed by a user side, and the method may include the following steps:
601: and receiving the video data sent by the server.
602: and outputting the video data on a playing interface.
603: and receiving a prompt identifier sent by the server.
Wherein the prompt identification is generated by the server based on a target part of an associated object related to the video data; a target portion of the associated object is determined based on the video data.
604: and outputting the prompt identification in a target display area where the target part is located in the playing interface.
The outputting the prompt identifier in the target display area where the target portion is located in the playing interface may include:
determining a target display area where the target part is located in the playing interface;
and outputting the prompt identification in the target display area.
The prompt mark comprises a prompt pattern; the outputting the prompt identifier in the target display area where the target portion is located in the playing interface may include:
outputting the prompt pattern in the target display area where the target part is located in the playing interface; wherein the prompt pattern is generated by the server based on the size of the target display area.
As an implementable embodiment, the method may further comprise:
receiving object related information which is sent by the server and matched with the target part;
and outputting the object related information in the playing interface.
As an implementation manner, before receiving the information related to the object matching the target region sent by the server, the method may further include:
generating an object-related information display request associated with the target part based on a preset trigger operation of the watching user;
and sending the display request to the server.
In practical applications, the object related information includes at least one object related sub-information, and the receiving the object related information matched with the target location and sent by the server may include:
receiving object related sub-information which is sent by the server and matched with the target part; the object-related sub-information is obtained by the server from the object-related information based on the target part matching.
The outputting the object related information in the play interface may include:
and outputting the object related sub information in the playing interface.
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
In the embodiment of the application, the target part of the associated object in the video data is identified, and the prompt information corresponding to the target part is generated, so that the viewing user is prompted dynamically in real time based on the video content of the first user; meanwhile, by displaying the object related information corresponding to the target part, the viewing user can obtain more information in a timely and effective manner. This not only further improves the viewing experience of viewing users, but also helps increase user stickiness, develop viewing users into potential customers, and promote conversion of the associated object into products.
Fig. 7 is a schematic structural diagram of an embodiment of a data processing apparatus provided in an embodiment of the present application, where a technical solution of the embodiment may be executed by a server, and the apparatus may include:
a first obtaining module 701, configured to obtain video data uploaded by a first user.
A first sending module 702, configured to send the video data to a second user end, so that the second user end plays the video data on a playing interface.
A second obtaining module 703 is configured to obtain at least one user characteristic of the second user end.
A first determining module 704 for determining object related information matching the at least one user characteristic.
Wherein the object related information is related data of an associated object to which the video data relates.
A second sending module 705, configured to send the object related information to the second user end, so that the second user end outputs the object related information in the playing interface.
As an implementable embodiment, the at least one user characteristic may comprise a user type; the object related information may include transaction information of at least one transaction channel to which the associated object relates; the first determining module 704 may specifically be configured to:
and determining transaction information of a transaction channel matched with the user type.
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
In the embodiment of the application, during video playing, object related information corresponding to at least one user characteristic of the viewing user at the second user end is matched based on that characteristic. In practical application, the at least one user characteristic can represent the user requirements of the viewing user, so object related information meeting those requirements is sent to the second user end according to the different requirements. This enhances user stickiness, further improves the viewing experience of the viewing user, helps develop the viewing user into a potential customer, and promotes systematic product conversion.
Optionally, in some embodiments, the first determining module 704 may specifically be configured to:
and when a first preset event occurs in the video data, determining object related information matched with the at least one user characteristic.
Before determining the object-related information matched with the at least one user feature when the first preset event occurs in the video data, the method may further include:
and establishing an association relationship between the first preset event and the object related information.
Before determining the object-related information matched with the at least one user feature when the first preset event occurs in the video data, the method may further include:
classifying the object-related information according to at least one user characteristic to obtain at least one object-related sub-information;
the determining of the object-related information matching the at least one user characteristic may include:
determining object-related sub-information matching the at least one user characteristic.
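The classification of object related information into sub-information, and the subsequent matching against a user characteristic, can be sketched as below; the `audience` key is a hypothetical stand-in for a user characteristic and is not a term from this application:

```python
# Sketch: classify object-related information items into sub-information
# groups keyed by a user characteristic, then match a viewing user.
def classify_by_feature(object_info_items):
    """Group information items by their (assumed) `audience` feature."""
    sub_info = {}
    for item in object_info_items:
        sub_info.setdefault(item["audience"], []).append(item["text"])
    return sub_info

def match_sub_info(sub_info, user_feature):
    """Return the sub-information matching one user characteristic."""
    return sub_info.get(user_feature, [])
```

Classification can be performed once ahead of time, so that when the first preset event occurs only the cheap lookup in `match_sub_info` runs per viewing user.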
As an implementable embodiment, the video data may include voice information; when the first preset event occurs in the video data, the first determining module 704 may specifically be configured to:
recognizing first preset voice information in the voice information;
determining object related information associated with the first predetermined speech information;
and determining object-related sub-information matched with the at least one user characteristic in the object-related information.
As an implementation manner, the video data includes sensing data collected by a sensing component based on a preset gesture output by the first user at the first user end; when the first preset event occurs in the video data, the first determining module 704 may specifically be configured to:
identifying first predetermined sensing data in the sensing data;
determining object-related information associated with the first predetermined sensory data;
and determining object-related sub-information matched with the at least one user characteristic in the object-related information.
As an implementation manner, when the first preset event occurs in the video data, the first determining module 704 may specifically be configured to:
identifying an associated object in the video data;
determining object related information associated with the associated object;
and determining object-related sub-information matched with the at least one user characteristic in the object-related information.
As an implementation manner, the second sending module 705 may specifically be configured to:
and sending the object related sub-information matched with the at least one user characteristic in the object related information to the second user end so that the second user end can output the object related sub-information in a playing interface.
After sending the object related information to the second user end, the method may further include:
and controlling the second user end to display the object related information in the playing interface according to a preset display form.
As an implementation manner, the apparatus may further include, before the first determining module 704 determines the object related information:
a first display request receiving module, configured to receive a display request for the object-related information sent by the second user end; the display request is generated based on preset trigger operation of a watching user of the second user side.
Optionally, the apparatus may further include:
and the bullet screen information receiving module is used for receiving the bullet screen information sent by the second user terminal.
And the preset content identification module is used for identifying the preset content in the bullet screen information.
The object related information comprises at least one piece of object related sub-information; the determining, based on the preset content, of the object related information matched with the at least one user characteristic may specifically include:
determining object related information associated with the preset content;
and determining object-related sub-information matched with the at least one user characteristic in the object-related information.
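The recognition of preset content in bullet screen information might, in its simplest form, be a keyword lookup like the following; the keyword table and dimension names are illustrative assumptions, not part of this application:

```python
# Hypothetical table mapping preset bullet-screen keywords to the dimension
# of object-related information they are associated with.
PRESET_KEYWORDS = {
    "price": "transaction",
    "fabric": "material",
}

def match_barrage(text):
    """Return the info dimensions whose preset keyword appears in the text."""
    lowered = text.lower()
    return [dim for kw, dim in PRESET_KEYWORDS.items() if kw in lowered]
```

In practice the preset content recognition could be more sophisticated (e.g. intent classification), but the output is the same: a key used to select the matching object related information for the viewing user.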
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
In the embodiment of the present application, multiple implementation forms are provided for triggering the obtaining of object related information matched with at least one user characteristic: the obtaining may be triggered by a first preset event generated by the first user, or by a display request generated by the second user end based on a preset trigger operation of the viewing user. This more flexibly and conveniently adapts to the viewing requirements of different viewing users, and further improves user stickiness and the viewing experience of the viewing user.
Fig. 8 is a schematic structural diagram of an embodiment of a data processing apparatus provided in an embodiment of the present application, where a technical solution of the embodiment may be executed by a server, and the apparatus may include:
a video data obtaining module 801, configured to obtain video data uploaded by the first user.
A video data sending module 802, configured to send the video data to a second user end, so that the second user end plays the video data on a playing interface.
A second determining module 803, configured to determine, based on the video data, a target region of an associated object to which the video data relates.
A prompt identifier generating module 804, configured to generate a prompt identifier corresponding to the target portion.
A prompt identifier sending module 805, configured to send the prompt identifier to the second user end, so that the second user end outputs the prompt identifier in a target display area where the target location is located in the playing interface.
Optionally, in some embodiments, the prompt identifier generating module 804 may be specifically configured to:
determining a target display area of the target part in the playing interface;
generating a prompt pattern with the same size as the target display area;
the prompt identifier sending module 805 may be specifically configured to:
and sending the prompt pattern to the second user end so that the second user end can output the prompt pattern in a target display area where the target part is located in the playing interface.
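Generating a prompt pattern with the same size as the target display area can be sketched as below, assuming the target display area is given as a bounding box of the target part in the playing interface; all names are illustrative:

```python
# Sketch: build a prompt pattern whose size matches the target display area.
def prompt_pattern_for(bbox):
    """bbox = (x, y, width, height) of the target part in the play interface.

    Returns a drawing instruction the second user end can render over the
    video; the `shape` value is a placeholder for any highlight style.
    """
    x, y, w, h = bbox
    return {"shape": "highlight_box", "x": x, "y": y, "width": w, "height": h}
```

Because the pattern is generated server-side from the detected bounding box, the second user end only needs to render it at the given coordinates, without re-running any detection.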
As an optional implementation manner, the second determining module 803 may specifically be configured to:
and when a second preset event occurs in the video data, determining a target part related to the second preset event in the related object related to the video data.
In order to further improve the viewing experience of the viewing user, the apparatus may further include:
the matching module is used for determining the relevant information of the object matched with the target part;
and the information sending module is used for sending the object related information to the second user end so that the second user end can output the object related information in the playing interface.
After the prompt identifier sending module 805, the apparatus may further include:
a second display request receiving module, configured to receive a display request for the object-related information associated with the target location, where the display request is sent by the second user end; the display request is generated based on a preset trigger operation of a watching user of the second user end;
an information determination module for determining object-related information associated with the target site;
and the information sending module is used for sending the object related information to the second user end so that the second user end can output the object related information in a playing interface.
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
In the embodiment of the application, the target part of the associated object in the video data is identified, and the prompt information corresponding to the target part is generated, so that the viewing user can be prompted dynamically in real time based on the video content of the first user; meanwhile, by displaying the object related information corresponding to the target part, the viewing user can obtain more information in a timely and effective manner. This not only further improves the viewing experience of viewing users, but also helps increase user stickiness, develop viewing users into potential customers, and promote conversion of the associated object into products.
Fig. 9 is a schematic structural diagram of an embodiment of an information display device according to an embodiment of the present application, where a technical solution of the embodiment can be executed by a user side, and the information display device may include:
a first receiving module 901, configured to receive video data sent by a server.
A first playing module 902, configured to play the video data on a playing interface.
A second receiving module 903, configured to receive the object related information sent by the server.
The object related information is obtained by the server through matching based on at least one user characteristic of the second user end.
A first output module 904, configured to output the object related information in the playing interface.
As an implementation manner, the apparatus may further include, before the second receiving module 903:
the first display request generation module is used for generating a display request aiming at the object related information based on the preset trigger operation of the watching user;
and the first display request sending module is used for sending the display request to the server.
The display interface comprises at least one preset display control; the display request generation module may be specifically configured to:
detecting a preset trigger operation of the watching user for any preset display control;
and generating a display request aiming at the object related information associated with any preset display control based on the preset trigger operation.
As an implementation manner, the apparatus may further include, before the second receiving module 903:
and the bullet screen information sending module is used for acquiring bullet screen information input by the watching user and sending the bullet screen information to the server so that the server can identify preset content in the bullet screen information and determine object related information matched with the at least one user characteristic based on the preset content.
In practical applications, the object related information includes at least one object related sub-information, and the second receiving module 903 may specifically be configured to:
receiving object related sub-information sent by the server; and the object related sub-information is obtained by the server side from the object related information based on at least one user characteristic matching of a second user side.
The first output module 904 may be specifically configured to:
and outputting the object related sub information in the playing interface.
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
In the embodiment of the application, during video playing, object related information corresponding to at least one user characteristic of the viewing user at the second user end is matched based on that characteristic. In practical application, the at least one user characteristic can represent the user requirements of the viewing user, so object related information meeting those requirements is sent to the second user end according to the different requirements. This enhances user stickiness, further improves the viewing experience of the viewing user, helps develop the viewing user into a potential customer, and promotes systematic product conversion.
Fig. 10 is a schematic structural diagram of an embodiment of an information display device according to an embodiment of the present application, where a technical solution of the embodiment can be executed by a user side, and the information display device may include:
a third receiving module 1001, configured to receive video data sent by a server.
The second playing module 1002 is configured to output the video data on a playing interface.
A fourth receiving module 1003, configured to receive the prompt identifier sent by the server.
Wherein the prompt identification is generated by the server based on a target part of an associated object related to the video data; a target portion of the associated object is determined based on the video data.
A second output module 1004, configured to output the prompt identifier in a target display area where the target portion is located in the play interface.
The second output module 1004 may specifically be configured to:
determining a target display area where the target part is located in the playing interface;
and outputting the prompt identification in the target display area.
The prompt mark comprises a prompt pattern; the second output module 1004 may specifically be configured to:
outputting the prompt pattern in the target display area where the target part is located in the playing interface; wherein the prompt pattern is generated by the server based on the size of the target display area.
As an implementable embodiment, the apparatus may further comprise:
the object related information receiving module is used for receiving the object related information which is sent by the server and matched with the target part;
and the object related information output module is used for outputting the object related information in the playing interface.
As an implementation manner, the apparatus may further include, before the object related information receiving module:
the second display request generation module is used for generating a display request aiming at the object related information associated with the target part based on the preset trigger operation of the watching user;
and the second display request sending module is used for sending the display request to the server.
In practical applications, the object-related information includes at least one piece of object-related sub-information, and the object-related information receiving module may be specifically configured to:
receiving object related sub-information which is sent by the server and matched with the target part; the object-related sub-information is obtained by the server from the object-related information based on the target part matching.
The object-related information output module may be specifically configured to:
and outputting the object related sub information in the playing interface.
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
In the embodiment of the application, the target part of the associated object in the video data is identified, and the prompt information corresponding to the target part is generated, so that the viewing user is prompted dynamically in real time based on the video content of the first user; meanwhile, by displaying the object related information corresponding to the target part, the viewing user can obtain more information in a timely and effective manner. This not only further improves the viewing experience of viewing users, but also helps increase user stickiness, develop viewing users into potential customers, and promote conversion of the associated object into products.
Fig. 11 is a schematic structural diagram of an embodiment of a server provided in an embodiment of the present application, where the server may include a processing component 1101 and a storage component 1102.
The storage component 1102 is configured to store one or more computer instructions; the one or more computer instructions are to be invoked for execution by the processing component 1101.
The processing component 1101 may be configured to:
the method comprises the steps of obtaining video data sent by a first user end, and sending the video data to a second user end so that the second user end can play the video data on a playing interface;
acquiring at least one user characteristic of the second user terminal;
determining object-related information matching the at least one user characteristic; wherein the object related information is related data of an associated object to which the video data relates;
and sending the object related information to the second user end so that the second user end can output the object related information in the playing interface.
The processing component 1101 may include one or more processors to execute computer instructions to perform all or part of the steps of the method described above. Of course, the processing elements may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components configured to perform the above-described methods.
The storage component 1102 is configured to store various types of data to support operations in the server. The memory components may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Of course, the server may also include other components, such as input/output interfaces and communication components.
The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc.
The communication component is configured to facilitate wired or wireless communication between the server and other devices, such as with a terminal.
The embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a computer, the data processing method of the embodiment shown in fig. 1 may be implemented.
Fig. 12 is a schematic structural diagram of an embodiment of a server provided in an embodiment of the present application, where the server may include a processing component 1201 and a storage component 1202.
The storage component 1202 is for storing one or more computer instructions; the one or more computer instructions are to be invoked for execution by the processing component 1201.
The processing component 1201 may be configured to:
the method comprises the steps of obtaining video data uploaded by a first user side, and sending the video data to a second user side so that the second user side can play the video data on a playing interface;
determining a target part of an associated object related to the video data based on the video data;
generating a prompt identifier corresponding to the target part;
and sending the prompt identifier to the second user end so that the second user end can output the prompt identifier in a target display area where the target part is located in the playing interface.
The processing component 1201 may include one or more processors to execute computer instructions to perform all or part of the steps of the method described above. Alternatively, the processing component may be implemented as one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components configured to perform the above-described method.
The storage component 1202 is configured to store various types of data to support operation of the server. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The server may also comprise other components, such as input/output interfaces and communication components.
The input/output interface provides an interface between the processing component and peripheral interface modules, which may be output devices, input devices, and the like.
The communication component is configured to facilitate wired or wireless communication between the server and other devices, such as a terminal.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a computer, implements the data processing method of the embodiment shown in fig. 3.
Fig. 13 is a schematic structural diagram of an embodiment of a terminal device according to an embodiment of the present application, where the terminal device may include a processing component 1301, a display component 1302, and a storage component 1303. The storage component 1303 is configured to store one or more computer program instructions, which are to be invoked and executed by the processing component 1301.
The processing component 1301 may be configured to:
receiving video data sent by a server and playing the video data on a playing interface of the display component 1302;
receiving object related information sent by the server; wherein the object related information is determined by the server by matching against at least one user characteristic of the second user side;
and outputting the object related information in the playing interface of the display component 1302.
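By way of illustration only, and not as part of the original disclosure, the terminal-side behaviour of fig. 13 — dispatching server messages either to the player or to an overlay layer of the playing interface — might be sketched as follows. The message shapes, field names, and handler are hypothetical assumptions:

```python
# Hypothetical sketch of the fig. 13 terminal behaviour: messages from the
# server are dispatched either to the player or to an overlay layer drawn on
# top of the playing interface. Message shapes and names are assumptions.

def handle_message(msg: dict, overlay: list) -> str:
    if msg["type"] == "video_chunk":
        # would be handed to the decoder / playing interface
        return "played"
    if msg["type"] == "object_info":
        # object related information, matched server-side against this
        # viewer's user characteristics, is rendered over the playing interface
        overlay.append(msg["payload"])
        return "overlaid"
    return "ignored"

overlay = []
handle_message({"type": "video_chunk", "data": b"..."}, overlay)
handle_message({"type": "object_info", "payload": {"name": "coat", "price": 99}}, overlay)
print(overlay)  # [{'name': 'coat', 'price': 99}]
```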
The processing component 1301 may include one or more processors to execute computer instructions to perform all or part of the steps of the method described above. Alternatively, the processing component may be implemented as one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components configured to perform the above-described method.
The storage component 1303 is configured to store various types of data to support operation of the terminal device. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The display component 1302 may be an electroluminescence (EL) element, a liquid crystal display or a micro-display having a similar structure, or a laser-scanning display that projects directly onto the retina, or the like.
The terminal device may also comprise other components, such as input/output interfaces and communication components.
The input/output interface provides an interface between the processing component and peripheral interface modules, which may be output devices, input devices, and the like.
The communication component is configured to facilitate wired or wireless communication between the terminal device and other devices.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a computer, implements the information display method of the embodiment shown in fig. 5.
Fig. 14 is a schematic structural diagram of an embodiment of a terminal device according to an embodiment of the present application, where the terminal device may include a processing component 1401, a display component 1402, and a storage component 1403. The storage component 1403 is configured to store one or more computer program instructions, which are to be invoked and executed by the processing component 1401.
The processing component 1401 may be configured to:
receiving video data sent by a server and outputting the video data on a playing interface of the display component 1402;
receiving a prompt identifier sent by the server; wherein the prompt identifier is generated by the server based on a target part of an associated object related to the video data, the target part being determined based on the video data;
and outputting the prompt identifier in a target display area where the target part is located in the playing interface of the display component 1402.
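By way of illustration only, and not as part of the original disclosure, placing the prompt identifier over the target display area might involve scaling a rectangle from video-frame coordinates to the playing interface, as sketched below. The function name, the coordinate convention, and the scaling rule are hypothetical assumptions:

```python
# Hypothetical sketch for the fig. 14 terminal: place the server-sent prompt
# identifier over the target display area by scaling its rectangle from
# video-frame coordinates to the playing interface. All names and the
# scaling rule are assumptions for illustration only.

def to_interface_rect(rect, video_size, interface_size):
    """Scale an (x, y, w, h) rectangle from video coordinates to the interface."""
    sx = interface_size[0] / video_size[0]
    sy = interface_size[1] / video_size[1]
    x, y, w, h = rect
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# A prompt identifier located at (120, 80, 200, 150) in a 1920x1080 frame,
# displayed in a 960x540 playing interface:
print(to_interface_rect((120, 80, 200, 150), (1920, 1080), (960, 540)))
# -> (60, 40, 100, 75)
```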
The processing component 1401 may include one or more processors to execute computer instructions to perform all or part of the steps of the method described above. Alternatively, the processing component may be implemented as one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components configured to perform the above-described method.
The storage component 1403 is configured to store various types of data to support operation of the terminal device. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The display component 1402 may be an electroluminescence (EL) element, a liquid crystal display or a micro-display having a similar structure, or a laser-scanning display that projects directly onto the retina, or the like.
The terminal device may also comprise other components, such as input/output interfaces and communication components.
The input/output interface provides an interface between the processing component and peripheral interface modules, which may be output devices, input devices, and the like.
The communication component is configured to facilitate wired or wireless communication between the terminal device and other devices.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a computer, implements the information display method of the embodiment shown in fig. 6.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (35)

1. A data processing method, comprising:
acquiring video data sent by a first user end, and sending the video data to a second user end so that the second user end plays the video data on a playing interface;
acquiring at least one user characteristic of the second user end;
determining object-related information matching the at least one user characteristic; wherein the object related information is related data of an associated object to which the video data relates;
and sending the object related information to the second user end so that the second user end can output the object related information in the playing interface.
2. The method of claim 1, wherein the determining object-related information that matches the at least one user characteristic comprises:
and when a first preset event occurs in the video data, determining object related information matched with the at least one user characteristic.
3. The method according to claim 2, wherein before determining the object related information matching the at least one user characteristic when the first preset event occurs in the video data, the method further comprises:
and establishing an incidence relation between the first preset event and the object related information.
4. The method according to claim 2, wherein before determining the object related information matching the at least one user characteristic when the first preset event occurs in the video data, the method further comprises:
classifying the object related information according to the at least one user characteristic to obtain at least one piece of object related sub-information;
the determining of the object-related information matching the at least one user characteristic comprises:
determining object-related sub-information matching the at least one user characteristic.
5. The method of claim 4, wherein the video data comprises voice information;
and when the first preset event occurs in the video data, the determining of the object related information matching the at least one user characteristic comprises:
recognizing first preset voice information in the voice information;
determining object related information associated with the first predetermined speech information;
and determining object-related sub-information matched with the at least one user characteristic in the object-related information.
6. The method of claim 4, wherein the video data comprises sensing data collected by a sensing component when the first user performs a preset gesture at the first user end;
and when the first preset event occurs in the video data, the determining of the object related information matching the at least one user characteristic comprises:
identifying first predetermined sensing data in the sensing data;
determining object-related information associated with the first predetermined sensory data;
and determining object-related sub-information matched with the at least one user characteristic in the object-related information.
7. The method of claim 4, wherein, when the first preset event is identified in the video data, the determining of the object related information matching the at least one user characteristic comprises:
identifying an associated object in the video data;
determining object related information associated with the associated object;
and determining object-related sub-information matched with the at least one user characteristic in the object-related information.
8. The method according to any one of claims 5 to 7, wherein the sending the object related information to the second user side for the second user side to output the object related information in a playing interface comprises:
and sending the object related sub-information matched with the at least one user characteristic in the object related information to the second user end so that the second user end can output the object related sub-information in a playing interface.
9. The method of claim 1, wherein after sending the object-related information to the second user end, further comprising:
and controlling the second user end to display the object related information in the playing interface according to a preset display form.
10. The method of claim 1, wherein the at least one user characteristic comprises a user type; the object related information comprises transaction information of at least one transaction channel related to the associated object;
the determining of the object-related information matching the at least one user characteristic comprises:
and determining transaction information of a transaction channel matched with the user type.
11. The method of claim 1, wherein prior to determining the object-related information that matches the at least one user characteristic, further comprising:
receiving a display request for the object related information, sent by the second user end; wherein the display request is generated based on a preset trigger operation of a viewing user of the second user end.
12. The method of claim 1, further comprising:
receiving bullet screen information sent by the second user end, and identifying preset content in the bullet screen information;
the determining of the object-related information matching the at least one user characteristic comprises:
and determining object related information matched with the at least one user characteristic based on the preset content.
13. The method of claim 12, wherein the object related information comprises at least one piece of object related sub-information;
and the determining, based on the preset content, of the object related information matching the at least one user characteristic comprises:
determining object related information associated with the preset content;
and determining object-related sub-information matched with the at least one user characteristic in the object-related information.
14. A data processing method, comprising:
acquiring video data uploaded by a first user side, and sending the video data to a second user side so that the second user side plays the video data on a playing interface;
determining a target part of an associated object related to the video data based on the video data;
generating a prompt identifier corresponding to the target part;
and sending the prompt identifier to the second user end so that the second user end can output the prompt identifier in a target display area where the target part is located in the playing interface.
15. The method of claim 14, wherein generating the prompt identification corresponding to the target site comprises:
determining a target display area of the target part in the playing interface;
generating a prompt pattern with the same size as the target display area;
the sending the prompt identifier to the second user end so that the second user end can output the prompt identifier on the playing interface includes:
and sending the prompt pattern to the second user end so that the second user end can output the prompt pattern in a target display area where the target part is located in the playing interface.
16. The method of claim 15, wherein the determining, based on the video data, of a target part of an associated object to which the video data relates comprises:
and when a second preset event occurs in the video data, determining a target part related to the second preset event in the related object related to the video data.
17. The method according to claim 16, wherein after determining the target part associated with the second preset event in the associated object related to the video data when the second preset event occurs in the video data, the method further comprises:
determining object-related information matching the target site;
and sending the object related information to the second user end so that the second user end can output the object related information in the playing interface.
18. The method of claim 14, wherein after sending the prompt identifier to the second user end, the method further comprises:
receiving a display request, sent by the second user end, for the object related information associated with the target part; wherein the display request is generated based on a preset trigger operation of a viewing user of the second user end;
determining object-related information associated with the target site;
and sending the object related information to the second user end so that the second user end can output the object related information in a playing interface.
19. An information display method, comprising:
receiving video data sent by a server and playing the video data on a playing interface;
receiving object related information sent by the server; wherein the object related information is determined by the server by matching against at least one user characteristic of a second user side;
and outputting the object related information in the playing interface.
20. The method of claim 19, wherein before receiving the object-related information sent by the server, the method further comprises:
generating a display request for the object related information based on a preset trigger operation of a viewing user of the second user end;
and sending the display request to the server.
21. The method according to claim 20, wherein the display interface comprises at least one preset display control;
wherein the generating of the display request for the object related information based on the preset trigger operation of the viewing user of the second user end comprises:
detecting a preset trigger operation of the viewing user on any one of the preset display controls;
and generating a display request aiming at the object related information associated with any preset display control based on the preset trigger operation.
22. The method of claim 19, wherein before receiving the object-related information sent by the server, the method further comprises:
and acquiring bullet screen information input by the viewing user and sending the bullet screen information to the server, so that the server can identify preset content in the bullet screen information and determine object related information matched with at least one user characteristic based on the preset content.
23. An information display method, comprising:
receiving video data sent by a server and outputting the video data on a playing interface;
receiving a prompt identifier sent by the server; wherein the prompt identifier is generated by the server based on a target part of an associated object related to the video data, the target part being determined based on the video data;
and outputting the prompt identifier in a target display area where the target part is located in the playing interface.
24. The method according to claim 23, wherein the outputting the prompt identifier in the target display area where the target portion is located in the playing interface comprises:
determining a target display area where the target part is located in the playing interface;
and outputting the prompt identification in the target display area.
25. The method of claim 23, wherein the prompt identifier comprises a prompt pattern; and the outputting of the prompt identifier in the target display area where the target part is located in the playing interface comprises:
outputting the prompt pattern in the target display area where the target part is located in the playing interface; wherein the prompt pattern is generated by the server based on the size of the target display area.
26. The method of claim 23, further comprising:
receiving object related information which is sent by the server and matched with the target part;
and outputting the object related information in the playing interface.
27. The method according to claim 26, wherein before receiving the information related to the object matching the target location sent by the server, further comprising:
generating a display request for the object related information associated with the target part, based on a preset trigger operation of a viewing user of the second user side;
and sending the display request to the server.
28. A data processing apparatus, comprising:
the first acquisition module is used for acquiring video data uploaded by a first user side;
the first sending module is used for sending the video data to a second user end so that the second user end can play the video data on a playing interface;
a second obtaining module, configured to obtain at least one user characteristic of the second user end;
a first determination module for determining object related information matching the at least one user characteristic; wherein the object related information is related data of an associated object to which the video data relates;
and the second sending module is used for sending the object related information to the second user end so that the second user end can output the object related information in the playing interface.
29. A data processing apparatus, comprising:
the video data acquisition module is used for acquiring video data uploaded by a first user side;
the video data sending module is used for sending the video data to a second user end so that the second user end can play the video data on a playing interface;
a second determination module, configured to determine, based on the video data, a target part of an associated object to which the video data relates;
the prompt identifier generation module is used for generating a prompt identifier corresponding to the target part;
and the prompt identifier sending module is used for sending the prompt identifier to the second user end so that the second user end can output the prompt identifier in a target display area where the target part is located in the playing interface.
30. An information display apparatus, comprising:
the first receiving module is used for receiving video data sent by the server;
the first playing module is used for playing the video data on a playing interface;
the second receiving module is used for receiving the object related information sent by the server; wherein the object related information is determined by the server by matching against at least one user characteristic of a second user side;
and the first output module is used for outputting the object related information in the playing interface.
31. An information display apparatus, comprising:
the third receiving module is used for receiving the video data sent by the server;
the second playing module is used for outputting the video data on a playing interface;
the fourth receiving module is used for receiving the prompt identifier sent by the server; wherein the prompt identifier is generated by the server based on a target part of an associated object related to the video data, the target part being determined based on the video data;
and the second output module is used for outputting the prompt identifier in a target display area where the target part is located in the playing interface.
32. A server, comprising a processing component and a storage component; wherein the storage component is configured to store one or more computer instructions, which are to be invoked and executed by the processing component;
the processing component is to:
acquiring video data sent by a first user end, and sending the video data to a second user end so that the second user end plays the video data on a playing interface;
acquiring at least one user characteristic of the second user end;
determining object-related information matching the at least one user characteristic; wherein the object related information is related data of an associated object to which the video data relates;
and sending the object related information to the second user end so that the second user end can output the object related information in the playing interface.
33. A server, comprising a processing component and a storage component; wherein the storage component is configured to store one or more computer instructions, which are to be invoked and executed by the processing component;
the processing component is to:
acquiring video data uploaded by a first user side, and sending the video data to a second user side so that the second user side plays the video data on a playing interface;
determining a target part of an associated object related to the video data based on the video data;
generating a prompt identifier corresponding to the target part;
and sending the prompt identifier to the second user end so that the second user end can output the prompt identifier in a target display area where the target part is located in the playing interface.
34. A terminal device, comprising a processing component, a display component and a storage component; wherein the storage component is configured to store one or more computer instructions, which are to be invoked and executed by the processing component;
the processing component is to:
receiving video data sent by a server and playing the video data on a playing interface of the display component;
receiving object related information sent by the server; wherein the object related information is determined by the server by matching against at least one user characteristic of a second user side;
and outputting the object related information in a playing interface of the display component.
35. A terminal device, comprising a processing component, a display component and a storage component; wherein the storage component is configured to store one or more computer instructions, which are to be invoked and executed by the processing component;
the processing component is to:
receiving video data sent by a server and outputting the video data on a playing interface of the display component;
receiving a prompt identifier sent by the server; wherein the prompt identifier is generated by the server based on a target part of an associated object related to the video data, the target part being determined based on the video data;
and outputting the prompt identification in a target display area where the target part is located in a playing interface of the display component.
CN201910394238.XA 2019-05-13 2019-05-13 Data processing method, information display method, device, server and terminal equipment Active CN111935488B (en)

Publications (2)

Publication Number Publication Date
CN111935488A 2020-11-13
CN111935488B 2022-10-28

Family

ID=73282562

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210551425.6A Active CN115119004B (en) 2019-05-13 2019-05-13 Data processing method, information display device, server and terminal equipment
CN201910394238.XA Active CN111935488B (en) 2019-05-13 2019-05-13 Data processing method, information display method, device, server and terminal equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210551425.6A Active CN115119004B (en) 2019-05-13 2019-05-13 Data processing method, information display device, server and terminal equipment

Country Status (1)

Country Link
CN (2) CN115119004B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663641A (en) * 2012-05-19 2012-09-12 黄洪程 Electronic commerce method for unifying marketing channels
CN106791904A (en) * 2016-12-29 2017-05-31 广州华多网络科技有限公司 Live purchase method and device
CN106791895A (en) * 2016-11-29 2017-05-31 北京小米移动软件有限公司 Interactive approach and device in electric business application program
CN106791970A (en) * 2016-12-06 2017-05-31 乐视控股(北京)有限公司 The method and device of merchandise news is presented in video playback
WO2018036456A1 (en) * 2016-08-22 2018-03-01 大辅科技(北京)有限公司 Method and device for tracking and recognizing commodity in video image and displaying commodity information
CN108076353A (en) * 2017-05-18 2018-05-25 北京市商汤科技开发有限公司 Business object recommends method, apparatus, storage medium and electronic equipment
CN109429074A (en) * 2017-08-25 2019-03-05 阿里巴巴集团控股有限公司 A kind of live content processing method, device and system

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101867648A (en) * 2010-04-30 2010-10-20 华为终端有限公司 Method for displaying prompt information in video program playing, and mobile terminal
US8761448B1 (en) * 2012-12-13 2014-06-24 Intel Corporation Gesture pre-processing of video stream using a markered region
CN104065979A (en) * 2013-03-22 2014-09-24 北京中传数广技术有限公司 Method for dynamically displaying information related with video content and system thereof
KR102019128B1 (en) * 2013-05-10 2019-09-06 엘지전자 주식회사 Mobile terminal and controlling method thereof
US20140359448A1 (en) * 2013-05-31 2014-12-04 Microsoft Corporation Adding captions and emphasis to video
US10001904B1 (en) * 2013-06-26 2018-06-19 R3 Collaboratives, Inc. Categorized and tagged video annotation
CN104796743B (en) * 2015-04-03 2020-04-24 腾讯科技(北京)有限公司 Content item display system, method and device
KR102396036B1 (en) * 2015-05-18 2022-05-10 엘지전자 주식회사 Display device and controlling method thereof
US10770113B2 (en) * 2016-07-22 2020-09-08 Zeality Inc. Methods and system for customizing immersive media content
CN107340852A (en) * 2016-08-19 2017-11-10 北京市商汤科技开发有限公司 Gestural control method, device and terminal device
CN107578306A (en) * 2016-08-22 2018-01-12 大辅科技(北京)有限公司 Method and apparatus for tracking and recognizing commodities in video images and displaying commodity information
CN106331429A (en) * 2016-08-31 2017-01-11 上海交通大学 Video detail magnifying method
WO2018092016A1 (en) * 2016-11-19 2018-05-24 Yogesh Chunilal Rathod Providing location specific point of interest and guidance to create visual media rich story
CN106792092B (en) * 2016-12-19 2020-01-03 广州虎牙信息科技有限公司 Live video stream split-mirror display control method and corresponding device thereof
CN107613399A (en) * 2017-09-15 2018-01-19 广东小天才科技有限公司 Timed video playback control method, device and terminal device
CN107944376A (en) * 2017-11-20 2018-04-20 北京奇虎科技有限公司 Real-time pose recognition method and device for video data, and computing device
CN108255304B (en) * 2018-01-26 2022-10-04 腾讯科技(深圳)有限公司 Video data processing method and device based on augmented reality and storage medium
CN108174167A (en) * 2018-03-01 2018-06-15 中国工商银行股份有限公司 Remote interaction method, apparatus and system
CN108712683B (en) * 2018-03-02 2020-09-15 北京奇艺世纪科技有限公司 Data transmission method, bullet screen information generation method and device
CN108881765A (en) * 2018-05-25 2018-11-23 讯飞幻境(北京)科技有限公司 Lightweight recording and broadcasting method, apparatus and system
CN108769772B (en) * 2018-05-28 2019-06-14 广州虎牙信息科技有限公司 Live-streaming room display method, device, equipment and storage medium
CN109274999A (en) * 2018-10-08 2019-01-25 腾讯科技(深圳)有限公司 Video playback control method, device, equipment and medium
CN109309762B (en) * 2018-11-30 2021-08-10 努比亚技术有限公司 Message processing method, device, mobile terminal and storage medium

Also Published As

Publication number Publication date
CN111935488B (en) 2022-10-28
CN115119004B (en) 2024-03-29
CN115119004A (en) 2022-09-27

Similar Documents

Publication Publication Date Title
US11064257B2 (en) System and method for segment relevance detection for digital content
US11611795B2 (en) Online live video sales management system
CN107818180B (en) Video association method, video display device and storage medium
US20170097679A1 (en) System and method for content provision using gaze analysis
US20150215674A1 (en) Interactive streaming video
US20180268440A1 (en) Dynamically generating and delivering sequences of personalized multimedia content
US10638197B2 (en) System and method for segment relevance detection for digital content using multimodal correlations
JP2016503919A (en) Method and system for analyzing the level of user engagement in an electronic document
CN110310137B (en) Advertisement putting method and device
US10440435B1 (en) Performing searches while viewing video content
US12002071B2 (en) Method and system for gesture-based cross channel commerce and marketing
US20220343307A1 (en) Video analysis of food service counter operations
CN108475381A (en) The method and apparatus of performance for media content directly predicted
CN108090206A (en) Sort method and device, the electronic equipment of comment information
CN111813986A (en) Intelligent advertisement pushing method, device, system, medium and electronic terminal
US20190289362A1 (en) System and method to generate a customized, parameter-based video
US20140089079A1 (en) Method and system for determining a correlation between an advertisement and a person who interacted with a merchant
CN111935488B (en) Data processing method, information display method, device, server and terminal equipment
WO2009024990A1 (en) System of processing portions of video stream data
CN114967922A (en) Information display method and device, electronic equipment and storage medium
CN114741610A (en) Information pushing method, device, equipment and storage medium
CN110110688B (en) Information analysis method and system
CN113129112A (en) Article recommendation method and device and electronic equipment
US20160098766A1 (en) Feedback collecting system
US11842543B1 (en) Camera system for providing an in-store experience to a remote user

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant