WO2019101038A1 - Barrage content control method, computer device and storage medium - Google Patents

Barrage content control method, computer device and storage medium

Info

Publication number
WO2019101038A1
WO2019101038A1 (PCT/CN2018/116190)
Authority
WO
WIPO (PCT)
Prior art keywords
barrage
content
target object
computer device
action
Prior art date
Application number
PCT/CN2018/116190
Other languages
English (en)
French (fr)
Inventor
陈姿
孔凡阳
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Publication of WO2019101038A1

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/488: Data services, e.g. news ticker

Definitions

  • the present application relates to the field of computer technology, and in particular, to a barrage content control method, a computer device, and a storage medium.
  • conventionally, the user needs to manually interrupt the playing video and then manually add a keyword to a mask list, so that barrage content containing the keyword is masked, thereby adjusting the barrage content.
  • adjusting barrage content by manually adding keywords in this way is inefficient.
  • a barrage content control method, a computer device, and a storage medium are provided.
  • a barrage content control method comprising:
  • the computer device acquires an image frame captured from a real scene while playing the video and the corresponding barrage content, and identifies an action of a target object in the image frame;
  • when the action of the target object matches a preset barrage adjustment action type, the computer device triggers generation of a barrage operation instruction corresponding to the matched barrage adjustment action type; and
  • in response to the barrage operation instruction, the computer device adjusts the played barrage content according to the matched barrage adjustment action type.
  • a computer device comprising a memory and one or more processors, the memory having stored therein computer readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the following steps:
  • triggering generation of a barrage operation instruction corresponding to the matched barrage adjustment action type; and
  • the played barrage content is adjusted according to the matched barrage adjustment action type.
  • One or more storage media storing computer readable instructions, when executed by one or more processors, cause one or more processors to perform the following steps:
  • triggering generation of a barrage operation instruction corresponding to the matched barrage adjustment action type; and
  • the played barrage content is adjusted according to the matched barrage adjustment action type.
  • FIG. 1 is an application scenario diagram of a method for controlling a content of a barrage in an embodiment
  • FIG. 2 is a schematic flow chart of a method for controlling a content of a barrage in an embodiment
  • FIG. 3 is a schematic diagram of a scene for acquiring an image frame in an embodiment
  • FIG. 4 is a schematic diagram showing the principle of establishing a machine learning model in an embodiment
  • FIG. 5 and FIG. 6 are schematic diagrams showing an interface for removing barrage content in one embodiment;
  • FIG. 7 is a timing diagram of a method for controlling a content of a barrage in an embodiment
  • Figure 8 is a block diagram of a barrage content control device in an embodiment
  • Figure 9 is a block diagram of a barrage content control device in another embodiment.
  • Figure 10 is a block diagram of a barrage content control device in still another embodiment.
  • Figure 11 is a block diagram of a computer device in one embodiment.
  • FIG. 1 is an application scenario diagram of a method for controlling a content of a barrage in an embodiment.
  • the application scenario includes a terminal 110 and a server 120 connected through a network.
  • the terminal 110 may be a smart television, a desktop computer or a mobile terminal, and the mobile terminal may include at least one of a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, and a wearable device.
  • the server 120 can be implemented by a stand-alone server or a server cluster composed of a plurality of physical servers.
  • the terminal 110 can play the video and the corresponding barrage content, and acquire image frames acquired from the real scene when playing the video and the corresponding barrage content.
  • terminal 110 may acquire image frames from the real scene through its own integrated image acquisition device. It can be understood that the terminal 110 may also acquire image frames from the real scene through an externally connected image acquisition device.
  • Terminal 110 can identify the action of the target object in the image frame. In an embodiment, the terminal 110 can determine by itself whether the action of the target object matches a preset barrage adjustment action type; the terminal 110 can also feed back the identified action of the target object to the server 120, and the server 120 identifies whether the action matches the preset barrage adjustment action type.
  • when the action of the target object matches the preset barrage adjustment action type, the terminal 110 is triggered to generate a barrage operation instruction corresponding to the matched barrage adjustment action type; in response to the barrage operation instruction, the terminal 110 adjusts the played barrage content according to the matched barrage adjustment action type.
  • FIG. 2 is a schematic flow chart of a method for controlling barrage content in an embodiment. This embodiment is mainly illustrated by applying the barrage content control method to a computer device, which may be the terminal 110 in FIG. 1. Referring to FIG. 2, the method specifically includes the following steps:
  • the barrage content is a comment that a video viewer enters and that appears on the video picture, and can be displayed over the video by scrolling, staying, or in other manners.
  • a realistic scene is a scene that exists in a natural and real world.
  • An image frame is a unit in a sequence of image frames capable of forming a dynamic picture, and is used to record a picture in a real scene at a certain moment.
  • the computer device can play the video and the corresponding barrage content.
  • the computer device can play the video and corresponding barrage content through the video client.
  • the video client is a client mainly used for video playback processing.
  • the client can be APP (short for Application) and refers to a third-party application.
  • the computer device can also play the video and the corresponding barrage content through the webpage.
  • the computer device can acquire image frames acquired from a real scene while playing the video and corresponding barrage content. It can be understood that the acquired image frames can be one or more. In one embodiment, the acquired image frame may be a plurality of consecutively acquired image frames.
  • the computer device itself integrates an image capture device that can capture image frames from a real-world scene through a self-integrated image capture device while playing the video and corresponding barrage content.
  • the computer device can also acquire image frames from the real scene while playing the video and the corresponding barrage content through the external image capture device.
  • the image capturing device may be, for example, a webcam, a still camera, or a video camera.
  • FIG. 3 is a schematic diagram of a scene for acquiring an image frame in an embodiment.
  • computer device 302 plays video 304 and barrage content.
  • the computer device 302 shown in FIG. 3 is a PC; the host is omitted here, and only the display screen is shown. In the figure, "the clothes are really ugly" and "no, the clothes look good" are barrage contents, and the viewing user 306 in the real scene watches the video and barrage content played by the computer device.
  • the computer device 302 captures the image frame from the real scene through the camera 302a integrated by itself. For example, the camera 302a can capture the real scene and collect an image frame including the gesture of viewing the user 306.
  • the target object is an object in the image frame that needs to be recognized by the motion.
  • the action of the target object is the action that the target object presents in the image frame.
  • the action of the target object includes an action presented by the gesture of the target object (for example, a thumbs-up) or an action represented by the motion trajectory of the target object (for example, a hand swinging left and right).
  • the target object may be a video viewing user's body part, including at least one of a hand, a foot, an arm, a leg, a head, and a face (eye, nose, mouth, etc.).
  • symmetric parts sharing a collective name may be collectively referred to as one part, or may be treated as different parts.
  • for example, the left and right hands may be collectively referred to as the hand, or may be treated as different parts.
  • the target object may also be a device capable of controlling the barrage content through its motion; the specific form of the target object is not limited herein.
  • the action of the target object may be a combination of actions of two or more parts (for example, shaking the head while swinging a hand left and right, that is, a combination of actions of the head and the hand).
  • the target object can be a hand.
  • the action of the target object may be an action presented by a hand gesture (e.g., a thumbs-up) and/or an action presented by a hand motion trajectory (e.g., a hand swinging left and right). It can be understood that when the target object is a hand, the target object can be one or two hands. When the target object is two hands, the action of the target object may be a combination of actions of the two hands, for example, the two hands making a heart shape or the two hands drawing a circle.
  • the computer device can detect whether the target object is included in the image frame acquired in the real scene, and when it is determined to include the target object, the action of the target object in the image frame is identified. It can be understood that including a target object in an image frame refers to an image including a target object in an image frame. For example, the inclusion of a hand in an image frame refers to the inclusion of a hand image in the image frame.
  • the computer device can identify the gesture of the target object in the image frame, and determine the motion of the target object according to the gesture of the identified target object.
  • the computer device can also identify the location of the target object in the image frame and determine the motion of the target object based on the location of the target object in the image frame.
  • the type of the barrage adjustment action is a type of action for adjusting the contents of the barrage.
  • the barrage adjustment action type includes at least one of a barrage removal action type, a barrage fast forward action type, and a barrage pause action type.
  • the barrage removal action is the action of removing the barrage content.
  • the barrage fast forward action is an action of fast-forwarding the barrage content.
  • the barrage pause action is to pause the barrage content.
  • the type of the barrage adjustment action can also be a type of action that makes other adjustments to the barrage content.
  • the type of the barrage adjustment action is not limited, and the expansion setting can be performed according to actual needs.
  • the computer device can match the action of the target object to a preset type of barrage adjustment action.
  • the computer device may acquire a feature parameter that characterizes the action of the target object and match it against the feature parameters corresponding to the preset barrage adjustment action types; when the feature parameter characterizing the action of the target object hits a feature parameter corresponding to a preset barrage adjustment action type, it is determined that the action of the target object matches the barrage adjustment action type corresponding to the hit feature parameter.
  • the computer device may also pre-train a machine learning model and input the feature parameter characterizing the action of the target object into the pre-trained machine learning model; when the model outputs a preset barrage adjustment action type according to the feature parameter, it is determined that the action of the target object matches the output preset barrage adjustment action type.
  • the machine learning model is obtained by performing machine learning training according to the characteristic parameters of the characterization action and the corresponding preset barrage adjustment action type.
  • the action of the target object that matches the type of the barrage adjustment action may be left and right movement, up and down movement, or static motion.
  • the action of the target object matching the barrage removal action type may be moving up and down;
  • the action of the target object matching the barrage fast forward action type may be moving left and right; and the action of the target object matching the barrage pause action type may be a static action.
  • it can be understood that the action of the target object matching a preset barrage adjustment action type may also be another action, which is not limited thereto. It can be understood that "left and right" and "up and down" as used herein are relative to a preset reference orientation, and the preset reference orientation can be specifically set according to actual needs.
  • when the action of the target object matches the preset barrage adjustment action type, the computer device is triggered to generate a barrage operation instruction corresponding to the matched barrage adjustment action type.
  • the barrage operation instruction is an instruction to perform an operation on the contents of the barrage.
  • the barrage operation command corresponding to the type of the barrage adjustment action is used to trigger the barrage adjustment action corresponding to the type of the barrage adjustment action.
  • a mapping relationship between barrage adjustment action types and barrage operation instructions is pre-stored in the computer device; according to the mapping relationship, the computer device can map the matched barrage adjustment action type to the corresponding barrage operation instruction.
  • a hash map created by using a barrage adjustment action type as a key and a barrage operation command as a value is pre-stored in the computer device.
  • the key is the Key in a Key-Value pair; querying by the key (Key) retrieves the value corresponding to that key.
  • the computer device can match the matched barrage adjustment action type with the key (Key) in the hash map, and obtain the value (Value) corresponding to the matched key (Key), and obtain a corresponding barrage operation instruction.
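The hash-map lookup described above can be sketched as follows. This is a minimal illustration; the action-type and command names are assumptions for the example, not identifiers from the patent:

```python
from enum import Enum

class AdjustAction(Enum):
    REMOVE = "barrage_remove"            # barrage removal action type
    FAST_FORWARD = "barrage_fast_forward"
    PAUSE = "barrage_pause"

# Hash map created with the barrage adjustment action type as the key
# and the barrage operation command as the value.
OPERATION_COMMANDS = {
    AdjustAction.REMOVE: "CMD_REMOVE_BARRAGE",
    AdjustAction.FAST_FORWARD: "CMD_FAST_FORWARD_BARRAGE",
    AdjustAction.PAUSE: "CMD_PAUSE_BARRAGE",
}

def to_operation_command(action_type: AdjustAction) -> str:
    # Querying by the key (matched action type) yields the value
    # (the corresponding barrage operation command).
    return OPERATION_COMMANDS[action_type]
```

A new action type is supported by adding one entry to the map; no dispatch logic changes.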
  • the computer device may perform a corresponding adjustment operation on the played barrage content according to the matched barrage adjustment action type in response to the barrage operation instruction. It can be understood that each type of barrage adjustment action has a corresponding adjustment operation.
  • the computer device can pass the barrage operation instructions as parameters to the video client component via the messaging carrier.
  • the computer device can read and respond to the barrage operation command from the messaging carrier via the video client component.
  • the computer device can broadcast the barrage operation command as a parameter to the video client component via the messaging carrier.
  • the message delivery carrier is a carrier of communication information between components, and encapsulates instructions and data provided by the calling component.
  • the messaging carrier can be an Intent.
  • Intent is an abstract description of an operation in the Android system (a Linux-based free and open-source operating system) and can be used to pass information between different components and different apps (applications).
  • the played barrage content that is adjusted may be the barrage content played when the image frame was acquired, where the image frame is the one in which the action of the target object matching the barrage adjustment action type was identified;
  • that is, the barrage adjustment action type matched from the image frame is used to adjust the barrage content played when that image frame was acquired. For example, when the action of the target object in the image frame matches the barrage removal action type, the barrage content played when the image frame was acquired is removed, and the barrage content played afterwards may be left unprocessed.
  • the adjusted barrage content may also be the barrage content played from the time the image frame was acquired, or the barrage content currently being played.
  • the content of the barrage being played is the content of the barrage being played when the barrage content adjustment process is performed in response to the barrage operation instruction.
  • the computer device can also adjust the content of the barrage played from the time the image frame is acquired according to the matching barrage adjustment type. For example, when the action of the target object in the image frame matches the type of the barrage fast forward action, the barrage content played from the time of acquiring the image frame can be fast forwarded.
  • when removing, the entire played barrage content may be removed, or only part of the barrage content being played may be removed.
  • the above barrage content control method collects and analyzes the action of the target object in the real scene while playing the video and barrage content, and automatically adjusts the played barrage content according to the barrage adjustment action type matched by the action of the target object, without cumbersome manual operations such as stopping video playback and manually adding keywords, thereby improving the efficiency of barrage content adjustment.
  • in addition, the barrage content can be adjusted without the user directly operating on the barrage content, which improves the efficiency and flexibility of barrage content adjustment.
  • step S204 includes: identifying a target object in the acquired image frame; determining an action of the target object according to a change in position of the target object in the adjacent image frame.
  • the adjacent image frames are image frames adjacent to the acquisition timing.
  • Acquisition timing is the chronological order in which image frames are acquired.
  • the computer device may extract image data included in the acquired image frame, identify feature data of the target object from the extracted image data, and determine a target object in the image frame according to the identified feature data of the target object.
  • determining the action of the target object according to the position change of the target object in adjacent image frames includes: determining the position of the target object in each acquired image frame; comparing the positions of the target object in adjacent image frames to obtain the position change of the target object; and determining the action of the target object according to that position change.
  • the adjacent image frames are adjacent at least two image frames, and may be two adjacent image frames, or may be two or more adjacent image frames, such as five adjacent image frames.
  • the computer device can determine the action of the target object according to the positional change of the target object in the adjacent image frame.
  • the computer device may determine a motion trajectory of the target object based on a change in position of the target object in the adjacent image frame, and determine an action of the target object based on the corresponding motion trajectory. For example, when the motion trajectory is a motion trajectory that moves back and forth, it is determined that the motion of the target object moves back and forth.
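The motion-trajectory analysis above can be sketched as follows, assuming 2-D positions of the target object ordered by acquisition timing; the jitter threshold and the label strings are illustrative choices, not values from the patent:

```python
def detect_motion(positions, eps=2.0):
    """Classify the target object's action from its positions in
    consecutive image frames ordered by acquisition timing.
    positions: list of (x, y) centre coordinates.
    eps: displacement tolerated as jitter before it counts as movement."""
    dxs = [b[0] - a[0] for a, b in zip(positions, positions[1:])]
    dys = [b[1] - a[1] for a, b in zip(positions, positions[1:])]

    # Static action: no displacement exceeds the jitter tolerance.
    if all(abs(dx) < eps and abs(dy) < eps for dx, dy in zip(dxs, dys)):
        return "static"

    def alternates(ds):
        # A back-and-forth trajectory has displacements whose sign flips.
        signs = [d for d in ds if abs(d) >= eps]
        return len(signs) >= 2 and any(a * b < 0 for a, b in zip(signs, signs[1:]))

    if alternates(dxs):
        return "left_right"   # e.g. the motion matching the fast forward type
    if alternates(dys):
        return "up_down"      # e.g. the motion matching the removal type
    return "unknown"
```

With more adjacent frames (e.g. five instead of two) the sign-flip test becomes more robust to a single noisy detection.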
  • in this way, the action of the target object is determined according to its position change in adjacent image frames and is not limited to actions represented by a gesture alone, which improves the flexibility of target object actions and thus the diversity of barrage content adjustment.
  • determining the action of the target object according to the position change of the target object in adjacent image frames includes: determining the position coordinates of the target object in the image frame in a spatial rectangular coordinate system established with the camera that acquires the image frames as the origin; and determining the action of the target object according to the change of those position coordinates in adjacent image frames.
  • the acquired image frames are collected by an integrated or external camera.
  • the computer device can establish a spatial Cartesian coordinate system with the camera that acquires the image frame as the origin.
  • the computer device may take the camera as the origin, the horizontal direction parallel to the display screen of the computer device as the horizontal axis (X-axis), the vertical direction parallel to the display screen as the vertical axis (Y-axis), and the direction perpendicular to the display screen as the Z-axis, thereby establishing a spatial rectangular coordinate system.
  • the computer device can also establish a spatial rectangular coordinate system with the camera as the origin and the other mutually perpendicular directions as the horizontal, the vertical and the vertical axes. There is no limit to this.
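One way to obtain coordinates in such a camera-origin frame is a pinhole-model back-projection. The patent does not specify how the coordinates are computed, so the focal lengths and the depth input here are assumptions made for illustration:

```python
def to_camera_coords(px, py, depth, img_w, img_h, fx, fy):
    """Map a pixel position (px, py) plus an estimated depth to the
    camera-origin frame described above: X horizontal and Y vertical,
    parallel to the screen, Z perpendicular to it.
    fx, fy: assumed focal lengths in pixels (simple pinhole model)."""
    x = (px - img_w / 2) * depth / fx
    y = (img_h / 2 - py) * depth / fy  # image rows grow downward
    z = depth
    return (x, y, z)
```

For example, a target at the image centre maps to (0, 0, depth), i.e. straight along the camera's optical axis.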
  • the position coordinates of the target object in the acquired image frame are determined.
  • the computer device can determine the action of the target object based on the change in the position coordinates of the target object in adjacent image frames.
  • the adjacent image frames are adjacent at least two image frames, and may be two adjacent image frames, or may be two or more adjacent image frames, such as five adjacent image frames.
  • the computer device may construct a motion trajectory of the target object according to a change in position coordinates of the target object in the adjacent image frame, and determine an action of the target object according to the corresponding motion trajectory.
  • the computer device may also acquire the differences between the position coordinates of the target object in adjacent image frames, analyze the variation rule of those differences, and determine the action of the target object according to that rule. For example, if the difference between the horizontal position coordinates of the target object in adjacent image frames alternates between positive and negative, the action of the target object can be determined to be moving horizontally back and forth.
  • the position of the target object can be accurately determined by referring to the camera that collects the image frame, so that the motion of the target object can be accurately determined, and the accuracy of the adjustment of the barrage content is improved.
  • the method further comprises: acquiring a feature parameter representing the action of the target object; inputting the feature parameter into the pre-trained machine learning model; and, when the machine learning model outputs a preset barrage adjustment action type according to the feature parameter, determining that the action of the target object matches the output preset barrage adjustment action type.
  • the action of the target object is characterized by feature parameters, and the computer device can determine the action it represents by parsing the feature parameters.
  • the computer device can acquire feature parameters that characterize the actions of the target object and input the feature parameters into a pre-trained machine learning model.
  • the machine learning model is obtained by performing machine learning training according to the characteristic parameters of the characterization action and the corresponding preset barrage adjustment action type.
  • the machine learning model can be stored in a computer device that inputs the feature parameters into a machine learning model that is stored by itself.
  • when the machine learning model outputs a preset barrage adjustment action type according to the feature parameter, the computer device can determine that the action of the target object matches the output preset barrage adjustment action type.
  • the machine learning model can be stored in a server, and the computer device can send the feature parameters to the server, causing the server to enter the feature parameters into the machine learning model.
  • the server outputs the preset barrage adjustment action type according to the feature parameter through the machine learning model, it is determined that the action of the target object matches the output preset barrage adjustment action type.
  • the server can feed back the determination result to the computer device.
  • the action of the target object may be moving left and right, moving up and down, or a static action, and the correspondingly output barrage adjustment action type is the barrage fast forward action type, the barrage removal action type, or the barrage pause action type, respectively.
  • the pre-trained machine learning model is used to determine the type of the barrage adjustment action of the action matching of the target object, and the accuracy of the matching barrage adjustment action type is improved.
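As a stand-in for the pre-trained machine learning model, a minimal nearest-centroid classifier illustrates the input/output contract (feature parameters in, preset action type out, or no match). The patent does not name a model family; the distance threshold and label strings are illustrative assumptions:

```python
import math

class GestureModel:
    """Minimal nearest-centroid stand-in for the pre-trained machine
    learning model described in the text."""

    def __init__(self):
        self.centroids = {}  # action type -> mean feature vector

    def fit(self, samples):
        # samples: iterable of (feature_vector, action_type) pairs.
        by_label = {}
        for vec, label in samples:
            by_label.setdefault(label, []).append(vec)
        for label, vecs in by_label.items():
            self.centroids[label] = [sum(col) / len(vecs) for col in zip(*vecs)]

    def predict(self, vec, threshold=1.0):
        # Output the preset action type whose centroid is nearest,
        # or None when no centroid is within the threshold (no match).
        best, best_dist = None, float("inf")
        for label, centre in self.centroids.items():
            d = math.dist(vec, centre)
            if d < best_dist:
                best, best_dist = label, d
        return best if best_dist <= threshold else None
```

A None output corresponds to the case where the action matches no preset barrage adjustment action type, so no barrage operation instruction is triggered.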
  • the method further includes: establishing a spatial rectangular coordinate system with the camera that acquires the image frames as the origin; acquiring training image frames captured from the real scene by the camera; determining the corresponding action of the target object according to the change of the position coordinates of the target object in the spatial rectangular coordinate system across adjacent training image frames; and performing machine learning training according to the feature parameters of the determined actions and the corresponding preset barrage adjustment action types to obtain the machine learning model.
  • the training image frame is an image frame for performing machine learning training.
  • the adjacent training image frames are training image frames adjacent to the acquisition timing.
  • the adjacent training image frame is adjacent to at least two training image frames, and may be two adjacent training image frames, or may be two or more adjacent training image frames.
  • the acquired image frames are collected by an integrated or external camera.
  • the computer device can establish a spatial Cartesian coordinate system with the camera that acquires the image frame as the origin.
  • the computer device may take the camera as the origin, the horizontal direction parallel to the display screen of the computer device as the horizontal axis (X-axis), the vertical direction parallel to the display screen as the vertical axis (Y-axis), and the direction perpendicular to the display screen as the Z-axis, thereby establishing a spatial rectangular coordinate system.
  • the computer device can also establish a spatial rectangular coordinate system with the camera as the origin and other mutually perpendicular directions as the horizontal, vertical and vertical axes. There is no limit to this.
  • the computer device can capture the training image frame from the real scene through the camera.
  • the computer device can identify the target object in the training image frames, determine the position coordinates of the target object in the spatial rectangular coordinate system in each training image frame, and determine, according to those position coordinates, the change of the position coordinates of the target object in the spatial rectangular coordinate system across the training image frames. The computer device can then determine the corresponding action of the target object according to that change.
  • the computer device may construct a motion trajectory of the target object according to a change in position coordinates of the target object in the adjacent training image frame, and determine an action of the target object according to the corresponding motion trajectory.
  • the computer device may acquire a difference between position coordinates of the target object in the adjacent training image frame, analyze a variation rule of the difference between the position coordinates, and determine the target object according to the change rule. action.
  • the computer device can acquire the feature parameters characterizing the determined actions of the target object together with the barrage adjustment action types preset for those feature parameters, and perform machine learning training on the feature parameters and the corresponding preset barrage adjustment action types to obtain the machine learning model. It can be understood that the machine learning model is used to output a barrage adjustment action type that matches an input feature parameter.
  • FIG. 4 is a schematic diagram showing the principle of establishing a machine learning model in one embodiment.
  • a spatial rectangular coordinate system is established with the camera as the origin, and the target object in the training image frame is a hand. Assuming that the position coordinates of the hand in the spatial rectangular coordinate system change back and forth in the vertical direction across adjacent training image frames (as shown at 402), it can be determined that the action of the target object is an up-and-down movement. Assuming that the position coordinates of the hand change back and forth in the horizontal direction (as shown at 404), it can be determined that the action of the target object is a left-and-right movement. Assuming that the position coordinates of the hand do not change across adjacent training image frames (as shown at 406), it can be determined that the action of the target object is a stationary action.
  • the computer device can acquire feature parameters that characterize the motion of the hand, obtain the barrage adjustment action type preset for each feature parameter, and perform machine learning training according to the feature parameters and the corresponding barrage adjustment action types. For example, the barrage fast forward action type is set for the feature parameters that characterize left-and-right movement, the barrage removal action type is set for the feature parameters that characterize up-and-down movement, and the barrage pause action type is set for the feature parameters that characterize the stationary action.
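The patent does not fix a particular model, so as a toy stand-in for the machine learning training, a 1-nearest-neighbour classifier in pure Python is sketched below. The two-component feature vectors (horizontal and vertical motion energy) are illustrative assumptions; only the mapping to the three preset barrage adjustment action types comes from the text.

```python
# Toy stand-in for training a model that maps feature parameters of an
# action to a preset barrage adjustment action type (1-nearest-neighbour).

def train(samples):
    """samples: list of (feature_vector, action_type) pairs."""
    return list(samples)  # the "model" is just the stored training set

def predict(model, features):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Output the action type of the closest training sample.
    return min(model, key=lambda s: dist(s[0], features))[1]

# Assumed features: (horizontal motion energy, vertical motion energy),
# paired with the preset barrage adjustment action types from the text.
model = train([
    ((1.0, 0.0), "barrage_fast_forward"),  # left-and-right movement
    ((0.0, 1.0), "barrage_removal"),       # up-and-down movement
    ((0.0, 0.0), "barrage_pause"),         # stationary action
])
```

In practice the training would use a richer model and many labelled recordings per action; the structure (feature parameter in, preset action type out) is what matters here.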
  • the spatial rectangular coordinate system is established with the camera that collects the image frame as the origin; the corresponding action of the target object is determined according to the change of the position coordinate of the target object in the spatial rectangular coordinate system in the collected training image frame;
  • the machine learning model is obtained by performing the machine learning training on the characteristic parameters of the determined action and the corresponding preset barrage adjustment action types.
  • the machine learning model obtained by the machine learning training determines whether the action of the target object matches the preset barrage adjustment action type, thereby improving the accuracy of the matching.
  • step S208 includes adjusting the content of the played barrage to be invisible in response to the barrage operation command when the matched barrage adjustment action type is a barrage removal action type.
  • the type of the barrage removal action is the type of action to remove the barrage content.
  • when the barrage adjustment action type matching the action of the target object is the barrage removal action type, generation of the barrage operation command that matches the barrage removal action type and triggers the removal operation on the barrage content is triggered.
  • the computer device adjusts the content of the played barrage to be invisible in response to the barrage operation command.
  • the content of the played barrage that is adjusted may be the barrage content played when the image frame is acquired, wherein the image frame is the image frame from which the action of the target object matching the barrage adjustment action type is recognized.
  • the content of the played barrage that is adjusted may also be the barrage content played from the time the image frame is acquired, or the barrage content currently being played.
  • the computer device can adjust the visibility of the played barrage content to invisible.
  • Visibility is the degree to which the content can be clearly seen.
  • the computer device can adjust the value of the visibility of the played barrage content to False, so that the barrage content is not visible.
  • the computer device can also delete the played barrage content.
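The two removal strategies above (setting the visibility value to False, or deleting the played barrage content) can be sketched as follows; the `Barrage` class and its field names are illustrative assumptions.

```python
# Minimal sketch of the two ways the text removes barrage content:
# toggling a visibility flag, or deleting the items outright.

class Barrage:
    def __init__(self, text):
        self.text = text
        self.visible = True  # visibility value; False means not shown

def hide(items):
    for item in items:
        item.visible = False  # adjust visibility of played content to False
    return items

def delete(items, to_remove):
    # Alternatively, delete the played barrage content entirely.
    return [i for i in items if i not in to_remove]
```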
  • FIG. 5 to 6 are schematic diagrams showing an interface for removing the contents of the barrage in one embodiment.
  • FIG. 5 shows some unhealthy barrage contents, such as “the clothes are ugly”, “hair ugly to burst” and the like. The user can then make a gesture matching the barrage removal action type (that is, a hand movement), and the computer device can adjust the content of the displayed barrage to invisible according to the barrage removal action type.
  • the unhealthy barrage content of “clothes are ugly” and “hair ugly to burst” is adjusted to be invisible.
  • the content of the played barrage is automatically adjusted to be invisible according to the matching type of the barrage removal action. There is no need to manually add keywords to shield the barrage content, which improves the efficiency of the barrage content adjustment.
  • the adjustment of the barrage content can also be realized, without the user having to directly operate the barrage content, thereby improving the efficiency and flexibility of the barrage content adjustment.
  • adjusting the content of the played barrage to invisible comprises: determining the position of the target object mapped to the play screen of the barrage content; determining the barrage content to be removed according to the position in the played barrage content; and adjusting the barrage content to be removed to be invisible.
  • the position of the target object mapped to the play screen of the barrage content is to map the position of the target object in the real scene to the position in the play screen of the barrage content. It can be understood that the mapping can achieve an effective docking of actions in a real scene and operations on a play screen.
  • the target object is mapped to a position in the play screen of the barrage content, which may be a start position in the play screen mapped to the barrage content when the target object is subjected to the barrage removal action, or may be the target object. Maps to the position in the playback screen of the barrage content during the barrage removal action.
  • mapping relationship between the location in the real scene and the location in the playback screen is preset in the computer device.
  • the mapping relationship is used to implement a corresponding conversion between a position in a real scene and a position in a play screen.
  • the computer device may acquire the starting position of the target object in the real scene when it performs the barrage removal action, and map the starting position of the target object in the real scene to the position in the play screen of the barrage content according to the mapping relationship.
  • the computer device can select the barrage content currently played at the starting position to which the target object is mapped as the barrage content to be removed.
  • the computer device may also acquire the positions of the target object in the real scene during the process of performing the barrage removal action, and, according to the mapping relationship, sequentially map each acquired position to the corresponding position in the play screen of the barrage content, obtaining the corresponding barrage removal track.
  • the computer device can determine the content of the barrage to be removed from the content of the played barrage according to the barrage removal track.
  • the computer device may use the barrage content covered by the barrage removal track as the barrage content to be removed.
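The mapping and the removal-track hit test above can be sketched as follows. The linear mapping relationship, the coordinate scales and the hit `radius` are all illustrative assumptions; the patent only requires that some preset mapping converts real-scene positions into play-screen positions.

```python
# Map real-scene positions of the target object into the play screen and
# collect the barrage content covered by the resulting removal track.

def map_to_screen(pos, scale=(0.5, 0.5), offset=(0, 0)):
    # Preset mapping relationship between a real-scene position and a
    # position in the play screen (assumed linear here).
    return (pos[0] * scale[0] + offset[0], pos[1] * scale[1] + offset[1])

def covered_barrage(track, barrages, radius=10):
    """track: real-scene positions during the barrage removal action;
    barrages: list of (text, screen_x, screen_y) tuples."""
    removal_track = [map_to_screen(p) for p in track]
    hit = []
    for text, bx, by in barrages:
        # Barrage content covered by the removal track is to be removed.
        if any(abs(bx - tx) <= radius and abs(by - ty) <= radius
               for tx, ty in removal_track):
            hit.append(text)
    return hit
```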
  • the content of the barrage that needs to be removed may be accurately determined according to the position of the target object mapped to the play screen of the barrage content, without removing other played barrage content, which improves the accuracy of the removal of the barrage content.
  • step S208 further includes: when the matched barrage adjustment action type is a barrage fast forward action type, speeding up the scrolling speed of the played barrage content in response to the barrage operation instruction.
  • the type of barrage fast forward action is the type of action to fast forward the barrage content.
  • the type of the barrage pause action is the type of action that pauses the barrage content.
  • when the barrage adjustment action type matching the action of the target object is the barrage fast forward action type, generation of the barrage operation instruction that matches the barrage fast forward action type and triggers the fast forward operation processing on the barrage content is triggered.
  • the computer device speeds up the scrolling speed of the played barrage content in response to the barrage operation command. In one embodiment, the computer device can shorten the scrolling time of the barrage content to speed up the scrolling speed of the played barrage content.
  • step S208 further includes: when the matched barrage adjustment action type is a barrage pause action type, stopping the played barrage content from scrolling in response to the barrage operation instruction.
  • when the barrage adjustment action type matching the action of the target object is the barrage pause action type, generation of the barrage operation command that matches the barrage pause action type and triggers the pause operation on the barrage content is triggered.
  • the content of the played barrage that is adjusted may be the barrage content played when the image frame is acquired, wherein the image frame is the image frame from which the action of the target object matching the barrage adjustment action type is recognized.
  • the content of the played barrage that is adjusted may also be the barrage content played from the time the image frame is acquired, or the barrage content currently being played.
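The fast forward and pause adjustments above can be sketched with a simple scroller model. The pixels-per-second representation and the doubling factor are illustrative assumptions; the text only requires that fast forward shortens the scrolling time and that pause stops the scrolling.

```python
# Sketch of fast-forwarding and pausing played barrage content.

class BarrageScroller:
    def __init__(self, speed=100.0):
        self.speed = speed   # scrolling speed in pixels per second
        self._saved = speed  # remembered speed for resuming after a pause

    def handle(self, action_type):
        if action_type == "barrage_fast_forward":
            self.speed *= 2  # shorter scrolling time -> faster scrolling
        elif action_type == "barrage_pause":
            self._saved, self.speed = self.speed, 0.0  # stop scrolling
        return self.speed
```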
  • the action of the target object is recognized from the image frame captured from the real scene and matched against the barrage fast forward or barrage pause action type, and the content of the barrage is automatically fast forwarded or paused according to the matching action type.
  • the adjustment of the barrage content can also be realized, without the user having to directly operate the barrage content, thereby improving the efficiency and flexibility of the barrage content adjustment.
  • the method further comprises: when the matching barrage adjustment action type is a barrage removal action type, obtaining a keyword included in the barrage content adjusted to be invisible; filtering out the barrage content including the keyword from the barrage content associated with the video; and, during the playback of the video, playing the remaining barrage content after filtering.
  • the matching barrage adjustment action type is a barrage removal action type
  • the computer device can analyze the content of the barrage adjusted to be invisible by itself, and extract the keywords included in the barrage content adjusted to be invisible.
  • the computer device can also send the barrage content adjusted to be invisible to the server, and obtain the keywords extracted from the barrage content adjusted by the server and adjusted to be invisible.
  • the computer device can segment the barrage content that is adjusted to be invisible to obtain a word segment.
  • the computer device can match the word segment obtained by the word segment with the keyword in the preset keyword library, and obtain the keyword included in the barrage content adjusted to be invisible.
  • the preset keyword library may be a keyword library set locally on the computer device, or may be a keyword library set in the server.
  • the computer device may send the word segments obtained by the word segmentation to the server, so that the server matches the word segments with the keywords in the keyword library and feeds the matching result back to the computer device, thereby obtaining the keywords included in the invisible barrage content.
  • the computer device may also perform semantic analysis on the content of the barrage that is adjusted to be invisible to extract keywords included in the barrage content that is adjusted to be invisible.
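The segmentation-plus-library-matching extraction above can be sketched as follows. Whitespace splitting stands in for real word segmentation (Chinese barrage text would need a proper word segmenter), and the sample keyword library is an assumption.

```python
# Extract keywords from barrage content adjusted to be invisible by
# segmenting it and matching segments against a preset keyword library.

KEYWORD_LIBRARY = {"ugly", "stupid"}  # preset keyword library (assumed)

def extract_keywords(barrage_content):
    segments = barrage_content.lower().split()  # naive word segmentation
    # Keep the segments that match keywords in the library.
    return [w for w in segments if w in KEYWORD_LIBRARY]
```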
  • the computer device can filter the barrage content including the keywords from the barrage content associated with the video. During the playback of the video, the computer device plays the remaining barrage content after filtering; that is, the barrage content played by the computer device no longer includes the barrage content containing the keyword.
  • when the matching barrage adjustment action type is the barrage removal action type, the keywords included in the barrage content adjusted to be invisible are acquired; the barrage content including the keywords is filtered out from the barrage content associated with the video; and, during the playback of the video, the remaining barrage content after filtering is played. Automatically filtering the barrage content to be blocked removes the need to manually add keywords to shield the barrage content, improving the efficiency of the barrage content adjustment.
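The filtering step above can be sketched in one line; the data shapes (plain strings for barrage content and keywords) are illustrative assumptions.

```python
# Drop any barrage content that includes one of the extracted keywords,
# so only the remaining content is played with the video.

def filter_barrage(barrage_contents, keywords):
    return [c for c in barrage_contents
            if not any(k in c for k in keywords)]
```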
  • obtaining the keywords included in the barrage content adjusted to be invisible includes: transmitting the barrage content adjusted to be invisible to the server through the locally running service; and receiving, as fed back by the server, the keywords extracted from the barrage content sent to the server.
  • the computer device may save the barrage content adjusted to be invisible as a parameter to the messaging carrier and communicate the invisible barrage content to the locally running service via the messaging carrier.
  • the barrage content adjusted to be invisible in the messaging carrier is sent to the server by a locally running service.
  • the computer device may, through the locally running service, send an HTTP request to the server based on the barrage content adjusted to be invisible in the messaging carrier, the HTTP request including the barrage content adjusted to be invisible.
  • the server may perform semantic analysis processing on the received barrage content that is adjusted to be invisible, and obtain keywords included in the barrage content adjusted to be invisible.
  • the server may also perform segmentation on the content of the barrage adjusted to be invisible, obtain a word segment, and match the word segment obtained by the segmentation with the keyword in the preset keyword library to extract and adjust to not Keywords that are visible in the visible barrage content.
  • the server can feed back keywords extracted from the barrage content that is adjusted to be invisible to the computer device.
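The client side of this exchange can be sketched with the standard library. The endpoint URL and the JSON payload shape are hypothetical; nothing is actually sent here, only the request is constructed.

```python
# Build the HTTP request the locally running service might send to the
# server, carrying the barrage content adjusted to be invisible.
import json
import urllib.request

def build_keyword_request(invisible_content):
    payload = json.dumps({"barrage": invisible_content}).encode("utf-8")
    return urllib.request.Request(
        "http://example.com/barrage/keywords",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen` and parsing the server's keyword response would complete the round trip described in the text.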
  • the content of the barrage adjusted to be invisible is sent to the server through the locally running service, and the extracted keywords are directly received from the server, thereby reducing the data processing on the computer device side and thus the impact on video playback. That is, while the barrage content is automatically blocked, the video playback quality is also guaranteed.
  • the video plays after logging in with the user identification.
  • the method further includes: associating the keyword with the user identifier; wherein the associated keyword is used to filter the barrage content including the associated keyword from the barrage content associated with a video played after logging in with the user identifier.
  • the video may be played by the video client after logging in to the video client with the user identification.
  • the video played after logging in with the user identifier may be any video played after logging in with the user identifier, and is not limited to the video corresponding to the barrage content from which the keyword was extracted. It can be understood that the video played after logging in with the user identifier may also refer only to the video corresponding to the barrage content from which the keyword was extracted.
  • the computer device may itself associate the keyword with the user identification after acquiring the keyword.
  • the computer device may also associate the keyword with the logged-in user identifier by the server, and obtain a keyword that is fed back by the server and associated with the user identifier. That is, after the server extracts the keyword, the keyword can be associated with the user identifier, and the keyword associated with the user identifier is returned to the computer device. Specifically, the server may save the extracted keywords to a personal mask list corresponding to the user identifier.
  • when playing a video after logging in with the user identifier, the computer device can acquire the barrage content associated with the video played after logging in with the user identifier, and filter out the barrage content including the keyword associated with the user identifier from the obtained barrage content.
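The per-user association described above (the server's "personal mask list" keyed by the user identifier) can be sketched as follows; the in-memory dict stands in for whatever storage the server actually uses.

```python
# Associate extracted keywords with a user identifier and use them to
# filter barrage content for any video played under that identifier.

personal_mask_lists = {}  # user identifier -> set of associated keywords

def associate(user_id, keywords):
    personal_mask_lists.setdefault(user_id, set()).update(keywords)

def filter_for_user(user_id, barrage_contents):
    masked = personal_mask_lists.get(user_id, set())
    return [c for c in barrage_contents
            if not any(k in c for k in masked)]
```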
  • the computer device can acquire the image frame acquired from the real scene when playing the video A and the corresponding barrage content a, and recognize the action of the target object in the image frame, when the target object When the action matches the preset barrage removal action type, the trigger generates a barrage removal operation instruction corresponding to the matched barrage removal action type, and adjusts the content of the played barrage to be invisible.
  • the computer device can obtain the keyword n included in the barrage content adjusted to be invisible, and associate the keyword n with the user identifier 1.
  • the computer device can also filter the barrage content including the associated keyword n from the barrage content b associated with the video B.
  • the barrage content including the keyword can be directly and automatically filtered out when a video is played after logging in with the user identifier, which reduces the operation steps of the barrage content adjustment, filters out the barrage content the user does not want, and improves the accuracy and value of the played barrage content.
  • Fig. 7 is a timing chart showing a method of controlling the content of the barrage in one embodiment.
  • the video client in the timing diagram is installed on the computer device.
  • the timing diagram specifically includes the following steps:
  • the video client obtains an image frame captured by the camera from the real scene while playing the video and the corresponding barrage content.
  • the video client identifies the target object in the image frame.
  • the video client inputs the feature parameter representing the action of the target object into the pre-trained machine learning model; when the machine learning model outputs the preset barrage adjustment action type according to the feature parameter, it is determined that the action of the target object matches the output preset barrage adjustment action type.
  • the video client triggers generation of a barrage operation instruction corresponding to the matched barrage adjustment action type.
  • the video client adjusts the content of the played barrage to be invisible in response to the barrage operation instruction.
  • the video client sends the barrage content adjusted to be invisible to the server through the locally running service.
  • the server extracts keywords from the content of the barrage that is adjusted to be invisible.
  • the video client receives the keywords, fed back by the server, extracted from the barrage content sent to the server.
  • the video client can receive, through the locally running service, the keywords extracted from the barrage content sent to the server; the locally running service here serves only as a keyword relay, so for simplicity the figure only draws the video client receiving the keywords from the server.
  • the video client associates the keyword with the user identification.
  • the associated keyword is used to filter the barrage content including the associated keyword from the barrage content associated with a video played after logging in with the user identifier.
  • the video client filters out the contents of the barrage including the associated keywords, and plays the remaining barrage content after filtering.
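The client-side portion of the timing flow above can be condensed into one handler; the function shapes and the injected `model_predict` callback are illustrative stand-ins for the modules described in the flow.

```python
# End-to-end sketch of the client flow: recognize the action, trigger the
# operation command, adjust the played content, and hand the removed
# content onward for keyword extraction.

def handle_frame(frame_positions, barrage_items, model_predict):
    """frame_positions: target-object coordinates across recent frames;
    barrage_items: currently played barrage contents (mutable list)."""
    action_type = model_predict(frame_positions)  # identify + match action
    if action_type == "barrage_removal":          # matched removal type
        removed = list(barrage_items)
        barrage_items.clear()                     # adjust played content
        return removed                            # sent on for keywords
    return []
```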
  • a computer device is provided, which may be the terminal 110 of FIG. 1.
  • the internal structural block diagram of the computer device can be as shown in FIG. 11.
  • the computer device includes a barrage content control device, and the barrage content control device includes various modules, and each module can be implemented in whole or in part by software, hardware or a combination thereof.
  • a barrage content control device 800 includes an image frame acquisition module 802, a motion recognition module 804, an operation command generation module 806, and a barrage content adjustment module 808, where:
  • the image frame obtaining module 802 is configured to acquire an image frame acquired from a real scene when playing the video and the corresponding barrage content.
  • the motion recognition module 804 is configured to identify an action of the target object in the image frame.
  • the operation instruction generating module 806 is configured to: when the action of the target object matches the preset barrage adjustment action type, trigger to generate a barrage operation instruction corresponding to the matched barrage adjustment action type.
  • the barrage content adjustment module 808 is configured to adjust the content of the played barrage according to the matched barrage adjustment action type in response to the barrage operation instruction.
  • the motion recognition module 804 is further configured to identify a target object in the acquired image frame; and determine an action of the target object according to a change in position of the target object in the adjacent image frame.
  • the motion recognition module 804 is further configured to determine a position coordinate of the target object in the image frame in a spatial rectangular coordinate system established with the camera that collects the image frame as the origin; according to the adjacent image frame The change of the position coordinates of the target object determines the action of the target object.
  • the apparatus 800 further includes:
  • the action type matching module 805 is configured to acquire a feature parameter that represents the action of the target object; input the feature parameter into the pre-trained machine learning model; and, when the machine learning model outputs the preset barrage adjustment action type according to the feature parameter, determine that the action of the target object matches the output preset barrage adjustment action type.
  • the action type matching module 805 is further configured to establish a spatial rectangular coordinate system with the camera that collects the image frame as an origin; acquire a training image frame acquired from the real scene by the camera; and according to the target in the adjacent training image frame The change of the position coordinates of the object in the space rectangular coordinate system determines the corresponding action of the target object; the machine learning training is performed according to the feature parameters of the determined action and the corresponding preset barrage adjustment action type, and the machine learning model is obtained.
  • the barrage content adjustment module 808 is further configured to adjust the content of the played barrage to be invisible in response to the barrage operation command when the matched barrage adjustment action type is a barrage removal action type.
  • the barrage content adjustment module 808 is further configured to: when the matching barrage adjustment action type is a barrage fast forward action type, speed up the scrolling speed of the played barrage content in response to the barrage operation instruction; When the matching barrage adjustment action type is the barrage pause action type, in response to the barrage operation instruction, the content of the played barrage is stopped to scroll.
  • the barrage content adjustment module 808 is further configured to: when the matched barrage adjustment action type is a barrage removal action type, obtain keywords included in the barrage content adjusted to be invisible; The content of the barrage including the keyword is filtered out in the content of the barrage associated with the video; during the playback of the video, the remaining barrage content after filtering is played.
  • the barrage content adjustment module 808 is further configured to send the barrage content adjusted to be invisible to the server through a locally running service; receive the keyword extracted from the barrage content sent to the server fed back by the server .
  • the video plays after logging in with the user identification.
  • the apparatus 800 further includes:
  • the association module 810 is configured to associate the keyword with the user identifier, where the associated keyword is used to filter out the barrage content including the associated keyword from the barrage content associated with a video played after logging in with the user identifier.
  • the barrage content adjustment module 808 is further configured to determine a position of the target object mapped to the play screen of the barrage content; determine the barrage content to be removed according to the position in the played barrage content; The removed barrage content is adjusted to be invisible.
  • Figure 11 is a block diagram showing the internal structure of a computer device in an embodiment.
  • the computer device can be the terminal 110 shown in Figure 1, including a processor, memory, network interface, display screen, and input device connected by a system bus.
  • the memory comprises a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium of the computer device can store operating system and computer readable instructions.
  • the computer readable instructions when executed, may cause the processor to perform a barrage content control method.
  • the processor of the computer device is used to provide computing and control capabilities to support the operation of the entire computer device.
  • the internal memory can store computer readable instructions that, when executed by the processor, cause the processor to perform a barrage content control method.
  • the network interface of the computer device is used for network communication.
  • the display of the computer device can be a liquid crystal display or an electronic ink display.
  • the input device of the computer device may be a touch layer covered on the display screen, a button, a trackball or a touchpad provided on the terminal casing, or an external keyboard, a touchpad or a mouse.
  • the computer device may be a personal computer, a mobile terminal, or an in-vehicle device, and the mobile terminal includes at least one of a mobile phone, a tablet, a personal digital assistant, or a wearable device.
  • FIG. 11 is only a block diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation of the computer device to which the solution of the present application is applied; the specific computer device may include more or fewer components than those shown in the figure, combine some components, or have a different component arrangement.
  • the barrage content control device provided by the present application can be implemented in the form of computer readable instructions that can be run on a computer device as shown in FIG. 11.
  • the storage medium can store various program modules constituting the barrage content control device, for example, the image frame acquisition module 802, the motion recognition module 804, the operation command generation module 806, and the barrage content adjustment module 808 shown in FIG.
  • computer readable instructions comprising the respective program modules are used for causing the computer device to perform the steps in the barrage content control method of the various embodiments of the present application described in this specification. For example, the computer device can, through the image frame acquisition module 802 in the barrage content control device 800 shown in FIG., acquire image frames acquired from the real scene when the video and the corresponding barrage content are played, and recognize the action of the target object in the image frame through the motion recognition module 804.
  • the computer device can trigger the generation of the barrage operation instruction corresponding to the matching type of the barrage adjustment action by the operation command generation module 806 when the action of the target object matches the preset barrage adjustment action type.
  • the computer device can adjust the content of the played barrage according to the matching barrage adjustment action type by the barrage content adjustment module 808 in response to the barrage operation instruction.
  • a computer apparatus comprising a memory and one or more processors, the memory having stored therein computer readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the steps of: acquiring an image frame acquired from a real scene while playing a video and the corresponding barrage content; recognizing an action of the target object in the image frame; when the action of the target object matches the preset barrage adjustment action type, triggering generation of the barrage operation instruction corresponding to the matching barrage adjustment action type; and, in response to the barrage operation instruction, adjusting the content of the played barrage according to the matching barrage adjustment action type.
  • the act of identifying a target object in an image frame comprises: identifying a target object in the acquired image frame; determining an action of the target object based on a change in position of the target object in the adjacent image frame.
  • determining the action of the target object according to the change of the position of the target object in the adjacent image frame comprises: determining the target object in the image frame in a spatial rectangular coordinate system established with the camera acquiring the image frame as an origin The position coordinates of the position; the action of the target object is determined according to the change of the position coordinates of the target object in the adjacent image frame.
  • the computer readable instructions further cause the processor to perform the steps of: obtaining feature parameters characterizing the action of the target object; inputting the feature parameters into the pre-trained machine learning model; and, when the machine learning model outputs the preset barrage adjustment action type based on the feature parameters, determining that the action of the target object matches the output preset barrage adjustment action type.
  • the computer readable instructions further cause the processor to perform the steps of: establishing a spatial rectangular coordinate system with the camera acquiring the image frame as the origin; acquiring training image frames acquired from the real scene by the camera; determining the corresponding action of the target object according to the change of the position coordinates of the target object in the spatial rectangular coordinate system across adjacent training image frames; and performing machine learning training according to the characteristic parameters of the determined action and the corresponding preset barrage adjustment action types to obtain the machine learning model.
  • adjusting the content of the played barrage according to the matched barrage adjustment action type includes: responding to the barrage when the matched barrage adjustment action type is a barrage removal action type Operation instructions to adjust the content of the played barrage to invisible.
  • adjusting the content of the played barrage according to the matched barrage adjustment action type further includes: responding to the bomb when the matched barrage adjustment action type is a barrage fast forward action type
  • the screen operation instruction speeds up the scrolling speed of the played barrage content; when the matching barrage adjustment action type is the barrage pause action type, the content of the played barrage is stopped to scroll in response to the barrage operation instruction.
  • the computer readable instructions further cause the processor to perform the step of: obtaining a keyword included in the barrage content adjusted to be invisible when the matching barrage adjustment action type is a barrage removal action type Filtering the content of the barrage including the keyword from the content of the barrage associated with the video; during the playback of the video, playing the remaining barrage content after filtering.
  • obtaining the keywords included in the barrage content adjusted to be invisible includes: transmitting the barrage content adjusted to be invisible to the server through the locally running service; receiving the feedback from the server to the server Keywords extracted from the contents of the barrage.
  • the video is played after logging in with the user identification; the computer readable instructions further cause the processor to perform the steps of associating a keyword with a user identification; wherein the associated keyword is used to log in from the user identification In the content of the barrage associated with the post-played video, the barrage content including the associated keyword is filtered out.
  • adjusting the content of the played barrage to invisible comprises: determining a position of the target object mapped to the play screen of the barrage content; determining the barrage content to be removed according to the position in the played barrage content ; Adjust the contents of the barrage to be removed to be invisible.
  • one or more storage media storing computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of: acquiring image frames captured from a real scene while a video and the corresponding barrage content are being played; recognizing an action of a target object in the image frames; when the action of the target object matches a preset barrage adjustment action type, triggering generation of a barrage operation instruction corresponding to the matched barrage adjustment action type; and, in response to the barrage operation instruction, adjusting the played barrage content according to the matched barrage adjustment action type.
  • recognizing the action of the target object in the image frames comprises: identifying the target object in the acquired image frames; and determining the action of the target object based on changes in the position of the target object across adjacent image frames.
  • determining the action of the target object according to the changes in its position across adjacent image frames comprises: determining, in a spatial rectangular coordinate system established with the camera that captures the image frames as the origin, the position coordinates of the target object in each image frame; and determining the action of the target object according to the changes in those position coordinates across adjacent image frames.
  • the computer readable instructions further cause the processors to perform the steps of: obtaining feature parameters characterizing the action of the target object; inputting the feature parameters into a pre-trained machine learning model; and, when the machine learning model outputs a preset barrage adjustment action type based on the feature parameters, determining that the action of the target object matches the output preset barrage adjustment action type.
  • the computer readable instructions further cause the processors to perform the steps of: establishing a spatial rectangular coordinate system with the camera that captures the image frames as the origin; acquiring training image frames captured from the real scene by the camera; determining the corresponding action of the target object according to the changes in its position coordinates in the spatial rectangular coordinate system across adjacent training image frames; and performing machine learning training on the feature parameters characterizing the determined actions and the corresponding preset barrage adjustment action types to obtain the machine learning model.
  • adjusting the played barrage content according to the matched barrage adjustment action type comprises: when the matched barrage adjustment action type is a barrage removal action type, adjusting the played barrage content to be invisible in response to the barrage operation instruction.
  • adjusting the played barrage content according to the matched barrage adjustment action type further comprises: when the matched barrage adjustment action type is a barrage fast-forward action type, speeding up the scrolling of the played barrage content in response to the barrage operation instruction; and, when the matched barrage adjustment action type is a barrage pause action type, stopping the scrolling of the played barrage content in response to the barrage operation instruction.
  • the computer readable instructions further cause the processors to perform the steps of: when the matched barrage adjustment action type is a barrage removal action type, obtaining keywords included in the barrage content adjusted to be invisible; filtering out barrage content that includes the keywords from the barrage content associated with the video; and, during playback of the video, playing the barrage content remaining after filtering.
  • obtaining the keywords included in the barrage content adjusted to be invisible comprises: sending the barrage content adjusted to be invisible to a server through a locally running service; and receiving, from the server, the keywords extracted from the barrage content sent to the server.
  • the video is played after logging in with a user identifier; the computer readable instructions further cause the processors to perform the step of associating the keywords with the user identifier, wherein the associated keywords are used to filter out barrage content that includes them from the barrage content associated with videos played after logging in with the user identifier.
  • adjusting the played barrage content to be invisible comprises: determining the position on the barrage playback screen to which the target object is mapped; determining, according to that position, the barrage content to be removed from the played barrage content; and adjusting the barrage content to be removed to be invisible.
  • the steps in the various embodiments of the present application are not necessarily performed in the order indicated. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they may be performed in other orders. Moreover, at least some of the steps in each embodiment may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; the execution order of these sub-steps or stages is likewise not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
  • Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A barrage content control method, comprising: acquiring image frames captured from a real scene while a video and the corresponding barrage content are being played; recognizing an action of a target object in the image frames; when the action of the target object matches a preset barrage adjustment action type, triggering generation of a barrage operation instruction corresponding to the matched barrage adjustment action type; and, in response to the barrage operation instruction, adjusting the played barrage content according to the matched barrage adjustment action type.

Description

弹幕内容控制方法、计算机设备和存储介质
本申请要求于2017年11月22日提交中国专利局,申请号为2017111766056,申请名称为“弹幕内容控制方法、装置、计算机设备和存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机技术领域,特别是涉及一种弹幕内容控制方法、计算机设备和存储介质。
背景技术
随着科学技术的飞速发展,人们表达自己言论和观点的方式越来越多样。在观看视频时通过弹幕来发表自己评论的方式也被很多人所喜爱。然而有些弹幕内容往往会对观看视频的人造成一定的干扰,比如,有些弹幕内容存在不雅的东西,这时就需要对弹幕内容进行一定的调整。
传统方法中,用户需要手动地中断播放中的视频,然后在屏蔽列表中手动地添加关键词,从而将包括该关键词的弹幕内容屏蔽掉,以调整弹幕内容。然而,手动添加关键词来调整弹幕内容的操作效率比较低。
发明内容
根据本申请提供的各种实施例,提供一种弹幕内容控制方法、计算机设备和存储介质。
一种弹幕内容控制方法,所述方法包括:
计算机设备获取在播放视频和相应的弹幕内容时从现实场景中采集的图像帧;
所述计算机设备识别所述图像帧中的目标对象的动作;
当所述目标对象的动作与预设的弹幕调整动作类型匹配时,所述计算机 设备则触发生成与匹配的弹幕调整动作类型相应的弹幕操作指令;及
所述计算机设备响应于所述弹幕操作指令,按照所述匹配的弹幕调整动作类型调整播放的弹幕内容。
一种计算机设备,包括存储器和一个或多个处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:
获取在播放视频和相应的弹幕内容时从现实场景中采集的图像帧;
识别所述图像帧中的目标对象的动作;
当所述目标对象的动作与预设的弹幕调整动作类型匹配时,则
触发生成与匹配的弹幕调整动作类型相应的弹幕操作指令;及
响应于所述弹幕操作指令,按照所述匹配的弹幕调整动作类型调整播放的弹幕内容。
一个或多个存储有计算机可读指令的存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行如下步骤:
获取在播放视频和相应的弹幕内容时从现实场景中采集的图像帧;
识别所述图像帧中的目标对象的动作;
当所述目标对象的动作与预设的弹幕调整动作类型匹配时,则
触发生成与匹配的弹幕调整动作类型相应的弹幕操作指令;及
响应于所述弹幕操作指令,按照所述匹配的弹幕调整动作类型调整播放的弹幕内容。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。基于本申请的说明书、附图以及权利要求书,本申请的其它特征、目的和优点将变得更加明显。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本 申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为一个实施例中弹幕内容控制方法的应用场景图;
图2为一个实施例中弹幕内容控制方法的流程示意图;
图3为一个实施例中采集图像帧的场景示意图;
图4为一个实施例中机器学习模型建立原理示意图;
图5至图6为一个实施例中移除弹幕内容的界面示意图;
图7为一个实施例中弹幕内容控制方法的时序图;
图8为一个实施例中弹幕内容控制装置的框图;
图9为另一个实施例中弹幕内容控制装置的框图;
图10为又一个实施例中弹幕内容控制装置的框图;及
图11为一个实施例中计算机设备的框图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
图1为一个实施例中弹幕内容控制方法的应用场景图。参照图1,该应用场景中包括通过网络连接的终端110和服务器120。终端110可以是智能电视机、台式计算机或移动终端,移动终端可以包括手机、平板电脑、笔记本电脑、个人数字助理和穿戴式设备等中的至少一种。服务器120可以用独立的服务器或者是多个物理服务器组成的服务器集群来实现。
终端110可以播放视频以及相应的弹幕内容,并获取在播放视频和相应弹幕内容时从现实场景中采集的图像帧。在一个实施例中,终端110可以通过自身集成的图像采集设备从现实场景中采集图像帧。可以理解,终端110也可以外接图像采集设备,以从现实场景中采集图像帧。终端110可以识别图像帧中的目标对象的动作。在一个实施例中,终端110可以通过自身识别 目标对象的动作与预设的弹幕调整动作类型是否匹配,终端110也可以将识别的目标对象的动作反馈至服务器120,通过服务器120识别目标对象的动作与预设的弹幕调整动作类型是否匹配。当目标对象的动作与预设的弹幕调整动作类型匹配时,则触发终端110生成与匹配的弹幕调整动作类型相应的弹幕操作指令;响应于弹幕操作指令,按照匹配的弹幕调整动作类型调整播放的弹幕内容。
图2为一个实施例中弹幕内容控制方法的流程示意图。本实施例主要以该弹幕内容控制方法应用于计算机设备来举例说明,该计算机设备可以是图1中的终端110。参照图2,该方法具体包括如下步骤:
S202,获取在播放视频和相应的弹幕内容时从现实场景中采集的图像帧。
其中,弹幕内容,是视频观看用户所录入的、显现于视频画面上的评论,能够以滚动、停留甚至更多动作方式显示于视频上。现实场景是自然真实的世界中存在的场景。图像帧是能够形成动态画面的图像帧序列中的单元,用来记录某时刻现实场景中的画面。
具体地,计算机设备可以播放视频和相应的弹幕内容。在一个实施例中,计算机设备可以通过视频客户端播放视频和相应的弹幕内容。可以理解,视频客户端,是主要用于视频播放处理的客户端。客户端可以是APP(Application的简称),指第三方应用程序。可以理解,计算机设备也可以通过网页播放视频和相应的弹幕内容。
计算机设备可以获取在播放视频和相应的弹幕内容时从现实场景中采集的图像帧。可以理解,获取的图像帧可以为一个或多个。在一个实施例中,获取的图像帧可以是多个连续采集的图像帧。
在一个实施例中,计算机设备自身集成了图像采集设备,计算机设备可以在播放视频和相应的弹幕内容时,通过自身集成的图像采集设备从现实场景中采集图像帧。在另一个实施例中,计算机设备也可以通过外接的图像采集设备在播放视频和相应的弹幕内容时从现实场景中采集图像帧。其中,图像采集设备可以是摄像头、照相机或摄影机等。
图3为一个实施例中采集图像帧的场景示意图。参照图3,计算机设备302播放视频304和弹幕内容。图3中所示的计算机设备302为PC机,这里省略了主机,只示出了显示屏。其中,“衣服真丑”、“不会呀,衣服很好看呀”就属于弹幕内容,观看用户306处于现实场景中观看计算机设备播放的视频和弹幕内容。在播放视频和相应弹幕内容时,计算机设备302通过自身集成的摄像头302a,从现实场景中采集图像帧,比如,摄像头302a可以拍摄现实场景,采集得到包括观看用户306的手势的图像帧。
S204,识别图像帧中的目标对象的动作。
其中,目标对象,是图像帧中的需要进行动作识别的对象。目标对象的动作,是目标对象在图像帧中所呈现的动作。目标对象的动作,包括通过目标对象的姿态所呈现的动作(比如,竖起大拇指)或根据目标对象的运动轨迹所体现的动作(比如,一只手左右挥动)。
在一个实施例中,目标对象可以是视频观看用户的身体部位,包括手部、脚部、胳膊、腿、头部和面部(眼睛、鼻子、嘴巴等)等部位中的至少一种。可以理解,基于人体对称性,具有统称的对称部位可以统称为一个部位,也可以区分为不同的部位,比如,基于人体对称性,左手和右手可以统称为手部,也可以区分为不同的部位。需要说明的是,目标对象还可以是能够通过动作控制控制弹幕内容的设备,这里对目标对象的具体形象不作限定。
在一个实施例中,当目标对象包括两个及以上的部位时,目标对象的动作可以是所包括的两个及以上的部位的动作组合(比如,摇头且左右摆手,即为头部和手部这两个部位的动作组合)。
在一个实施例中,目标对象可以是手部。目标对象的动作,可以是通过手部姿态所呈现的动作(比如,竖起大拇指)和/或通过手部运动轨迹所呈现的动作(比如,一只手左右挥动)。可以理解,当目标对象为手部时,目标对象可以为一只或两只手。当目标对象为两只手时,则目标对象的动作可以是两只手的动作的组合。比如,两只手比心或两只手画圆。
在一个实施例中,计算机设备可检测现实场景中采集的图像帧中是否包 括目标对象,当判定包括目标对象时,则识别该图像帧中的目标对象的动作。可以理解,图像帧中包括目标对象,是指图像帧中包括目标对象的图像。比如,图像帧中包括手部,是指图象帧中包括手部图像。
具体地,计算机设备可以对图像帧中的目标对象的姿态进行识别,根据所识别的目标对象的姿态,确定目标对象的动作。计算机设备也可以识别目标对象在图像帧中的位置,根据目标对象在图像帧中的位置,确定目标对象的动作。
S206,当目标对象的动作与预设的弹幕调整动作类型匹配时,则触发生成与匹配的弹幕调整动作类型相应的弹幕操作指令。
其中,弹幕调整动作类型,是对弹幕内容进行调整的动作的类型。在一个实施例中,弹幕调整动作类型包括弹幕移除动作类型、弹幕快进动作类型和弹幕暂停动作类型等中的至少一种。弹幕移除动作,是移除弹幕内容的动作。弹幕快进动作,是对弹幕内容进行快进的动作。弹幕暂停动作,是将弹幕内容暂停滚动的动作。
可以理解,弹幕调整动作类型还可以是对弹幕内容进行其他调整的动作类型。这里对弹幕调整动作类型不作限定,可以根据实际需求进行扩展设定。
在一个实施例中,计算机设备可以将目标对象的动作与预设的弹幕调整动作类型进行匹配。
具体地,计算机设备可以获取表征目标对象的动作的特征参数,将该特征参数与预设的弹幕调整动作类型所对应的特征参数进行匹配,当表征目标对象的动作的特征参数命中预设的弹幕调整动作类型所对应的特征参数时,则判定将目标对象的动作与所命中的特征参数相应的弹幕调整动作类型匹配。
在一个实施例中,计算机设备也可以预先训练机器学习模型,将表征目标对象的动作的特征参数输入预先训练得到的机器学习模型中,当机器学习模型根据特征参数输出预设的弹幕调整动作类型时,则判定该目标对象的动作与输出的预设的弹幕调整动作类型匹配。其中,机器学习模型是预先根据 表征动作的特征参数和相应预设的弹幕调整动作类型进行机器学习训练得到的。
在一个实施例中,与弹幕调整动作类型匹配的目标对象的动作可以为左右来回移动、上下来回移动或静止动作等。在一个实施例中,与弹幕移除动作类型匹配的目标对象的动作可以是上下来回移动,与弹幕快进动作类型匹配的目标对象的动作可以是左右来回移动、与弹幕暂停动作类型匹配的目标对象的动作可以是静止动作。可以理解,与预设的弹幕调整动作类型匹配的目标对象的动作还可以是其他的动作,对此不做限定。可以理解,这里所说的“左右”、“上下”,是相对于预设的参照方位而言的“左右”和“上下”,而预设的参照方位具体可以根据实际需求设定。
当目标对象的动作与预设的弹幕调整动作类型匹配时,则触发计算机设备生成与匹配的弹幕调整动作类型相应的弹幕操作指令。
其中,弹幕操作指令,是对弹幕内容执行操作的指令。与弹幕调整动作类型相应的弹幕操作指令,用于触发执行弹幕调整动作类型所对应的弹幕调整动作。
具体地,计算机设备中预先存储了弹幕调整动作类型与弹幕操作指令之间的映射关系,根据该映射关系,计算机设备可以将匹配的弹幕调整动作类型映射为相应的弹幕操作指令。
在一个实施例中,计算机设备中预先存储了以弹幕调整动作类型为键(Key)、以弹幕操作指令为值(Value)建立的哈希映射(HashMap)表。其中,键是Key-Value存储中的Key,通过键(Key)进行查询,能够查询到该键对应的值(Value)。计算机设备可以将匹配的弹幕调整动作类型与哈希映射表中的键(Key)匹配,并获取所匹配到的键(Key)所对应的值(Value),得到相应的弹幕操作指令。
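The Key-Value lookup described above can be sketched in plain Java; the action-type names and command strings below are hypothetical placeholders for illustration, not identifiers defined by this application:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch: a HashMap keyed by barrage adjustment action type (Key),
// whose values (Value) are the corresponding barrage operation commands.
public class BarrageCommandMap {
    static final Map<String, String> ACTION_TO_COMMAND = new HashMap<>();
    static {
        ACTION_TO_COMMAND.put("REMOVE", "CMD_SET_BARRAGE_INVISIBLE");
        ACTION_TO_COMMAND.put("FAST_FORWARD", "CMD_SPEED_UP_SCROLL");
        ACTION_TO_COMMAND.put("PAUSE", "CMD_STOP_SCROLL");
    }

    // Look up the operation command for a matched action type;
    // returns null when the action type has no registered command.
    static String commandFor(String actionType) {
        return ACTION_TO_COMMAND.get(actionType);
    }
}
```

Querying with the matched action type as the key returns the command value, mirroring the Key-Value query described in the text.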
S208,响应于弹幕操作指令,按照匹配的弹幕调整动作类型调整播放的弹幕内容。
具体地,计算机设备可以响应于弹幕操作指令,按照所匹配的弹幕调整 动作类型对播放的弹幕内容执行相应调整操作。可以理解,每个弹幕调整动作类型都有对应的调整操作。
在一个实施例中,计算机设备可以通过消息传递载体将弹幕操作指令作为参数传递至视频客户端组件。计算机设备可以通过视频客户端组件从消息传递载体中读取弹幕操作指令并响应。在一个实施例中,计算机设备可以通过消息传递载体将弹幕操作指令作为参数发送广播至视频客户端组件。
其中,消息传递载体是组件间通信信息的载体,封装了调用组件提供的指令和数据。在一个实施例中,消息传递载体可以是Intent。Intent是安卓***(Android***,一种基于Linux的自由及开放源代码的操作***)中存放操作的抽象描述的数据结构,可用于在不同的组件以及不同的App(Application,应用程序)间进行传递。
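As a minimal illustration of the message-passing carrier just described, here is a plain-Java stand-in; on Android this role would be played by an Intent carrying the command as an extra, and the class and key names below are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a message-passing carrier: the sender stores the barrage
// operation command as a named parameter, and the video client component
// reads it back out by the same name.
public class MessageCarrier {
    private final Map<String, String> extras = new HashMap<>();

    void putExtra(String key, String value) { extras.put(key, value); }

    String getExtra(String key) { return extras.get(key); }
}
```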
在一个实施例中,被调整的播放的弹幕内容,可以是采集该图像帧时所播放的弹幕内容,其中,图像帧是识别出与该弹幕调整动作类型匹配的目标对象的动作的图像帧。即基于图像帧匹配出的弹幕调整动作类型,用于调整采集该图像帧时播放的弹幕内容。比如,当图像帧中的目标对象的动作匹配到弹幕移除动作类型时,则将采集该图像帧时播放的弹幕内容进行移除处理,而接下来播放的弹幕内容可以不作处理。
可以理解,被调整的播放的弹幕内容,也可以是从采集图像帧时起所播放的弹幕内容或正在播放的弹幕内容。其中,正在播放的弹幕内容是响应于弹幕操作指令进行弹幕内容调整处理时正在播放的弹幕内容。计算机设备也可以按照匹配的弹幕调整类型,对从采集图像帧时起所播放的弹幕内容进行调整。比如,当图像帧中的目标对象的动作匹配到弹幕快进动作类型时,则可以对从采集图像帧时起所播放的弹幕内容进行快进。
在一个实施例中,当图像帧中的目标对象的动作匹配到弹幕移除动作类型时,既可以对播放的全部弹幕内容进行移除处理,也可以对播放的弹幕内容中的部分弹幕内容进行移除处理。
上述弹幕内容控制方法,在播放视频和弹幕内容时,对现实场景中目标 对象的动作进行采集和分析,按照该目标对象的动作匹配的弹幕调整动作类型,即可自动地调整播放的弹幕内容,不需要手动停止视频播放以及手动的添加关键词等繁琐的操作,提高了弹幕内容调整效率。
此外,当用户与视频播放界面还有一定距离时,也可以实现对弹幕内容的调整,而不需要用户一定要对弹幕内容进行直接操作,提高了弹幕内容调整的效率以及灵活性。
在一个实施例中,步骤S204包括:识别获取的图像帧中的目标对象;根据目标对象在相邻图像帧中的位置变化,确定目标对象的动作。
其中,相邻图像帧,是采集时序相邻的图像帧。采集时序,是图像帧被采集的时间先后顺序。
在一个实施例中,计算机设备可以提取采集的图像帧中包括的图像数据,从提取的图像数据中识别目标对象的特征数据,根据识别出的目标对象的特征数据确定图像帧中的目标对象。
在一个实施例中,根据目标对象在相邻图像帧中的位置变化,确定目标对象的动作包括:确定目标对象在所采集的图像帧中的位置;将目标对象在相邻图像中的位置进行比对,得到目标图像在相邻图像中的位置变化;计算机设备可以根据目标对象在相邻图像帧中的位置变化,确定目标对象的动作。可以理解,相邻图像帧是相邻的至少两个图像帧,可以是相邻的两个图像帧,也可以是两个以上相邻的图像帧,比如5个相邻的图像帧。
在一个实施例中,确定目标对象在所采集的图像帧中的位置包括:确定目标对象在图像帧的全局画面中所处的像素位置。将目标对象在相邻图像中的位置进行比对,得到目标图像在相邻图像中的位置变化包括:将目标对象在相邻图像中的像素位置进行比对,得到目标图像在相邻图像中的位置变化。可以理解,图像帧的全局画面,是图像帧的整体画面。
计算机设备可以根据目标对象在相邻图像帧中的位置变化,确定目标对象的动作。在一个实施例中,计算机设备可以根据目标对象在相邻图像帧中的位置变化,确定目标对象的运动轨迹,根据相应运动轨迹,确定目标对象 的动作。比如,运动轨迹为左右来回移动的运动轨迹时,则确定目标对象的动作为左右来回移动。
上述实施例,根据目标对象在相邻图像帧中的位置变化,确定目标对象的动作,并不局限于单单通过姿态所表征的目标对象的动作,提高了目标对象的动作的灵活性、从而提高了弹幕内容调整的多样性。
在一个实施例中,根据目标对象在相邻图像帧中的位置变化,确定目标对象的动作包括:在以采集图像帧的摄像头为原点建立的空间直角坐标系中,确定图像帧中的目标对象所处的位置坐标;根据在相邻图像帧中的目标对象所处的位置坐标的变化,确定目标对象的动作。
本实施例中,获取的图像帧是通过集成或外接的摄像头采集的。
具体地，计算机设备可以以采集图像帧的摄像头为原点建立空间直角坐标系。在一个实施例中，计算机设备可以以摄像头为原点，以平行于计算机设备的显示屏幕水平方向为横轴（X轴）、以平行于计算机设备的显示屏幕的竖直方向为纵轴（Y轴），以垂直于计算机设备的显示屏幕的方向为竖轴（Z轴）建立空间直角坐标系。可以理解，计算机设备也可以以摄像头为原点，以其它相互垂直的方向为横、纵、竖轴，建立空间直角坐标系。对此不做限定。
在该空间直角坐标系中,确定所获取的图像帧中的目标对象所处的位置坐标。计算机设备可以比对在相邻图像帧中的目标对象所处的位置坐标,根据在相邻图像帧中的目标对象所处的位置坐标的变化,确定目标对象的动作。
其中,相邻图像帧是相邻的至少两个图像帧,可以是相邻的两个图像帧,也可以是两个以上相邻的图像帧,比如5个相邻的图像帧。
在一个实施例中,计算机设备可以根据在相邻图像帧中的目标对象所处的位置坐标的变化,构建目标对象的运动轨迹,根据相应运动轨迹,确定目标对象的动作。
在一个实施例中,计算机设备也可以获取在相邻图像帧中的目标对象所处的位置坐标间的差值,分析该位置坐标间的差值的变化规律,根据该变化 规律确定目标对象的动作。比如,在两两相邻的图像帧中的目标对象所处的位置坐标在水平方向上的差值为正负交替,则可以确定目标对象的动作为水平来回移动。
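The sign-alternation rule in the example above can be sketched as follows, assuming the per-frame horizontal coordinates of the target object are available (the helper name is hypothetical):

```java
// Sketch: the action is "horizontal back-and-forth" when the differences
// of the x-coordinate between pairwise adjacent frames alternate between
// positive and negative.
public class MotionClassifier {
    static boolean isHorizontalBackAndForth(double[] xCoords) {
        if (xCoords.length < 3) return false; // need at least two deltas
        for (int i = 2; i < xCoords.length; i++) {
            double prev = xCoords[i - 1] - xCoords[i - 2];
            double cur = xCoords[i] - xCoords[i - 1];
            if (prev * cur >= 0) return false; // consecutive deltas must flip sign
        }
        return true;
    }
}
```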
上述实施例中,以采集图像帧的摄像头为参照,能够准确地判定出目标对象的位置变化,从而能较准确地确定目标对象的动作,进而提高了弹幕内容调整的准确性。
在一个实施例中,该方法还包括:获取表征目标对象的动作的特征参数;将特征参数输入预先训练的机器学习模型中;当机器学习模型根据特征参数输出预设的弹幕调整动作类型时,则判定目标对象的动作与输出的预设的弹幕调整动作类型匹配。
可以理解,目标对象的动作通过特征参数进行表征,计算机设备可以通过解析特征参数确定其所表征的动作。
具体地,计算机设备可以获取表征目标对象的动作的特征参数,并将特征参数输入预先训练的机器学习模型中。其中,机器学习模型是预先根据表征动作的特征参数和相应预设的弹幕调整动作类型进行机器学习训练得到的。
在一个实施例中,机器学习模型可以存储于计算机设备中,计算机设备将特征参数输入自身所存储的机器学习模型中。当机器学习模型根据特征参数输出预设的弹幕调整动作类型时,计算机设备则可以判定目标对象的动作与输出的预设的弹幕调整动作类型匹配。
在一个实施例中,机器学习模型可以存储于服务器中,计算机设备可以将特征参数发送至服务器,使服务器将特征参数输入机器学习模型中。当服务器通过机器学习模型根据特征参数输出预设的弹幕调整动作类型时,则判定目标对象的动作与输出的预设的弹幕调整动作类型匹配。服务器可以将该判定结果反馈至计算机设备。
在一个实施例中,目标对象的动作可以为左右来回移动或上下来回移动或静止动作,相应输出的弹幕调整动作类型分别为弹幕快进动作类型或弹幕 移除动作类型或弹幕暂停动作类型。
上述实施例中,通过预先训练的机器学习模型,来确定目标对象的动作匹配的弹幕调整动作类型,提高了匹配弹幕调整动作类型的准确性。
在一个实施例中,该方法还包括:以采集图像帧的摄像头为原点建立空间直角坐标系;获取通过摄像头从现实场景中采集的训练图像帧;根据相邻训练图像帧中的目标对象在空间直角坐标系中的位置坐标的变化,确定目标对象相应的动作;根据表征确定的动作的特征参数和相应预设的弹幕调整动作类型进行机器学习训练,得到机器学习模型。
其中,训练图像帧,是用于进行机器学习训练的图像帧。相邻训练图像帧是采集时序相邻的训练图像帧。相邻训练图像帧是相邻的至少两个训练图像帧,可以是相邻的两个训练图像帧,也可以是两个以上相邻的训练图像帧。
本实施例中,获取的图像帧是通过集成或外接的摄像头采集的。
具体地,计算机设备可以以采集图像帧的摄像头为原点建立空间直角坐标系。在一个实施例中,计算机设备可以以摄像头为原点,以平行于计算机设备的显示屏幕水平方向为横轴(X轴)、以平行于计算机设备的显示屏幕的竖直方向为纵轴(Y轴),以垂直于计算机设备的显示屏幕的方向为竖轴(Z轴)建立空间直角坐标系。可以理解,计算机设备也可以以摄像头为原点,以其它相互垂直的方向为横、纵、竖轴,建立空间直角坐标系。对此不做限定。
计算机设备可以通过摄像头从现实场景中采集训练图像帧。计算机设备可以识别训练图像帧中的目标对象,并确定各训练图像帧中目标对象在空间直角坐标系中的位置坐标,根据位置坐标,确定训练图像帧中的目标对象在空间直角坐标系中的位置坐标的变化。计算机设备可以根据训练图像帧中的目标对象在空间直角坐标系中的位置坐标的变化,确定目标对象相应的动作。
在一个实施例中,计算机设备可以根据在相邻训练图像帧中的目标对象所处的位置坐标的变化,构建目标对象的运动轨迹,根据相应运动轨迹,确定目标对象的动作。
在一个实施例中,计算机设备可以获取在相邻训练图像帧中的目标对象所处的位置坐标间的差值,分析该位置坐标间的差值的变化规律,根据该变化规律确定目标对象的动作。
计算机设备可以获取表征所确定的目标对象的动作的特征参数,计算机设备可以获取针对获取的特征参数预设的相应的弹幕调整动作类型,计算机设备可以将特征参数和相应预设的弹幕调整动作类型进行机器学习训练,得到机器学习模型。可以理解,该机器学习模型用于输出与输入的该特征参数匹配的弹幕调整动作类型。
图4为一个实施例中机器学习模型建立原理示意图。参照图4，假设相邻训练图像帧中的手在空间直角坐标系中所处的位置坐标沿水平方向来回变化（如404所示），则可以确定目标对象的动作为左右来回移动，假设相邻训练图像帧中的手在空间直角坐标系中所处的位置坐标不变化（如406所示），则可以确定目标对象的动作为静止动作。计算机设备可以获取表征手的动作的特征参数，并获取针对该特征参数预设的弹幕调整动作类型，根据特征参数与相应的弹幕调整动作类型进行机器学习训练。比如，针对表征左右来回移动的特征参数设置弹幕快进动作类型，针对表征上下来回移动的特征参数设置弹幕移除动作类型，针对表征静止动作的特征参数设置弹幕暂停动作类型。
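A minimal sketch of feature parameters for such training samples, under the illustrative assumption that total travel and net displacement along each axis are enough to separate the three example actions (all names, labels, and thresholds below are hypothetical, not prescribed by this application):

```java
// Sketch: feature parameters of a hand trajectory — total travel along each
// axis and the net displacement — plus a rule-of-thumb labeling consistent
// with the example: large travel with small net displacement along one axis
// means "back and forth" along that axis; no travel at all means stationary.
public class TrajectoryFeatures {
    static double[] features(double[] xs, double[] ys) {
        double travelX = 0, travelY = 0;
        for (int i = 1; i < xs.length; i++) {
            travelX += Math.abs(xs[i] - xs[i - 1]);
            travelY += Math.abs(ys[i] - ys[i - 1]);
        }
        double netX = Math.abs(xs[xs.length - 1] - xs[0]);
        double netY = Math.abs(ys[ys.length - 1] - ys[0]);
        return new double[]{travelX, travelY, netX, netY};
    }

    static String label(double[] f) {
        if (f[0] < 1e-6 && f[1] < 1e-6) return "PAUSE";              // stationary
        if (f[0] > f[1] && f[2] < 0.5 * f[0]) return "FAST_FORWARD"; // left-right
        if (f[1] > f[0] && f[3] < 0.5 * f[1]) return "REMOVE";       // up-down
        return "UNKNOWN";
    }
}
```

In a real system these features and labels would feed a trained model rather than fixed thresholds; the rules here only illustrate how labeled training pairs could be produced.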
上述实施例中,以采集图像帧的摄像头为原点建立空间直角坐标系;根据采集的训练图像帧中的目标对象在空间直角坐标系中的位置坐标的变化,确定目标对象相应的动作;根据表征确定的动作的特征参数和相应预设的弹幕调整动作类型进行机器学习训练,得到机器学习模型。基于该机器学习训练得到的机器学习模型判断目标对象的动作是否与预设的匹配弹幕调整动作类型,提高了准确性。
在一个实施例中,步骤S208包括:当匹配的弹幕调整动作类型为弹幕移 除动作类型时,响应于弹幕操作指令,将播放的弹幕内容调整为不可见。
其中,弹幕移除动作类型,是将弹幕内容移除的动作类型。
具体地,当与目标对象的动作匹配的弹幕调整动作类型为弹幕移除动作类型时,则触发生成与弹幕移除动作类型匹配的、触发对弹幕内容的移除操作处理的弹幕操作指令。计算机设备响应于弹幕操作指令,将播放的弹幕内容调整为不可见。
可以理解,被调整的播放的弹幕内容,可以是采集该图像帧时所播放的弹幕内容,其中,图像帧是识别出与该弹幕调整动作类型匹配的目标对象的动作的图像帧。被调整的播放的弹幕内容,也可以是从采集图像帧时起所播放的弹幕内容或正在播放的弹幕内容。
在一个实施例中,计算机设备可以调整播放的弹幕内容的可见度至不可见。其中,可见度(Visible)是能被看见的清晰程度。具体地,计算机设备可以调整播放的弹幕内容的可见度的值为False,使得弹幕内容不可见。
在一个实施例中,计算机设备也可以将播放的弹幕内容删除。
图5至图6为一个实施例中移除弹幕内容的界面示意图。参照图5,图5中显示了一些不健康的弹幕内容,比如“衣服丑毙了”、“发型丑到爆”等弹幕内容,则用户可以做出匹配弹幕移除动作类型的手势(即手部动作),计算机设备则可以按照弹幕移除动作类型调整播放的弹幕内容至不可见。图6中则为将“衣服丑毙了”、“发型丑到爆”这些不健康的弹幕内容调整为不可见后的弹幕内容。
上述实施例中,按照匹配的弹幕移除动作类型,将播放的弹幕内容自动地调整为不可见。不需要手动地添加关键词来屏蔽弹幕内容,提高了弹幕内容调整的效率。此外,当用户与视频播放界面还有一定距离时,也可以实现对弹幕内容的调整,而不需要用户一定要对弹幕内容进行直接操作,提高了弹幕内容调整的效率以及灵活性。
在一个实施例中,将播放的弹幕内容调整为不可见包括:确定目标对象映射到弹幕内容的播放画面中的位置;在播放的弹幕内容中按照位置确定待 移除的弹幕内容;将待移除的弹幕内容调整为不可见。
其中,目标对象映射到弹幕内容的播放画面中的位置,是将目标对象在现实场景中所处的位置映射为在弹幕内容的播放画面中的位置。可以理解,该映射可以实现现实场景中的动作与对播放画面的操作的有效对接。
在一个实施例中,目标对象映射到弹幕内容的播放画面中的位置,可以是目标对象做弹幕移除动作时映射到弹幕内容的播放画面中的起始位置,也可以是目标对象在做弹幕移除动作的过程中映射到弹幕内容的播放画面中的位置。
需要说明的是,计算机设备中预先设置了现实场景中的位置与播放画面中的位置之间的映射关系。该映射关系,用于实现现实场景中的位置与播放画面中的位置之间的对应转换。
在一个实施例中,计算机设备可以获取在现实场景中目标对象做弹幕移除动作的起始位置,根据上述映射关系,将目标对象在现实场景中的起始位置映射为在弹幕内容的播放画面中的起始位置。计算机设备可以将当前播放至目标对象所映射到的起始位置处的弹幕内容进行选中,作为待移除的弹幕内容。
在另一个实施例中,计算机设备也可以获取目标对象在做弹幕移除动作的过程中于现实场景中所处的位置,根据上述映射关系,将获取的做弹幕移除动作的过程中所处的各个位置,依次映射为在弹幕内容的播放画面中的位置,得到相应的弹幕移除轨迹。计算机设备可以根据该弹幕移除轨迹从播放的弹幕内容中确定待移除的弹幕内容。在一个实施例中,计算机设备可以将该弹幕移除轨迹所覆盖的弹幕内容作为待移除的弹幕内容。
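One simple way to realize the position mapping described above is a linear map from normalized camera coordinates to screen pixels; this is an illustrative assumption, not the mapping prescribed by this application:

```java
// Sketch: map a position in the camera's field of view (normalized to
// [0,1] on each axis) to a pixel position on the playback screen.
public class PositionMapper {
    final int screenWidth, screenHeight;

    PositionMapper(int w, int h) { screenWidth = w; screenHeight = h; }

    // Mirror horizontally so that a hand moving to the user's left maps to
    // a leftward movement on screen (camera images are mirror-reversed).
    int[] toScreen(double nx, double ny) {
        int px = (int) Math.round((1.0 - nx) * (screenWidth - 1));
        int py = (int) Math.round(ny * (screenHeight - 1));
        return new int[]{px, py};
    }
}
```

Mapping each position along the removal gesture in turn yields the removal trajectory, and the barrage content that trajectory covers can then be selected for removal.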
上述实施例中,在对弹幕内容进行移除处理时,可以根据目标对象映射到弹幕内容的播放画面中的位置,准确地确定出需要移除的弹幕内容,而不需要对其他播放的弹幕内容进行移除处理,提高了弹幕内容移除的精准性。
在一个实施例中,步骤S208还包括:当匹配的弹幕调整动作类型为弹幕快进动作类型时,响应于弹幕操作指令,加快播放的弹幕内容的滚动速度。
其中,弹幕快进动作类型,是对弹幕内容进行快进的动作类型。弹幕暂停动作类型,是将弹幕内容暂停滚动的动作类型。
具体地,当与目标对象的动作匹配的弹幕调整动作类型为弹幕快进动作类型时,则触发生成与弹幕快进动作类型匹配的、触发对弹幕内容的快进操作处理的弹幕操作指令。计算机设备响应于弹幕操作指令,加快播放的弹幕内容的滚动速度。在一个实施例中,计算机设备可以将弹幕内容的滚动时间进行缩短处理,以加快播放的弹幕内容的滚动速度。
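The duration-shortening idea can be sketched as a small helper (the name and the `factor` parameter are hypothetical; a larger factor means faster scrolling):

```java
// Sketch: fast-forward the barrage by shortening the time a comment takes
// to scroll across the screen.
public class BarrageScroller {
    static long fastForwardDuration(long baseMillis, double factor) {
        if (factor <= 1.0) return baseMillis; // no slow-down in this sketch
        return Math.round(baseMillis / factor);
    }
}
```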
在一个实施例中,步骤S208还包括:当匹配的弹幕调整动作类型为弹幕暂停动作类型时,响应于弹幕操作指令,将播放的弹幕内容停止滚动。
当与目标对象的动作匹配的弹幕调整动作类型为弹幕暂停动作类型时，则触发生成与弹幕暂停动作类型匹配的、触发对弹幕内容的暂停操作处理的弹幕操作指令。计算机设备响应于弹幕操作指令，将播放的弹幕内容停止滚动。
可以理解,被调整的播放的弹幕内容,可以是采集该图像帧时所播放的弹幕内容,其中,图像帧是识别出与该弹幕调整动作类型匹配的目标对象的动作的图像帧。被调整的播放的弹幕内容,也可以是从采集图像帧时起所播放的弹幕内容或正在播放的弹幕内容。
上述实施例中，通过从现实场景中采集的图像帧，匹配出弹幕快进或暂停动作类型，按照匹配的弹幕快进或暂停动作类型，将播放的弹幕内容自动地快进或暂停，当用户与视频播放界面还有一定距离时，也可以实现对弹幕内容的调整，而不需要用户一定要对弹幕内容进行直接操作，提高了弹幕内容调整的效率以及灵活性。
在一个实施例中,该方法还包括:当匹配的弹幕调整动作类型为弹幕移除动作类型时,获取调整为不可见的弹幕内容中所包括的关键词;从与视频关联的弹幕内容中过滤掉包括关键词的弹幕内容;在视频的播放过程中,播放过滤后剩余的弹幕内容。
具体地,计算机设备可以自身对调整为不可见的弹幕内容进行分析,提 取调整为不可见的弹幕内容中所包括的关键词。计算机设备也可以将调整为不可见的弹幕内容发送至服务器,获取服务器反馈的从调整为不可见的弹幕内容中提取的关键词。
在一个实施例中,计算机设备可以对调整为不可见的弹幕内容进行分词,得到词片段。计算机设备可以将分词得到的词片段与预设的关键词库中的关键词进行匹配,得到调整为不可见的弹幕内容中包括的关键词。
可以理解,预设的关键词库可以是设置于计算机设备本地的关键词库,也可以是设置于服务器中的关键词库。当预设的关键词库设置于服务器中时,计算机设备可以将分词得到的词片段发送至服务器,使服务器将词片段与关键词库中的关键词进行匹配,并反馈匹配结果至计算机设备,得到调整为不可见的弹幕内容中包括的关键词。
在一个实施例中,计算机设备也可以对调整为不可见的弹幕内容进行语义分析,以提取出调整为不可见的弹幕内容中所包括的关键词。
计算机设备可以从与视频关联的弹幕内容中过滤掉包括关键词的弹幕内容。计算机设备在该视频的播放过程中,播放过滤后剩余的弹幕内容,即计算机设备在播放该视频的过程中,播放的弹幕内容中不再有包括关键词的弹幕内容。
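The keyword filtering step can be sketched as follows (helper and parameter names are hypothetical):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch: drop every barrage comment that contains any of the keywords
// extracted from the comments the user removed, and keep the rest for play.
public class BarrageFilter {
    static List<String> filter(List<String> barrage, List<String> keywords) {
        return barrage.stream()
                .filter(comment -> keywords.stream().noneMatch(comment::contains))
                .collect(Collectors.toList());
    }
}
```

During playback only the returned list is shown, so comments containing the keywords never appear again.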
上述实施例中,当匹配的弹幕调整动作类型为弹幕移除动作类型时,获取调整为不可见的弹幕内容中所包括的关键词;从与视频关联的弹幕内容中过滤掉包括关键词的弹幕内容;在视频的播放过程中,播放过滤后剩余的弹幕内容。自动地过滤了想要屏蔽的弹幕内容,不需要手动地添加关键词来屏蔽弹幕内容,提高了弹幕内容调整的效率。
在一个实施例中,获取调整为不可见的弹幕内容中所包括的关键词包括:通过本地运行的服务将调整为不可见的弹幕内容发送至服务器;接收服务器反馈的从发送至服务器的弹幕内容中提取的关键词。
在一个实施例中,计算机设备可以将调整为不可见的弹幕内容作为参数保存至消息传递载体中,并通过消息传递载体将调整为不可见的弹幕内容传 递至本地运行的服务。通过本地运行的服务将消息传递载体中的调整为不可见的弹幕内容发送至服务器。
在一个实施例中,计算机设备可以通过本地运行的服务根据消息传递载体中的调整为不可见的弹幕内容构造HTTP请求发送至服务器,该HTTP请求中包括调整为不可见的弹幕内容。
在一个实施例中,服务器可以对接收的调整为不可见的弹幕内容进行语义分析处理,得到调整为不可见的弹幕内容中包括的关键词。
在一个实施例中,服务器也可以对调整为不可见的弹幕内容进行分词,得到词片段,将分词得到的词片段与预设的关键词库中的关键词进行匹配,以提取调整为不可见的弹幕内容中包括的关键词。
服务器可以将从调整为不可见的弹幕内容中提取的关键词反馈至计算机设备。
上述实施例中,通过本地运行的服务将调整为不可见的弹幕内容发送至服务器,从服务器中直接接收提取的关键词,减少了计算机设备这一端的数据处理,从而减少对视频播放的影响。即在自动屏蔽掉弹幕内容的同时,也保证了视频播放质量。
在一个实施例中,视频在以用户标识进行登录后播放。该方法还包括:将关键词与用户标识相关联;其中,关联的关键词,用于从用户标识登录后播放的视频所关联的弹幕内容中,过滤掉包括关联的关键词的弹幕内容。
在一个实施例中,视频可以是在以用户标识登录视频客户端后,通过视频客户端播放。
其中,用户标识登录后播放的视频,可以是以用户标识登录后播放的任意视频,并不限定于与关键词所提取自的弹幕内容对应的视频。可以理解,以用户标识登录后播放的视频,也可以是指与关键词所提取自的弹幕内容对应的视频。
在一个实施例中,计算机设备可以在获取关键词后,自身将该关键词与该用户标识关联。
在一个实施例中,计算机设备也可以通过服务器将关键词与登录的用户标识关联,获取服务器反馈的与用户标识关联的关键词。即服务器在提取关键词后,可以将关键词与用户标识关联,并向计算机设备返回与用户标识关联的关键词。具体地,服务器可以将提取的关键词保存至与用户标识对应的个人屏蔽列表中。
在以用户标识登录后播放视频时,计算机设备可以获取与以用户标识登录后播放的视频关联的弹幕内容,并从获取的弹幕内容中过滤掉包括与该用户标识关联的关键词的弹幕内容。
现举例对上述处理进行解释说明。假设以用户标识1登录后播放视频A,计算机设备可以获取从播放视频A和相应弹幕内容a时的现实场景中采集的图像帧,并识别图像帧中的目标对象的动作,当目标对象的动作与预设的弹幕移除动作类型匹配时,则触发生成与匹配的弹幕移除动作类型相应的弹幕移除操作指令,将播放的弹幕内容调整为不可见。计算机设备可以获取调整为不可见的弹幕内容中所包括的关键词n,并将该关键词n与用户标识1关联。在以用户标识1登录后播放视频B时,计算机设备也可以从与视频B关联的弹幕内容b中过滤掉包括关联的关键词n的弹幕内容。
上述实施例中,通过将关键词与用户标识关联,可以在以后以用户标识登录后播放视频时,直接自动地过滤掉包括关键词的弹幕内容,既减少了弹幕内容调整操作步骤,又能够将用户不想要的弹幕内容过滤掉,提高了播放的弹幕内容的准确性和价值量。
图7为一个实施例中弹幕内容控制方法的时序图。时序图中的视频客户端安装于计算机设备中。该时序图具体包括以下步骤:
1)视频客户端在以用户标识登录后,播放视频和相应的弹幕内容。
2)视频客户端获取摄像头在播放视频和相应的弹幕内容时从现实场景中采集的图像帧。
3)视频客户端识别图像帧中的目标对象。
4)在以摄像头为原点建立的空间直角坐标系中,确定图像帧中的目标对 象所处的位置坐标;根据在相邻图像帧中的目标对象所处的位置坐标的变化,确定目标对象的动作。
5)视频客户端将表征目标对象的动作的特征参数输入预先训练的机器学习模型中;当机器学习模型根据特征参数输出预设的弹幕调整动作类型时,则判定目标对象的动作与输出的预设的弹幕调整动作类型匹配。
6)视频客户端在目标对象的动作与预设的弹幕调整动作类型匹配时,则触发生成与匹配的弹幕调整动作类型相应的弹幕操作指令。
7)视频客户端在匹配的弹幕调整动作类型为弹幕移除动作类型时,响应于弹幕操作指令,将播放的弹幕内容调整为不可见。
8)视频客户端发送调整为不可见的弹幕内容至本地运行的服务。
9)本地运行的服务将调整为不可见的弹幕内容发送至服务器。
10)服务器从调整为不可见的弹幕内容中提取关键词。
11)视频客户端接收服务器反馈的从发送至服务器的弹幕内容中提取的关键词。
可以理解,视频客户端可以通过本地运行的服务接收服务器反馈的从发送至服务器的弹幕内容中提取的关键词,这里本地运行的服务仅是起一个关键词中转作用,为了简洁示意,所以简化地画出视频客户端接收服务器反馈的关键词。
12)视频客户端将关键词与用户标识相关联。
其中,关联的关键词,用于从以用户标识登录后播放的视频所关联的弹幕内容中,过滤掉包括关联的关键词的弹幕内容。
13)视频客户端在视频的播放过程中,过滤掉包括关联的关键词的弹幕内容,并播放过滤后剩余的弹幕内容。
在一个实施例中,提供了一种计算机设备,该计算机设备可以为图1中的终端110。该计算机设备的内部结构框图可如图11所示,该计算机设备包括弹幕内容控制装置,弹幕内容控制装置中包括各个模块,每个模块可全部或部分通过软件、硬件或其组合来实现。
如图8所示,在一个实施例中,提供了一种弹幕内容控制装置800,该装置800包括:图像帧获取模块802、动作识别模块804、操作指令生成模块806以及弹幕内容调整模块808,其中:
图像帧获取模块802,用于获取在播放视频和相应的弹幕内容时从现实场景中采集的图像帧。
动作识别模块804,用于识别图像帧中的目标对象的动作。
操作指令生成模块806,用于当目标对象的动作与预设的弹幕调整动作类型匹配时,则触发生成与匹配的弹幕调整动作类型相应的弹幕操作指令。
弹幕内容调整模块808,用于响应于弹幕操作指令,按照匹配的弹幕调整动作类型调整播放的弹幕内容。
在一个实施例中,动作识别模块804还用于识别获取的图像帧中的目标对象;根据目标对象在相邻图像帧中的位置变化,确定目标对象的动作。
在一个实施例中,动作识别模块804还用于在以采集图像帧的摄像头为原点建立的空间直角坐标系中,确定图像帧中的目标对象所处的位置坐标;根据在相邻图像帧中的目标对象所处的位置坐标的变化,确定目标对象的动作。
如图9所示,在一个实施例中,该装置800还包括:
动作类型匹配模块805,用于获取表征目标对象的动作的特征参数;将特征参数输入预先训练的机器学习模型中;当机器学习模型根据特征参数输出预设的弹幕调整动作类型时,则判定目标对象的动作与输出的预设的弹幕调整动作类型匹配。
在一个实施例中,动作类型匹配模块805还用于以采集图像帧的摄像头为原点建立空间直角坐标系;获取通过摄像头从现实场景中采集的训练图像帧;根据相邻训练图像帧中的目标对象在空间直角坐标系中的位置坐标的变化,确定目标对象相应的动作;根据表征确定的动作的特征参数和相应预设的弹幕调整动作类型进行机器学习训练,得到机器学习模型。
在一个实施例中,弹幕内容调整模块808还用于当匹配的弹幕调整动作 类型为弹幕移除动作类型时,响应于弹幕操作指令,将播放的弹幕内容调整为不可见。
在一个实施例中,弹幕内容调整模块808还用于当匹配的弹幕调整动作类型为弹幕快进动作类型时,响应于弹幕操作指令,加快播放的弹幕内容的滚动速度;当匹配的弹幕调整动作类型为弹幕暂停动作类型时,响应于弹幕操作指令,将播放的弹幕内容停止滚动。
在一个实施例中,弹幕内容调整模块808还用于当匹配的弹幕调整动作类型为弹幕移除动作类型时,获取调整为不可见的弹幕内容中所包括的关键词;从与视频关联的弹幕内容中过滤掉包括关键词的弹幕内容;在视频的播放过程中,播放过滤后剩余的弹幕内容。
在一个实施例中,弹幕内容调整模块808还用于通过本地运行的服务将调整为不可见的弹幕内容发送至服务器;接收服务器反馈的从发送至服务器的弹幕内容中提取的关键词。
在一个实施例中,视频在以用户标识进行登录后播放。如图10所示,该装置800还包括:
关联模块810,用于将关键词与用户标识相关联;其中,关联的关键词,用于从用户标识登录后播放的视频所关联的弹幕内容中,过滤掉包括关联的关键词的弹幕内容。
在一个实施例中,弹幕内容调整模块808还用于确定目标对象映射到弹幕内容的播放画面中的位置;在播放的弹幕内容中按照位置确定待移除的弹幕内容;将待移除的弹幕内容调整为不可见。
图11为一个实施例中计算机设备的内部结构示意图。参照图11,该计算机设备可以是图1中所示的终端110,该计算机设备包括通过***总线连接的处理器、存储器、网络接口、显示屏和输入装置。其中,存储器包括非易失性存储介质和内存储器。该计算机设备的非易失性存储介质可存储操作***和计算机可读指令。该计算机可读指令被执行时,可使得处理器执行一种弹幕内容控制方法。该计算机设备的处理器用于提供计算和控制能力,支撑 整个计算机设备的运行。该内存储器中可储存有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器执行一种弹幕内容控制方法。计算机设备的网络接口用于进行网络通信。计算机设备的显示屏可以是液晶显示屏或者电子墨水显示屏等。计算机设备的输入装置可以是显示屏上覆盖的触摸层,也可以是终端外壳上设置的按键、轨迹球或触控板,也可以是外接的键盘、触控板或鼠标等。该计算机设备可以是个人计算机、移动终端或车载设备,移动终端包括手机、平板电脑、个人数字助理或可穿戴设备等中的至少一种。
本领域技术人员可以理解,图11中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,本申请提供的弹幕内容控制装置可以实现为一种计算机可读指令的形式,计算机可读指令可在如图11所示的计算机设备上运行,计算机设备的非易失性存储介质可存储组成该弹幕内容控制装置的各个程序模块,比如,图8所示的图像帧获取模块802、动作识别模块804、操作指令生成模块806以及弹幕内容调整模块808。各个程序模块所组成的计算机可读指令用于使该计算机设备执行本说明书中描述的本申请各个实施例的弹幕内容控制方法中的步骤,例如,计算机设备可以通过如图8所示的弹幕内容控制装置800中的图像帧获取模块802获取在播放视频和相应的弹幕内容时从现实场景中采集的图像帧,并通过动作识别模块804识别图像帧中的目标对象的动作。计算机设备可以通过操作指令生成模块806当目标对象的动作与预设的弹幕调整动作类型匹配时,则触发生成与匹配的弹幕调整动作类型相应的弹幕操作指令。计算机设备可以通过弹幕内容调整模块808响应于弹幕操作指令,按照匹配的弹幕调整动作类型调整播放的弹幕内容。
在一个实施例中,提供了一种计算机设备,包括存储器和一个或多个处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被一个或 多个处理器执行时,使得所述一个或多个处理器执行如下步骤:获取在播放视频和相应的弹幕内容时从现实场景中采集的图像帧;识别图像帧中的目标对象的动作;当目标对象的动作与预设的弹幕调整动作类型匹配时,则触发生成与匹配的弹幕调整动作类型相应的弹幕操作指令;响应于弹幕操作指令,按照匹配的弹幕调整动作类型调整播放的弹幕内容。
在一个实施例中,识别图像帧中的目标对象的动作包括:识别获取的图像帧中的目标对象;根据目标对象在相邻图像帧中的位置变化,确定目标对象的动作。
在一个实施例中,根据目标对象在相邻图像帧中的位置变化,确定目标对象的动作包括:在以采集图像帧的摄像头为原点建立的空间直角坐标系中,确定图像帧中的目标对象所处的位置坐标;根据在相邻图像帧中的目标对象所处的位置坐标的变化,确定目标对象的动作。
在一个实施例中,计算机可读指令还使得处理器执行以下步骤:获取表征目标对象的动作的特征参数;将特征参数输入预先训练的机器学习模型中;当机器学习模型根据特征参数输出预设的弹幕调整动作类型时,则判定目标对象的动作与输出的预设的弹幕调整动作类型匹配。
在一个实施例中,计算机可读指令还使得处理器执行以下步骤:以采集图像帧的摄像头为原点建立空间直角坐标系;获取通过摄像头从现实场景中采集的训练图像帧;根据相邻训练图像帧中的目标对象在空间直角坐标系中的位置坐标的变化,确定目标对象相应的动作;根据表征确定的动作的特征参数和相应预设的弹幕调整动作类型进行机器学习训练,得到机器学习模型。
在一个实施例中,响应于弹幕操作指令,按照匹配的弹幕调整动作类型调整播放的弹幕内容包括:当匹配的弹幕调整动作类型为弹幕移除动作类型时,响应于弹幕操作指令,将播放的弹幕内容调整为不可见。
在一个实施例中,响应于弹幕操作指令,按照匹配的弹幕调整动作类型调整播放的弹幕内容还包括:当匹配的弹幕调整动作类型为弹幕快进动作类型时,响应于弹幕操作指令,加快播放的弹幕内容的滚动速度;当匹配的弹 幕调整动作类型为弹幕暂停动作类型时,响应于弹幕操作指令,将播放的弹幕内容停止滚动。
在一个实施例中,计算机可读指令还使得处理器执行以下步骤:当匹配的弹幕调整动作类型为弹幕移除动作类型时,获取调整为不可见的弹幕内容中所包括的关键词;从与视频关联的弹幕内容中过滤掉包括关键词的弹幕内容;在视频的播放过程中,播放过滤后剩余的弹幕内容。
在一个实施例中,获取调整为不可见的弹幕内容中所包括的关键词包括:通过本地运行的服务将调整为不可见的弹幕内容发送至服务器;接收服务器反馈的从发送至服务器的弹幕内容中提取的关键词。
在一个实施例中,视频在以用户标识进行登录后播放;计算机可读指令还使得处理器执行以下步骤:将关键词与用户标识相关联;其中,关联的关键词,用于从用户标识登录后播放的视频所关联的弹幕内容中,过滤掉包括关联的关键词的弹幕内容。
在一个实施例中,将播放的弹幕内容调整为不可见包括:确定目标对象映射到弹幕内容的播放画面中的位置;在播放的弹幕内容中按照位置确定待移除的弹幕内容;将待移除的弹幕内容调整为不可见。
在一个实施例中,提供了一个或多个存储有计算机可读指令的存储介质,计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行如下步骤:获取在播放视频和相应的弹幕内容时从现实场景中采集的图像帧;识别图像帧中的目标对象的动作;当目标对象的动作与预设的弹幕调整动作类型匹配时,则触发生成与匹配的弹幕调整动作类型相应的弹幕操作指令;响应于弹幕操作指令,按照匹配的弹幕调整动作类型调整播放的弹幕内容。
在一个实施例中,识别图像帧中的目标对象的动作包括:识别获取的图像帧中的目标对象;根据目标对象在相邻图像帧中的位置变化,确定目标对象的动作。
在一个实施例中,根据目标对象在相邻图像帧中的位置变化,确定目标对象的动作包括:在以采集图像帧的摄像头为原点建立的空间直角坐标系中, 确定图像帧中的目标对象所处的位置坐标;根据在相邻图像帧中的目标对象所处的位置坐标的变化,确定目标对象的动作。
在一个实施例中,计算机可读指令还使得处理器执行以下步骤:获取表征目标对象的动作的特征参数;将特征参数输入预先训练的机器学习模型中;当机器学习模型根据特征参数输出预设的弹幕调整动作类型时,则判定目标对象的动作与输出的预设的弹幕调整动作类型匹配。
在一个实施例中,计算机可读指令还使得处理器执行以下步骤:以采集图像帧的摄像头为原点建立空间直角坐标系;获取通过摄像头从现实场景中采集的训练图像帧;根据相邻训练图像帧中的目标对象在空间直角坐标系中的位置坐标的变化,确定目标对象相应的动作;根据表征确定的动作的特征参数和相应预设的弹幕调整动作类型进行机器学习训练,得到机器学习模型。
在一个实施例中,响应于弹幕操作指令,按照匹配的弹幕调整动作类型调整播放的弹幕内容包括:当匹配的弹幕调整动作类型为弹幕移除动作类型时,响应于弹幕操作指令,将播放的弹幕内容调整为不可见。
在一个实施例中,响应于弹幕操作指令,按照匹配的弹幕调整动作类型调整播放的弹幕内容还包括:当匹配的弹幕调整动作类型为弹幕快进动作类型时,响应于弹幕操作指令,加快播放的弹幕内容的滚动速度;当匹配的弹幕调整动作类型为弹幕暂停动作类型时,响应于弹幕操作指令,将播放的弹幕内容停止滚动。
在一个实施例中,计算机可读指令还使得处理器执行以下步骤:当匹配的弹幕调整动作类型为弹幕移除动作类型时,获取调整为不可见的弹幕内容中所包括的关键词;从与视频关联的弹幕内容中过滤掉包括关键词的弹幕内容;在视频的播放过程中,播放过滤后剩余的弹幕内容。
在一个实施例中,获取调整为不可见的弹幕内容中所包括的关键词包括:通过本地运行的服务将调整为不可见的弹幕内容发送至服务器;接收服务器反馈的从发送至服务器的弹幕内容中提取的关键词。
在一个实施例中,视频在以用户标识进行登录后播放;计算机可读指令 还使得处理器执行以下步骤:将关键词与用户标识相关联;其中,关联的关键词,用于从用户标识登录后播放的视频所关联的弹幕内容中,过滤掉包括关联的关键词的弹幕内容。
在一个实施例中,将播放的弹幕内容调整为不可见包括:确定目标对象映射到弹幕内容的播放画面中的位置;在播放的弹幕内容中按照位置确定待移除的弹幕内容;将待移除的弹幕内容调整为不可见。
应该理解的是,本申请各实施例中的各个步骤并不是必然按照步骤标号指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,各实施例中至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述的程序可存储于一非易失性计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上 述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (20)

  1. 一种弹幕内容控制方法,包括:
    计算机设备获取在播放视频和相应的弹幕内容时从现实场景中采集的图像帧;
    所述计算机设备识别所述图像帧中的目标对象的动作;
    当所述目标对象的动作与预设的弹幕调整动作类型匹配时,所述计算机设备则触发生成与匹配的弹幕调整动作类型相应的弹幕操作指令;及
    所述计算机设备响应于所述弹幕操作指令,按照所述匹配的弹幕调整动作类型调整播放的弹幕内容。
  2. 根据权利要求1所述的方法,其特征在于,所述计算机设备识别所述图像帧中的目标对象的动作包括:
    所述计算机设备识别获取的所述图像帧中的目标对象;及
    所述计算机设备根据所述目标对象在相邻图像帧中的位置变化,确定所述目标对象的动作。
  3. 根据权利要求2所述的方法,其特征在于,所述计算机设备根据所述目标对象在相邻图像帧中的位置变化,确定所述目标对象的动作包括:
    所述计算机设备在以采集所述图像帧的摄像头为原点建立的空间直角坐标系中,确定所述图像帧中的目标对象所处的位置坐标;及
    所述计算机设备根据在相邻图像帧中的所述目标对象所处的位置坐标的变化,确定所述目标对象的动作。
  4. 根据权利要求1所述的方法,其特征在于,还包括:
    所述计算机设备获取表征所述目标对象的动作的特征参数;
    所述计算机设备将所述特征参数输入预先训练的机器学习模型中;及
    当所述机器学习模型根据所述特征参数输出预设的弹幕调整动作类型时,所述计算机设备则判定所述目标对象的动作与输出的所述预设的弹幕调整动作类型匹配。
  5. 根据权利要求4所述方法,其特征在于,还包括:
    所述计算机设备以采集所述图像帧的摄像头为原点建立空间直角坐标系;
    所述计算机设备获取通过所述摄像头从现实场景中采集的训练图像帧;
    所述计算机设备根据相邻训练图像帧中的所述目标对象在所述空间直角坐标系中的位置坐标的变化,确定所述目标对象相应的动作;及
    所述计算机设备根据表征确定的所述动作的特征参数和相应预设的弹幕调整动作类型进行机器学习训练,得到机器学习模型。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述计算机设备响应于所述弹幕操作指令,按照所述匹配的弹幕调整动作类型调整播放的弹幕内容包括:
    当匹配的弹幕调整动作类型为弹幕移除动作类型时,所述计算机设备响应于所述弹幕操作指令,将播放的弹幕内容调整为不可见。
  7. 根据权利要求6所述的方法,其特征在于,所述计算机设备响应于所述弹幕操作指令,按照所述匹配的弹幕调整动作类型调整播放的弹幕内容还包括:
    当匹配的弹幕调整动作类型为弹幕快进动作类型时,所述计算机设备响应于所述弹幕操作指令,加快播放的弹幕内容的滚动速度;及
    当匹配的弹幕调整动作类型为弹幕暂停动作类型时,所述计算机设备响应于所述弹幕操作指令,将播放的弹幕内容停止滚动。
  8. 根据权利要求6所述的方法,其特征在于,还包括:
    当匹配的弹幕调整动作类型为弹幕移除动作类型时,所述计算机设备获取调整为不可见的弹幕内容中所包括的关键词;
    所述计算机设备从与所述视频关联的弹幕内容中过滤掉包括所述关键词的弹幕内容;及
    所述计算机设备在所述视频的播放过程中,播放过滤后剩余的弹幕内容。
  9. The method according to claim 8, wherein the obtaining, by the computer device, a keyword included in the barrage content adjusted to be invisible comprises:
    sending, by the computer device through a locally running service, the barrage content adjusted to be invisible to a server; and
    receiving, by the computer device, the keyword fed back by the server, the keyword being extracted from the barrage content sent to the server.
  10. The method according to claim 8, wherein the video is played after login with a user identifier, the method further comprising:
    associating, by the computer device, the keyword with the user identifier, wherein the associated keyword is used to filter out barrage content including the associated keyword from the barrage content associated with videos played after login with the user identifier.
  11. The method according to claim 6, wherein the adjusting the played barrage content to be invisible comprises:
    determining, by the computer device, a position to which the target object is mapped in the playback frame of the barrage content;
    determining, by the computer device, barrage content to be removed from the played barrage content according to the position; and
    adjusting, by the computer device, the barrage content to be removed to be invisible.
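Claim 11 hides only the barrage items at the screen position onto which the target object (e.g. the viewer's hand) maps. A hypothetical sketch using a naive pinhole projection and axis-aligned bounding boxes (the projection model, field names, and box layout are illustrative assumptions):

```python
def to_screen(pos, frame_w, frame_h):
    """Naively project a camera-space (x, y, z) point, z > 0, onto the
    playback frame, with the optical axis hitting the frame centre."""
    x, y, z = pos
    return (frame_w / 2 + x / z * frame_w / 2,
            frame_h / 2 - y / z * frame_h / 2)

def hide_at(items, point):
    """items: dicts with 'box' = (x0, y0, x1, y1) and 'visible'.
    Mark as invisible every barrage item whose box contains the point."""
    px, py = point
    for it in items:
        x0, y0, x1, y1 = it["box"]
        if x0 <= px <= x1 and y0 <= py <= y1:
            it["visible"] = False
    return items
```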
  12. A computer device, comprising a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the following steps:
    obtaining an image frame captured from a real-world scene while a video and corresponding barrage content are played;
    recognizing a motion of a target object in the image frame;
    when the motion of the target object matches a preset barrage adjustment action type, triggering generation of a barrage operation instruction corresponding to the matched barrage adjustment action type; and
    adjusting, in response to the barrage operation instruction, the played barrage content according to the matched barrage adjustment action type.
  13. The computer device according to claim 12, wherein the recognizing a motion of a target object in the image frame comprises:
    recognizing the target object in the obtained image frame; and
    determining the motion of the target object according to a position change of the target object in adjacent image frames.
  14. The computer device according to claim 13, wherein the determining the motion of the target object according to a position change of the target object in adjacent image frames comprises:
    determining position coordinates of the target object in the image frame in a spatial rectangular coordinate system whose origin is the camera capturing the image frame; and
    determining the motion of the target object according to a change in the position coordinates of the target object in adjacent image frames.
  15. The computer device according to claim 12, wherein the steps further comprise:
    obtaining feature parameters characterizing the motion of the target object;
    inputting the feature parameters into a pre-trained machine learning model; and
    when the machine learning model outputs a preset barrage adjustment action type according to the feature parameters, determining that the motion of the target object matches the output preset barrage adjustment action type.
  16. The computer device according to claim 15, wherein the steps further comprise:
    establishing a spatial rectangular coordinate system with the camera capturing the image frame as the origin;
    obtaining training image frames captured from a real-world scene by the camera;
    determining a corresponding motion of the target object according to a change in the position coordinates of the target object in the spatial rectangular coordinate system across adjacent training image frames; and
    performing machine learning training according to feature parameters characterizing the determined motion and the corresponding preset barrage adjustment action type, to obtain a machine learning model.
  17. The computer device according to any one of claims 12 to 16, wherein the adjusting, in response to the barrage operation instruction, the played barrage content according to the matched barrage adjustment action type comprises:
    when the matched barrage adjustment action type is a barrage removal action type, adjusting, in response to the barrage operation instruction, the played barrage content to be invisible.
  18. The computer device according to claim 17, wherein the adjusting, in response to the barrage operation instruction, the played barrage content according to the matched barrage adjustment action type further comprises:
    when the matched barrage adjustment action type is a barrage fast-forward action type, speeding up, in response to the barrage operation instruction, the scrolling speed of the played barrage content; and
    when the matched barrage adjustment action type is a barrage pause action type, stopping, in response to the barrage operation instruction, the scrolling of the played barrage content.
  19. The computer device according to claim 17, wherein the steps further comprise:
    when the matched barrage adjustment action type is the barrage removal action type, obtaining a keyword included in the barrage content adjusted to be invisible;
    filtering out barrage content including the keyword from the barrage content associated with the video; and
    playing, during the playback of the video, the barrage content remaining after the filtering.
  20. One or more storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
    obtaining an image frame captured from a real-world scene while a video and corresponding barrage content are played;
    recognizing a motion of a target object in the image frame;
    when the motion of the target object matches a preset barrage adjustment action type, triggering generation of a barrage operation instruction corresponding to the matched barrage adjustment action type; and
    adjusting, in response to the barrage operation instruction, the played barrage content according to the matched barrage adjustment action type.
PCT/CN2018/116190 2017-11-22 2018-11-19 Barrage content control method, computer device, and storage medium WO2019101038A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711176605.6A CN109819342B (zh) 2017-11-22 2017-11-22 Barrage content control method and apparatus, computer device, and storage medium
CN201711176605.6 2017-11-22

Publications (1)

Publication Number Publication Date
WO2019101038A1 true WO2019101038A1 (zh) 2019-05-31

Family

ID=66601336

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/116190 WO2019101038A1 (zh) 2017-11-22 2018-11-19 Barrage content control method, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN109819342B (zh)
WO (1) WO2019101038A1 (zh)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109982128B (zh) * 2019-03-19 2020-11-03 腾讯科技(深圳)有限公司 Video barrage generation method and apparatus, storage medium, and electronic apparatus
CN110231867A (zh) * 2019-05-31 2019-09-13 重庆爱奇艺智能科技有限公司 Method and apparatus for adjusting display attributes of barrage in a virtual reality device
CN110263276B (zh) * 2019-06-14 2021-10-15 北京字节跳动网络技术有限公司 Message distribution method, apparatus, device, and storage medium
CN110740387B (zh) * 2019-10-30 2021-11-23 深圳Tcl数字技术有限公司 Barrage editing method, smart terminal, and storage medium
CN111163359B (zh) * 2019-12-31 2021-01-05 腾讯科技(深圳)有限公司 Barrage generation method and apparatus, and computer-readable storage medium
CN113709544B (zh) * 2021-03-31 2024-04-05 腾讯科技(深圳)有限公司 Video playback method, apparatus, device, and computer-readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102221885A (zh) * 2011-06-15 2011-10-19 青岛海信电器股份有限公司 Television and control method and apparatus therefor
CN103179359A (zh) * 2011-12-21 2013-06-26 北京新岸线移动多媒体技术有限公司 Method and apparatus for controlling a video terminal, and video terminal
US20130318099A1 (en) * 2012-05-25 2013-11-28 Dwango Co., Ltd. Comment distribution system, and a method and a program for operating the comment distribution system
CN103956036A (zh) * 2013-10-14 2014-07-30 天津锋时互动科技有限公司 Non-contact remote controller for home appliances
CN105516820A (zh) * 2015-12-10 2016-04-20 腾讯科技(深圳)有限公司 Barrage interaction method and apparatus
CN105592331A (zh) * 2015-12-16 2016-05-18 广州华多网络科技有限公司 Barrage message processing method, related device, and system
CN105635822A (zh) * 2016-01-07 2016-06-01 天脉聚源(北京)科技有限公司 Video barrage processing method and apparatus
CN107272890A (zh) * 2017-05-26 2017-10-20 歌尔科技有限公司 Human-computer interaction method and apparatus based on gesture recognition


Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110460899A (zh) * 2019-06-28 2019-11-15 咪咕视讯科技有限公司 Barrage content display method, terminal device, and computer-readable storage medium
CN110351596B (zh) * 2019-07-17 2021-07-27 上海播呗网络科技有限公司 Internet streaming media big data barrage processing system and processing method
CN110351596A (zh) * 2019-07-17 2019-10-18 刘进 Internet streaming media big data barrage processing system and processing method
CN114173173A (zh) * 2020-09-10 2022-03-11 腾讯数码(天津)有限公司 Barrage information display method and apparatus, storage medium, and electronic device
CN114173173B (zh) * 2020-09-10 2024-06-11 腾讯数码(天津)有限公司 Barrage information display method and apparatus, storage medium, and electronic device
CN112637670A (zh) * 2020-12-15 2021-04-09 上海哔哩哔哩科技有限公司 Video generation method and apparatus
CN112667081A (zh) * 2020-12-28 2021-04-16 北京大米科技有限公司 Barrage display method and apparatus, storage medium, and terminal
CN113365155A (zh) * 2021-04-26 2021-09-07 北京房江湖科技有限公司 Barrage management method and apparatus, device, and medium
CN113673414B (zh) * 2021-08-18 2023-09-01 北京奇艺世纪科技有限公司 Barrage generation method and apparatus, electronic device, and storage medium
CN113673414A (zh) * 2021-08-18 2021-11-19 北京奇艺世纪科技有限公司 Barrage generation method and apparatus, electronic device, and storage medium
CN114339362A (zh) * 2021-12-08 2022-04-12 腾讯科技(深圳)有限公司 Video barrage matching method and apparatus, computer device, and storage medium
CN114339362B (zh) * 2021-12-08 2023-06-13 腾讯科技(深圳)有限公司 Video barrage matching method and apparatus, computer device, and storage medium
CN114845128A (zh) * 2022-04-22 2022-08-02 咪咕文化科技有限公司 Barrage interaction method, apparatus, device, and storage medium
CN114915832A (zh) * 2022-05-13 2022-08-16 咪咕文化科技有限公司 Barrage display method and apparatus, and computer-readable storage medium
CN114915832B (zh) * 2022-05-13 2024-02-23 咪咕文化科技有限公司 Barrage display method and apparatus, and computer-readable storage medium
CN115174957A (zh) * 2022-06-27 2022-10-11 咪咕文化科技有限公司 Barrage invocation method and apparatus, computer device, and readable storage medium
CN115174957B (zh) * 2022-06-27 2023-08-15 咪咕文化科技有限公司 Barrage invocation method and apparatus, computer device, and readable storage medium
CN115243068A (zh) * 2022-07-25 2022-10-25 武汉博昂泰捷科技有限公司 Camera control method based on live-streaming content barrage interaction
CN115243068B (zh) * 2022-07-25 2024-06-07 武汉博昂泰捷科技有限公司 Camera control method based on live-streaming content barrage interaction

Also Published As

Publication number Publication date
CN109819342A (zh) 2019-05-28
CN109819342B (zh) 2022-01-11

Similar Documents

Publication Publication Date Title
WO2019101038A1 (zh) Barrage content control method, computer device, and storage medium
US11354825B2 (en) Method, apparatus for generating special effect based on face, and electronic device
US10438077B2 (en) Face liveness detection method, terminal, server and storage medium
US10339402B2 (en) Method and apparatus for liveness detection
CN105518712B (zh) 基于字符识别的关键词通知方法及设备
US11430265B2 (en) Video-based human behavior recognition method, apparatus, device and storage medium
EP3341851B1 (en) Gesture based annotations
WO2023279704A1 (zh) Live-streaming method and apparatus, computer device, storage medium, and program
US20140281975A1 (en) System for adaptive selection and presentation of context-based media in communications
WO2021114710A1 (zh) Live video interaction method and apparatus, and computer device
CN111314759B (zh) Video processing method and apparatus, electronic device, and storage medium
JP6986187B2 (ja) Person identification method, apparatus, electronic device, storage medium, and program
US9519355B2 (en) Mobile device event control with digital images
WO2019153925A1 (zh) Search method and related apparatus
WO2021169616A1 (zh) Non-live face detection method and apparatus, computer device, and storage medium
JP7231638B2 (ja) Video-based information acquisition method and apparatus
TW201504839A (zh) Portable electronic device and interactive face login method
US20230409632A1 (en) Systems and methods for using conjunctions in a voice input to cause a search application to wait for additional inputs
TW201351210A (zh) Method and system for determining an operation area
KR20220043004A (ko) Occluded image detection method, apparatus, and medium
TWI734246B (zh) Face recognition method and apparatus
WO2020052062A1 (zh) Detection method and apparatus
WO2018177134A1 (zh) User-generated content processing method, storage medium, and terminal
JP2017146672A (ja) Image display apparatus, image display method, image display program, and image display system
CN113657173B (zh) Data processing method and apparatus, and apparatus for data processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18880715

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18880715

Country of ref document: EP

Kind code of ref document: A1