WO2019042064A1 - Live-broadcast-based event reminding method and apparatus - Google Patents

Live-broadcast-based event reminding method and apparatus

Info

Publication number
WO2019042064A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
facial
preset
prompt
facial image
Prior art date
Application number
PCT/CN2018/097992
Other languages
English (en)
French (fr)
Inventor
白锡亮
Original Assignee
乐蜜有限公司
白锡亮
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 乐蜜有限公司, 白锡亮 filed Critical 乐蜜有限公司
Priority to US16/642,698 priority Critical patent/US11190853B2/en
Publication of WO2019042064A1 publication Critical patent/WO2019042064A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4882Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/475End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4784Supplemental services, e.g. displaying phone caller identification, shopping application receiving rewards
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/44Event detection

Definitions

  • The present application relates to the field of network application technologies, and in particular, to a live-broadcast-based event reminding method and apparatus.
  • Live streaming software is one such application. During a live broadcast, the anchor shows himself or herself, or other content the anchor wants to display, to the viewers watching the broadcast, and can also interact with the viewers. If the anchor's live content and interaction are loved by the audience, more viewers will be attracted to watch the broadcast, and the anchor may even receive viewers' messages and gifts sent by viewers. A viewer's message or a viewer's gift can be regarded as an event in the live broadcast.
  • A gift is a virtual item used in the live broadcast, such as flowers, yachts, diamond rings, and sports cars, and these virtual items can be converted into actual income for the anchor. This further motivates the anchor to actively interact with the audience during the broadcast and win the audience's affection, thereby increasing the number of viewers and receiving more gifts.
  • The present application provides a live-broadcast-based event reminding method and apparatus, so as to solve the problem of how to remind an anchor who is not facing the screen when an event occurs during a live broadcast.
  • the specific technical solutions are as follows:
  • An embodiment of the present application provides a live-broadcast-based event reminding method, where the method includes:
  • the first prompt information is sent, and the first prompt information is used to prompt the occurrence of the preset event.
  • The preset facial image is a complete facial image of the anchor.
  • the step of detecting whether the image includes an image corresponding to the preset facial image comprises:
  • the step of determining whether the first facial image matches the complete facial image comprises:
  • the method further includes:
  • The second prompt information is sent before detecting whether the preset event occurs, wherein the second prompt information is used to prompt that the face of the anchor be displayed in the live video.
  • The preset facial image is an eye image of the anchor.
  • the step of detecting whether the image includes an image corresponding to a preset facial image includes:
  • the step of determining whether the first eye image matches the eye image comprises:
  • the method further includes:
  • The third prompt information is sent, wherein the third prompt information is used to prompt the anchor to face the screen.
  • When it is detected that the image includes an image that matches the preset facial image, the method further includes:
  • the second facial image being: an image corresponding to the preset facial image
  • the fourth prompt information is sent, wherein the fourth prompt information is used to prompt to adjust the distance between the anchor and the screen.
  • the first prompt information is one or more of the following types of information: a voice prompt, a vibration prompt, a light prompt, and a text prompt.
  • An embodiment of the present application provides a live-broadcast-based event reminding device, where the device includes:
  • An acquisition module configured to collect images in a live video
  • a first detecting module configured to detect whether an image corresponding to the preset facial image is included in the image
  • a second detecting module configured to detect whether a preset event occurs when the detection result of the first detecting module is negative
  • the first prompting module is configured to: when the detection result of the second detecting module is YES, issue the first prompting information, where the first prompting information is used to prompt the occurrence of the preset event.
  • The preset facial image is a complete facial image of the anchor.
  • the first detecting module includes:
  • a detecting submodule configured to detect whether there is a first facial image in the image
  • a first determining sub-module configured to determine, when the detection result of the detecting sub-module is negative, that the image does not include an image matching the preset facial image;
  • a first judging sub-module configured to judge, when the detection result of the detecting sub-module is positive, whether the first facial image matches the complete facial image, and if not, to trigger the first determining sub-module.
  • The first judging sub-module includes:
  • a first extracting unit configured to extract a first facial feature of the first facial image
  • a first matching unit configured to match the extracted first facial feature with a corresponding facial feature in the complete facial image
  • a first statistic unit configured to count, in the extracted first facial image, a number of first facial features that match facial features in the complete facial image
  • a first judging unit configured to judge whether the quantity is less than a first preset threshold;
  • a first determining unit configured to determine, when the judgment result of the first judging unit is positive, that the first facial image does not match the complete facial image.
  • the device further includes:
  • a second prompting module configured to: after detecting that the image does not include an image corresponding to the preset facial image, send a second prompt information before detecting whether a preset event occurs, wherein the second prompt information Used to prompt to display the face of the anchor in the live video.
  • The preset facial image is an eye image of the anchor.
  • the first detecting module includes:
  • a second judging sub-module configured to judge whether the first eye image matches the eye image;
  • a second determining sub-module configured to determine, when the judgment result of the second judging sub-module is negative, that the image does not include an image matching the preset facial image.
  • The second judging sub-module includes:
  • a second extracting unit configured to extract a first eye feature of the first eye image
  • a second matching unit configured to match the extracted first eye feature with a corresponding eye feature in the eye image
  • a second statistic unit configured to count, among the extracted first ocular features, a number of first ocular features that match an ocular feature in the ocular image
  • a second judging unit configured to judge whether the quantity is less than a second preset threshold;
  • a second determining unit configured to determine, when the judgment result of the second judging unit is positive, that the first eye image does not match the eye image.
  • the device further includes:
  • a third prompting module configured to: after detecting that the image does not include an image corresponding to the preset facial image, send a third prompt information before detecting whether a preset event occurs, wherein the third prompt information Used to prompt the anchor to face the screen.
  • When it is detected that the image includes an image that matches the preset facial image, the device further includes:
  • An acquiring module configured to acquire a second facial image of the anchor in the image, where the second facial image is: an image that matches the facial image;
  • a calculation module configured to calculate a percentage of an area of the second facial image to an area of the image
  • a determining module configured to determine whether the percentage is greater than a third preset threshold
  • the fourth prompting module is configured to: when the determining result of the determining module is negative, issue the fourth prompting information, where the fourth prompting information is used to prompt to adjust the distance between the anchor and the screen.
  • the first prompt information is one or more of the following types of information: a voice prompt, a vibration prompt, a light prompt, and a text prompt.
  • an embodiment of the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory complete communication with each other through a communication bus;
  • a memory for storing a computer program
  • The processor is configured to, when executing the program stored in the memory, perform the live-broadcast-based event reminding method described in any one of the above.
  • An embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, implements any one of the foregoing live-broadcast-based event reminding methods.
  • In the above technical solution, an image in the live video is collected; it is detected whether the image includes an image matching the preset facial image; when it is detected that the image does not include an image matching the preset facial image, it is detected whether a preset event occurs; and if a preset event occurs, the first prompt information is sent, where the first prompt information is used to prompt that the preset event occurs.
  • With the technical solution provided by this embodiment of the present application, when the anchor is not facing the screen or there is no anchor in the live video during the live broadcast, and a preset event occurs, the anchor can be effectively reminded, so that the anchor knows about the occurrence of the event. In this way, the anchor can give timely feedback on the events that occur, increasing the audience's enthusiasm for interaction, so that more audience support can be obtained and more gifts can be received.
  • FIG. 1 is a first flowchart of a live-broadcast-based event reminding method according to an embodiment of the present application;
  • FIG. 2 is a second flowchart of a live-broadcast-based event reminding method according to an embodiment of the present application;
  • FIG. 3 is a third flowchart of a live-broadcast-based event reminding method according to an embodiment of the present application;
  • FIG. 4 is a fourth flowchart of a live-broadcast-based event reminding method according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram of a first structure of an event reminding device based on a live broadcast according to an embodiment of the present application
  • FIG. 6 is a schematic diagram of a second structure of an event reminding device based on a live broadcast according to an embodiment of the present application
  • FIG. 7 is a schematic diagram of a third structure of an event reminding device based on a live broadcast according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a fourth structure of an event reminding device based on a live broadcast according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • The embodiments of the present application provide a live-broadcast-based event reminding method and apparatus.
  • The live-broadcast-based event reminding method includes:
  • If the preset event occurs, the first prompt information is sent, where the first prompt information is used to prompt the occurrence of the preset event.
  • With the technical solution provided by this embodiment of the present application, when the anchor is not facing the screen or there is no anchor in the live video during the live broadcast, and a preset event occurs, the anchor can be effectively reminded, so that the anchor knows about the occurrence of the event. In this way, the anchor can give timely feedback on the events that occur, increasing the audience's enthusiasm for interaction, so that more audience support can be obtained and more gifts can be received.
  • The live-broadcast-based event reminding method provided by the embodiments of the present application can be applied to electronic devices such as mobile phones, tablets, and computers, and can also be applied to other electronic devices having a display function and a camera function, which is not limited here.
  • As shown in FIG. 1, a live-broadcast-based event reminding method provided by an embodiment of the present application includes the following steps.
  • S101 Collect an image in a live video.
  • the captured image is derived from the frame image of each frame in the live video.
  • the image may be collected in an image of the live video every preset time interval.
  • the preset duration can be customized. For example, if the preset duration is 2 milliseconds, then the image is captured from the live video every 2 milliseconds.
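  • As a minimal illustration of this sampling step, the following Python sketch grabs one frame at each preset interval; the use of OpenCV (`cv2`) and the `stream_url` source are assumptions, since the patent does not prescribe any particular capture library.

```python
import time
import cv2  # OpenCV, assumed available; the patent does not name a capture library

def sample_frames(stream_url, interval_ms=2, max_samples=100):
    """Collect one frame from the live video every `interval_ms` milliseconds."""
    capture = cv2.VideoCapture(stream_url)
    frames = []
    while capture.isOpened() and len(frames) < max_samples:
        ok, frame = capture.read()        # read the current frame of the live video
        if not ok:
            break
        frames.append(frame)              # this frame is the "collected image"
        time.sleep(interval_ms / 1000.0)  # wait for the preset duration before sampling again
    capture.release()
    return frames
```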
  • step S102 Detect whether an image corresponding to the preset facial image is included in the image, and if no, perform step S103.
  • the preset face image is a pre-stored face image of the anchor, and the preset face image can be divided into two types: a full face image and a partial face image.
  • The complete facial image may be the frontal face of the anchor, or may be a non-frontal face of the anchor; the non-frontal face may be the face presented by the anchor in the live video when the anchor looks up, looks down, or turns to the side.
  • When the complete facial image is a non-frontal face, the non-frontal face also needs to include the facial features and other corresponding facial characteristics, so that it can serve as a reference for complete facial features.
  • the partial facial image may be an eye image.
  • When the preset facial image is the complete facial image of the anchor, detecting whether the anchor's facial image is present in the live video ensures that the anchor is facing the screen during the live broadcast, so that timely feedback can be given to preset events, thereby increasing the audience's enthusiasm for interaction.
  • When the preset facial image is a partial facial image of the anchor, it is detected whether the anchor is facing the screen during the live broadcast, and the anchor is prompted when not facing the screen. In this way, attention can be paid to the occurrence of preset events, and feedback can be given in time when a preset event occurs, so as to better interact with the audience.
  • One implementation of detecting whether the image includes an image matching the preset facial image is to perform the detection through face recognition.
  • When the anchor starts the live streaming application, the face recognition function is turned on, and during the live broadcast the face recognition function can monitor the anchor's face in the live video in real time.
  • When it is detected that the image includes an image matching the preset facial image, it indicates that the anchor's face is present in the live video; in this case no processing may be performed, or it may further be determined whether the percentage of the area occupied by the anchor's face in the live video meets a requirement. This is not limited here.
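  • One possible way to implement the face-presence check described above is OpenCV's bundled Haar cascade detector. This is only an illustrative sketch under that assumption; the patent does not prescribe a specific face recognition implementation.

```python
import cv2

# Haar cascade shipped with OpenCV for frontal-face detection
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def contains_face(frame):
    """Return True if at least one face region is detected in the collected image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```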
  • step S103 Detect whether a preset event occurs, and if yes, execute step S104.
  • The preset event may be an event related to viewers that occurs during the live broadcast, and may include at least one of events such as a viewer's message, a gift sent by a viewer, a viewer's greeting, and a viewer entering the anchor's live broadcast room, which is not limited here.
  • S104: Send the first prompt information. The first prompt information is used to prompt that a preset event has occurred.
  • the first prompt information may be at least one of a voice prompt, a vibration prompt, a light prompt, a text prompt, and the like.
  • the content of the voice prompt may be preset.
  • For example, the content of the voice prompt may be "You are not facing the display screen", "A viewer has sent you a gift", and the like.
  • To strengthen the voice prompt, in one implementation the volume is adjusted to the maximum, and the voice prompt is then given at the maximum volume. In this way, the reminder to the anchor is strengthened, and the anchor can be prompted by voice even in a noisy environment.
  • the vibration can be prompted in different vibration modes, and the vibration mode can be preset.
  • the vibration may be in the form of continuous vibration, intermittent vibration, or the like.
  • the duration of each vibration in the interval vibration and the interval duration of the two adjacent vibrations are all presettable.
  • For example, intermittent vibration may be set as follows: each vibration lasts 2 seconds, and the interval between two adjacent vibrations is 1 second. Thus, after each 2-second vibration there is a 1-second pause, followed by another 2-second vibration, and the cycle repeats until the vibration prompt is turned off.
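  • A minimal sketch of such an intermittent vibration pattern is shown below; `vibrate_ms` and `stop_flag` are hypothetical (a platform vibration call and a `threading.Event` set when the prompt is closed), since the patent does not name a concrete interface.

```python
import time

def intermittent_vibration(vibrate_ms, stop_flag, on_s=2.0, off_s=1.0):
    """Vibrate for `on_s` seconds, pause `off_s` seconds, and repeat until stopped."""
    while not stop_flag.is_set():     # stop_flag: threading.Event set when the prompt is turned off
        vibrate_ms(int(on_s * 1000))  # hypothetical call vibrating for the given milliseconds
        time.sleep(on_s + off_s)      # wait out the vibration plus the pause before repeating
```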
  • the light used for the prompt can be the light of the display screen or the indicator light, which is not limited herein.
  • the manner of prompting the light can be preset.
  • the light can be flashed at a preset frequency and a preset brightness.
  • The prompt is given within a preset prompt duration starting from the occurrence of the preset event, and after the preset prompt duration ends, no further prompts are given.
  • the preset prompt duration can be customized.
  • For example, if the first prompt is a voice prompt and the preset prompt duration is 1 minute, the voice prompt is given repeatedly within 1 minute from the start of the preset event, and when the 1 minute ends, the voice prompt is no longer given.
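  • The time-limited prompting described above might look like the following sketch, where `play_voice_prompt` is a hypothetical playback helper and the 1-minute window is the example value from the text.

```python
import time

def prompt_within_duration(play_voice_prompt, message, duration_s=60, repeat_every_s=5):
    """Repeat the voice prompt until the preset prompt duration has elapsed."""
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        play_voice_prompt(message)   # hypothetical playback call
        time.sleep(repeat_every_s)   # small pause between repetitions
```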
  • In another implementation, no prompt duration is set; a corresponding prompt is issued when a preset event occurs and continues until the anchor manually turns it off.
  • the content of the text prompt is displayed in the floating window, and the floating window can be displayed on the screen until the anchor manually closes the floating window.
  • the position of the floating window on the screen can be preset. For example, the floating window can be set at the top of the screen. At this time, the text prompt will be displayed on the top of the screen.
  • The first prompt information may use a single information type, or may combine multiple information types: any combination of voice prompts, vibration prompts, light prompts, and text prompts.
  • For example, the first prompt information may use both a voice prompt and a vibration prompt, so that voice and vibration prompts are given simultaneously when a preset event occurs. Combining multiple prompting methods can strengthen the prompt.
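  • A sketch of combining prompt types could look like this; the individual handler callables are hypothetical device-side helpers, since the patent only states that any combination of voice, vibration, light, and text prompts may be used.

```python
def send_first_prompt(prompt_types, message,
                      do_voice=None, do_vibrate=None, do_light=None, do_text=None):
    """Issue the first prompt information using one or several information types."""
    handlers = {
        "voice": do_voice,        # e.g. speak the message at maximum volume
        "vibration": do_vibrate,  # e.g. start an intermittent vibration pattern
        "light": do_light,        # e.g. flash the screen or an indicator light
        "text": do_text,          # e.g. show the message in a floating window
    }
    for kind in prompt_types:
        handler = handlers.get(kind)
        if handler is not None:
            handler(message)
```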
  • With the technical solution provided by this embodiment of the present application, when the anchor is not facing the screen or there is no anchor in the live video during the live broadcast, and a preset event occurs, the anchor can be effectively reminded, so that the anchor knows about the occurrence of the event. In this way, the anchor can give timely feedback on the events that occur, increasing the audience's enthusiasm for interaction, so that more audience support can be obtained and more gifts can be received.
  • As shown in FIG. 2, a live-broadcast-based event reminding method provided by an embodiment of the present application includes the following steps:
  • S201 Collect an image in a live video.
  • step S201 is the same as step S101 in the embodiment of FIG. 1 described above, and details are not described herein.
  • step S202 Detect whether there is a first facial image in the image; if yes, execute step S203; if no, perform step S204.
  • the manner of detecting the first facial image may be the manner of face recognition.
  • the detected first facial image may be a frontal face image in the image, or may be a partial face image.
  • the partial face image is a face image including only partial facial features, for example, a side face image.
  • step S203 Determine whether the first facial image matches the complete facial image; if not, execute step S204.
  • If the first facial image matches the preset complete facial image, it may be determined that the anchor appears in the live video; if the first facial image does not match the preset complete facial image, it may be determined that the image does not include an image matching the preset facial image, and at this time, the second prompt information is issued.
  • the step of determining whether the first facial image matches the complete facial image may include the following steps.
  • the first facial feature of the first facial image is extracted.
  • the first facial feature may include at least one of an eye feature, an eyebrow feature, a nose feature, an ear feature, a mouth feature, a chin feature, a forehead feature, and the like in the first facial image.
  • the first facial feature can be extracted from the first facial image by means of face recognition.
  • the extracted first facial features are then matched to corresponding facial features in the full facial image.
  • The facial features in the complete facial image may be pre-stored, and the pre-stored facial features may include at least one of an eye feature, an eyebrow feature, a nose feature, an ear feature, a mouth feature, a chin feature, and a forehead feature.
  • the facial features in the preset complete facial image can be extracted by face recognition.
  • the type of the extracted first facial feature may be determined based on the facial features in the complete facial image. That is, the type of the extracted first facial feature is included in the facial feature in the complete facial image. For example, when the eye feature is extracted as the first facial feature, at least the eye feature is included in the facial feature in the complete facial image.
  • The types of the extracted first facial features and the types of the facial features in the stored complete facial image are in one-to-one correspondence.
  • For example, the features in the pre-stored complete facial image include eye features, eyebrow features, nose features, ear features, mouth features, chin features, and forehead features; then, the extracted first facial features also include eye features, eyebrow features, nose features, ear features, mouth features, chin features, and forehead features.
  • the feature type in the face image to be extracted can be set in advance.
  • For example, it may be set in advance that only the following facial features are extracted: eye features, eyebrow features, nose features, ear features, and mouth features.
  • In this case, the features in the pre-stored complete facial image include only the eye features, eyebrow features, nose features, ear features, and mouth features, and the first facial features extracted from the first facial image also include only the eye features, eyebrow features, nose features, ear features, and mouth features of the first facial image.
  • When matching the first facial features against the features in the complete facial image, the extracted first facial features are compared with the pre-stored features of the complete facial image of the same feature type:
  • the eye feature in the first facial features is compared with the eye feature among the features of the complete facial image;
  • the mouth feature in the first facial features is compared with the mouth feature among the features of the complete facial image, and so on.
  • the number of first facial features matching the facial features in the complete facial image among the extracted first facial features is counted.
  • For example, the extracted eyebrow feature, eye feature, and nose feature of the first facial image are matched against the complete facial image, and the matching result is that the eyebrow feature, eye feature, and nose feature in the first facial image match the corresponding eyebrow feature, eye feature, and nose feature in the complete facial image. Then, among the extracted first facial features, the counted number of first facial features matching facial features in the complete facial image is three.
  • It is then determined whether the counted number is smaller than the first preset threshold; if it is smaller, it is determined that the first facial image does not match the complete facial image.
  • The first preset threshold may be customized. If the number is greater than or equal to the first preset threshold, the first facial image may be considered to match the complete facial image.
  • For example, if the first preset threshold is 3 and the number of facial features in the first facial image matching the complete facial image is 2, it may be determined that the first facial image does not match the complete facial image.
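  • As a sketch of this counting logic, assuming each facial feature has been reduced to a descriptor and a per-feature `matches` predicate is available (both are assumptions; the patent does not fix a feature representation):

```python
def face_matches_complete(first_features, stored_features, matches, first_threshold=3):
    """
    Count how many extracted first facial features match the corresponding features
    of the pre-stored complete facial image; the face is considered a non-match when
    the count is below the first preset threshold.
    """
    count = 0
    for name, descriptor in first_features.items():  # e.g. {"eye": ..., "nose": ...}
        stored = stored_features.get(name)
        if stored is not None and matches(descriptor, stored):
            count += 1
    return count >= first_threshold
```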
  • In this embodiment, the preset facial image is a complete facial image of the anchor.
  • The complete facial image may be a frontal face or a non-frontal face. Both the frontal face and the non-frontal face include the facial features and other facial characteristics, so that they can be used as a reference for complete facial features.
  • When the first facial image is not detected in the image, it can be considered that there is no facial image in the video image, and at this time it can be determined that the facial image of the anchor does not exist in the image. Further, it can be determined that the image does not include an image matching the preset facial image.
  • the second prompt information is sent before detecting whether the preset event occurs.
  • the second prompt information is used to prompt the display of the face of the anchor in the live video, so that the anchor can present the best side to the viewer to obtain more support and love of the viewer.
  • the second prompt information may be any one or more of a plurality of types of information such as a voice prompt, a vibration prompt, a light prompt, and a text prompt.
  • the second prompt information may be the same as or different from the first prompt information.
  • the first prompt information and the second prompt information may respectively use different types of information prompts.
  • the first prompt information uses a voice prompt
  • the second prompt information uses a vibration prompt.
  • the first prompt information and the second prompt information may use the same type of information prompt, but the content of the information prompt is different.
  • For example, the first prompt information and the second prompt information both use voice prompts; the content of the voice prompt of the first prompt information may be "A viewer has sent you a gift", and the content of the voice prompt of the second prompt information may be "You are not in the live video".
  • the priority levels of the first prompt information and the second prompt information may be set.
  • the first prompt information has a higher priority than the second prompt information.
  • step S205 Detect whether a preset event occurs, and if yes, execute step S206.
  • steps S205 and S206 are the same as steps S103 and S104 in the embodiment of FIG. 1 described above, and are not described herein.
  • With the technical solution provided by this embodiment of the present application, when the anchor is not facing the screen or there is no anchor in the live video during the live broadcast, and a preset event occurs, the anchor can be effectively reminded, so that the anchor knows about the occurrence of the event. In this way, the anchor can give timely feedback on the events that occur, increasing the audience's enthusiasm for interaction, so that more audience support can be obtained and more gifts can be received.
  • A live-broadcast-based event reminding method provided by an embodiment of the present application includes the following steps.
  • S301 Collect an image in a live video.
  • step S301 is the same as step S101 in the embodiment of FIG. 1 described above, and details are not described herein.
  • The first eye image of the anchor in the image may be acquired by face recognition.
  • the feature extraction can be performed on the acquired first eye image by face recognition.
  • the extracted eye feature may be: the position of the eyeball.
  • step S303 Determine whether the first eye image and the eye image match; if not, execute step S304.
  • a first eye feature of the first eye image is extracted.
  • the first eye feature may be a position feature of the eyeball, and may also include other features of the eye, which are not limited herein.
  • the eye image may be pre-stored, and the eye feature in the eye image may also be pre-stored.
  • The eye features may include a plurality of types of features, such as the position of the eyeball, where the pre-stored eyeball position feature is the eyeball position when the anchor is facing the screen.
  • The second preset threshold may be customized. If the number is greater than or equal to the second preset threshold, the first eye image may be considered to match the eye image. In turn, it can be determined that the anchor is facing the screen during the live broadcast.
  • For example, the corresponding feature in the pre-stored eye image is an eyeball position feature, and the first eye feature extracted from the first eye image is also an eyeball position feature; the eyeball position feature in the first eye image is then compared with the eyeball position feature in the pre-stored eye image.
  • When the eyeball position feature in the first eye image matches the eyeball position feature in the pre-stored eye image, it is determined that the image includes an image matching the preset facial image, that is, the anchor is facing the screen.
  • When the eyeball position feature in the first eye image does not match the eyeball position feature in the pre-stored eye image, the anchor may appear in the live video but is not facing the screen.
  • In this embodiment, the preset facial image is the eye image of the anchor; in this case it is determined that the image does not include an image matching the preset facial image, that is, the anchor is not facing the screen during the live broadcast.
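  • A minimal sketch of the eyeball-position comparison, assuming the pupil centre has already been located in both the live frame and the pre-stored eye image and is expressed in normalized eye-region coordinates (the tolerance value is an assumption):

```python
import math

def eye_matches_reference(live_pupil, stored_pupil, tolerance=0.15):
    """
    Compare the eyeball position feature of the first eye image with the pre-stored
    eyeball position (captured while the anchor faced the screen). Returns True when
    the positions are close enough, i.e. the anchor is considered to face the screen.
    """
    dx = live_pupil[0] - stored_pupil[0]
    dy = live_pupil[1] - stored_pupil[1]
    return math.hypot(dx, dy) <= tolerance
```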
  • the third prompt information is issued after detecting that the image does not include the image corresponding to the preset facial image, and before detecting whether the preset event occurs.
  • the third prompt information is used to prompt the anchor to face the screen, and the third prompt information may be any one or more of a voice prompt, a vibration prompt, a light prompt, a text prompt, and the like.
  • the third prompt information may be the same as or different from the first prompt information.
  • the first prompt information and the third prompt information may respectively use different types of information prompts.
  • the first prompt information and the third prompt information may use the same type of information prompt, but the content of the information prompt is different.
  • For example, the first prompt information and the third prompt information both use voice prompts; the content of the voice prompt of the first prompt information may be "A viewer has sent you a gift", and the content of the voice prompt of the third prompt information may be "Please face the screen".
  • the priority levels of the third prompt information and the first prompt information may be set.
  • the first prompt information has a higher priority than the third prompt information.
  • steps S305 and S306 are the same as steps S103 and S104 in the above embodiment, and are not described herein.
  • With the technical solution provided by this embodiment of the present application, when the anchor is not facing the screen or there is no anchor in the live video during the live broadcast, and a preset event occurs, the anchor can be effectively reminded, so that the anchor knows about the occurrence of the event. In this way, the anchor can give timely feedback on the events that occur, increasing the audience's enthusiasm for interaction, so that more audience support can be obtained and more gifts can be received.
  • A live-broadcast-based event reminding method provided by an embodiment of the present application is described below in conjunction with another specific embodiment.
  • As shown in FIG. 4, a live-broadcast-based event reminding method provided by an embodiment of the present application includes the following steps.
  • S401 Collect an image in a live video.
  • step S402. Detect whether an image corresponding to the preset facial image is included in the image; if not, step S403 is performed, and if yes, step S405 is performed.
  • steps S401 to S404 are the same as the steps S101 to S104 in the above embodiment, and are not described herein.
  • S405: Acquire the second facial image of the anchor in the image. The second facial image is an image that matches the preset facial image.
  • The second facial image may be acquired by face recognition, and the acquired second facial image may be a frontal face image of the anchor in the image, or may be a partial face image of the anchor, that is, a facial image including only some facial features, for example, a side face image.
  • The size of the percentage reflects how large the second facial image appears on the screen: the smaller the percentage, the smaller the second facial image displayed on the screen, indicating that the person corresponding to the second facial image is farther from the screen; the larger the percentage, the larger the second facial image displayed on the screen, indicating that the person corresponding to the second facial image is closer to the screen.
  • step S407. Determine whether the percentage is greater than a third preset threshold. If no, go to step S408.
  • the third preset threshold may be preset.
  • The minimum percentage, that is, the third preset threshold, may be preset. Only when the percentage is greater than the set minimum percentage is the requirement on the image displayed on the screen met; when it is not greater than the set minimum percentage, the requirement on the image displayed on the screen is not met, and prompt information is issued to prompt the anchor to move closer to the screen.
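  • A sketch of the area check, where `face_box` is a detected bounding box `(x, y, w, h)` standing in for the second facial image and `min_percentage` is the third preset threshold (both the bounding-box representation and the default value are assumptions):

```python
def face_close_enough(face_box, frame_width, frame_height, min_percentage=5.0):
    """
    Calculate the percentage of the image area occupied by the second facial image;
    if it does not exceed the third preset threshold, the fourth prompt (adjust the
    distance between the anchor and the screen) should be issued.
    """
    _, _, w, h = face_box
    percentage = 100.0 * (w * h) / (frame_width * frame_height)
    return percentage > min_percentage
```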
  • the fourth prompt information is used to prompt to adjust the distance between the anchor and the screen.
  • the fourth prompt information may be any one or more of different types of information such as a voice prompt, a vibration prompt, a light prompt, and a text prompt.
  • the fourth prompt information may be the same as the first prompt information, the second prompt information, and the third prompt information, or may be different.
  • The first prompt information, the second prompt information, the third prompt information, and the fourth prompt information may respectively use different types of information prompts.
  • the first prompt information uses a voice prompt
  • the second prompt information uses a vibration prompt
  • the third prompt information uses a text prompt
  • the fourth prompt information uses a light prompt.
  • The same type of information prompt may also be used for the first prompt information, the second prompt information, the third prompt information, and the fourth prompt information, but with different prompt content. For example, if they all adopt voice prompts, the content of the voice prompt of the first prompt information may be "A viewer has sent you a gift", the content of the voice prompt of the second prompt information may be "You are not in the live video", the content of the voice prompt of the third prompt information may be "Please face the screen", and the content of the voice prompt of the fourth prompt information may be "Please move closer to the display screen".
  • With the solution provided by this embodiment of the present application, the anchor is promptly reminded to adjust the distance to the screen, so that the anchor can be better presented to the viewers, thereby obtaining the viewers' affection and support.
  • The embodiment of the present application further provides a live-broadcast-based event reminding device.
  • the device includes:
  • the collecting module 510 is configured to collect an image in the live video.
  • the first detecting module 520 is configured to detect whether an image corresponding to the preset facial image is included in the image
  • the second detecting module 530 is configured to detect whether a preset event occurs when the detection result of the first detecting module is negative;
  • the first prompting module 540 is configured to: when the detection result of the second detecting module is YES, issue the first prompting information, where the first prompting information is used to prompt the information that the preset event occurs.
  • the first prompt information is one or more of the following types of information: a voice prompt, a vibration prompt, a light prompt, and a text prompt.
  • With the technical solution provided by this embodiment of the present application, when the anchor is not facing the screen or there is no anchor in the live video during the live broadcast, and a preset event occurs, the anchor can be effectively reminded, so that the anchor knows about the occurrence of the event. In this way, the anchor can give timely feedback on the events that occur, increasing the audience's enthusiasm for interaction, so that more audience support can be obtained and more gifts can be received.
  • the embodiment of the present application further provides another specific embodiment.
  • The preset facial image is a complete facial image of the anchor.
  • In a live-broadcast-based event reminding device provided by this embodiment of the present application, based on FIG. 5 and the embodiment corresponding to FIG. 5, the first detecting module 520 may include:
  • a detecting submodule 521 configured to detect whether there is a first facial image in the image
  • the first determining sub-module 522 is configured to determine, when the detection result of the detecting sub-module is negative, that the image does not include an image matching the preset facial image;
  • the first judging sub-module 523 is configured to judge, when the detection result of the detecting sub-module is positive, whether the first facial image matches the complete facial image, and if not, to trigger the first determining sub-module 522.
  • The first judging sub-module 523 includes:
  • a first extracting unit configured to extract a first facial feature of the first facial image
  • a first matching unit configured to match the extracted first facial feature with a corresponding facial feature in the complete facial image
  • a first statistic unit configured to count, among the extracted first facial features, a number of first facial features that match facial features in the complete facial image
  • a first judging unit configured to judge whether the quantity is less than a first preset threshold;
  • a first determining unit configured to determine, when the judgment result of the first judging unit is positive, that the first facial image does not match the complete facial image.
  • the device further includes: a second prompting module, configured to: after detecting that the image does not include the image corresponding to the preset facial image, send the second prompt information before detecting whether the preset event occurs, The second prompt information is used to prompt to display the face of the anchor in the live video.
  • With the technical solution provided by this embodiment of the present application, when the anchor is not facing the screen or there is no anchor in the live video during the live broadcast, and a preset event occurs, the anchor can be effectively reminded, so that the anchor knows about the occurrence of the event. In this way, the anchor can give timely feedback on the events that occur, increasing the audience's enthusiasm for interaction, so that more audience support can be obtained and more gifts can be received.
  • the embodiment of the present application further provides another specific embodiment.
  • In the live-broadcast-based event reminding device provided by this embodiment of the present application, the preset facial image is the eye image of the anchor.
  • the first detecting module 520 can include:
  • the obtaining sub-module 524 is configured to acquire a first eye image of the anchor in the image
  • a second judging sub-module 525 configured to judge whether the first eye image matches the eye image;
  • a second determining sub-module 526 configured to determine, when the judgment result of the second judging sub-module is negative, that the image does not include an image matching the preset facial image.
  • The second judging sub-module 525 includes:
  • a second extracting unit configured to extract a first eye feature of the first eye image
  • a second matching unit configured to match the extracted first eye feature with a corresponding eye feature in the eye image
  • a second statistical unit configured to count, among the extracted first eye features, a number of first eye features that match an eye feature in the eye image
  • a second judging unit configured to judge whether the quantity is less than a second preset threshold;
  • a second determining unit configured to determine, when the judgment result of the second judging unit is positive, that the first eye image does not match the eye image.
  • the apparatus further includes:
  • a third prompting module, configured to send third prompt information after it is detected that the image does not include an image matching the preset facial image and before detecting whether a preset event occurs, where the third prompt information is used to prompt the anchor to face the screen.
  • With the technical solution provided by this embodiment of the present application, when the anchor is not facing the screen or there is no anchor in the live video during the live broadcast, and a preset event occurs, the anchor can be effectively reminded, so that the anchor knows about the occurrence of the event. In this way, the anchor can give timely feedback on the events that occur, increasing the audience's enthusiasm for interaction, so that more audience support can be obtained and more gifts can be received.
  • the embodiment of the present application further provides another specific embodiment.
  • a live broadcast-based event reminding device is provided in the embodiment of the present application.
  • the device may further include:
  • the acquiring module 550 is configured to acquire a second facial image of the anchor in the image, where the second facial image is: an image that matches the facial image;
  • a calculating module 560, configured to calculate the percentage of the area of the second facial image to the area of the image;
  • the determining module 570 is configured to determine whether the percentage is greater than a third preset threshold
  • the fourth prompting module 580 is configured to send fourth prompt information when the determination result of the determining module 570 is negative, where the fourth prompt information is used to prompt adjustment of the distance between the anchor and the screen.
  • With the solution provided by this embodiment of the present application, the anchor is promptly reminded to adjust the distance to the screen, so that the anchor can be better presented to the viewers, thereby obtaining the viewers' affection and support.
  • For the device embodiments, the description is relatively simple, and for relevant parts, reference may be made to the description of the method embodiments.
  • The embodiment of the present application further provides an electronic device, as shown in FIG. 9, including a processor 910, a communication interface 920, a memory 930, and a communication bus 940, where the processor 910, the communication interface 920, and the memory 930 complete communication with each other through the communication bus 940;
  • a memory 930 configured to store a computer program
  • the processor 910 is configured to perform the following steps when executing the program stored on the memory 930:
  • collecting an image in a live video; detecting whether the image includes an image matching a preset facial image; when it is detected that the image does not include an image matching the preset facial image, detecting whether a preset event occurs; and if the preset event occurs, sending first prompt information, where the first prompt information is used to prompt that the preset event occurs.
  • the communication bus mentioned in the above electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus.
  • the communication bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in the figure, but it does not mean that there is only one bus or one type of bus.
  • the communication interface is used for communication between the above electronic device and other devices.
  • the memory may include a random access memory (RAM), and may also include a non-volatile memory (NVM), such as at least one disk storage.
  • the memory may also be at least one storage device located away from the aforementioned processor.
  • The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; or may be a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or other programmable logic device.
  • An embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, any of the above-mentioned live-broadcast-based event reminding methods is implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application provides a live-broadcast-based event reminding method and apparatus. The method includes: collecting an image in a live video; detecting whether the image includes an image matching a preset facial image; when it is detected that the image does not include an image matching the preset facial image, detecting whether a preset event occurs; and if the preset event occurs, sending first prompt information, where the first prompt information is used to prompt that the preset event occurs. With the technical solution provided by the embodiments of the present application, when the anchor is not facing the screen or there is no anchor in the live video during a live broadcast, and a preset event occurs, the anchor can be effectively reminded, so that the anchor knows about the occurrence of the event. In this way, the anchor can give timely feedback on the events that occur, increasing the audience's enthusiasm for interaction, so that more audience support can be obtained and more gifts can be received.

Description

Live-broadcast-based event reminding method and apparatus
This application claims priority to Chinese Patent Application No. 201710762637.8, filed with the China Patent Office on August 30, 2017 and entitled "Live-broadcast-based event reminding method and apparatus", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of network application technologies, and in particular, to a live-broadcast-based event reminding method and apparatus.
Background
With the development of Internet technologies, more and more application software has emerged. Such software not only brings convenience to people's daily life but also enriches their entertainment. Live streaming software is one such application. During a live broadcast, the anchor shows himself or herself, or other content the anchor wants to display, to the viewers watching the broadcast, and can also interact with the viewers. If the anchor's live content and interaction are loved by the audience, more viewers will be attracted to watch the broadcast, and the anchor may even receive viewers' messages and gifts sent by viewers. A viewer's message or a viewer's gift can be regarded as an event in the live broadcast, where a gift is a virtual item used in live broadcasts, such as flowers, yachts, diamond rings, and sports cars, and these virtual items can be converted into actual income for the anchor. This further motivates the anchor to actively interact with the audience during the broadcast and win the audience's affection, thereby increasing the number of viewers and receiving more gifts.
In interacting with the audience, one very important thing the anchor needs to do to win the audience's affection is to give timely feedback on viewers' messages and gifts. For example, when a viewer leaves a message, it should be replied to as soon as possible, and when a gift is received from a viewer, thanks should be expressed as soon as possible. Only in this way can the audience's enthusiasm for interaction be increased. However, sometimes, because of a long live session or other distractions, the anchor is not facing the screen and misses viewers' messages and gifts. As a result, the viewers who leave messages and send gifts do not receive timely feedback, and their enthusiasm for interaction decreases; over time, this leads to the loss of viewers and a reduction in gifts. Therefore, how to remind an anchor who is not facing the screen when an event occurs during a live broadcast is a problem to be solved urgently.
Summary
The present application provides a live-broadcast-based event reminding method and apparatus, so as to solve the problem of how to remind an anchor who is not facing the screen when an event occurs during a live broadcast. The specific technical solutions are as follows:
In a first aspect, an embodiment of the present application provides a live-broadcast-based event reminding method, the method including:
collecting an image in a live video;
detecting whether the image includes an image matching a preset facial image;
when it is detected that the image does not include an image matching the preset facial image, detecting whether a preset event occurs; and
if the preset event occurs, sending first prompt information, where the first prompt information is used to prompt that the preset event occurs.
Optionally, the preset facial image is a complete facial image of the anchor.
Optionally, the step of detecting whether the image includes an image matching the preset facial image includes:
detecting whether there is a first facial image in the image;
if not, determining that the image does not include an image matching the preset facial image;
if so, determining whether the first facial image matches the complete facial image; and
if they do not match, determining that the image does not include an image matching the preset facial image.
Optionally, the step of determining whether the first facial image matches the complete facial image includes:
extracting first facial features of the first facial image;
matching the extracted first facial features with corresponding facial features in the complete facial image;
counting, among the extracted first facial features, the number of first facial features matching facial features in the complete facial image;
determining whether the number is less than a first preset threshold; and
if it is less than the first preset threshold, determining that the first facial image does not match the complete facial image.
Optionally, the method further includes:
after it is detected that the image does not include an image matching the preset facial image and before detecting whether a preset event occurs, sending second prompt information, where the second prompt information is used to prompt that the face of the anchor be displayed in the live video.
Optionally, the preset facial image is an eye image of the anchor;
the step of detecting whether the image includes an image matching the preset facial image includes:
acquiring a first eye image of the anchor in the image;
determining whether the first eye image matches the eye image; and
if they do not match, determining that the image does not include an image matching the preset facial image.
Optionally, the step of determining whether the first eye image matches the eye image includes:
extracting first eye features of the first eye image;
matching the extracted first eye features with corresponding eye features in the eye image;
counting, among the extracted first eye features, the number of first eye features matching eye features in the eye image;
determining whether the number is less than a second preset threshold; and
if it is less than the second preset threshold, determining that the first eye image does not match the eye image.
Optionally, the method further includes:
after it is detected that the image does not include an image matching the preset facial image and before detecting whether a preset event occurs, sending third prompt information, where the third prompt information is used to prompt the anchor to face the screen.
Optionally, when it is detected that the image includes an image matching the preset facial image, the method further includes:
acquiring a second facial image of the anchor in the image, the second facial image being an image matching the preset facial image;
calculating the percentage of the area of the second facial image to the area of the image;
determining whether the percentage is greater than a third preset threshold; and
if it is not greater than the third preset threshold, sending fourth prompt information, where the fourth prompt information is used to prompt adjustment of the distance between the anchor and the screen.
Optionally, the first prompt information is one or more of the following information types: a voice prompt, a vibration prompt, a light prompt, and a text prompt.
In a second aspect, an embodiment of the present application provides a live-broadcast-based event reminding apparatus, the apparatus including:
a collecting module, configured to collect an image in a live video;
a first detecting module, configured to detect whether the image includes an image matching a preset facial image;
a second detecting module, configured to detect whether a preset event occurs when the detection result of the first detecting module is negative; and
a first prompting module, configured to send first prompt information when the detection result of the second detecting module is positive, where the first prompt information is used to prompt that the preset event occurs.
Optionally, the preset facial image is a complete facial image of the anchor.
Optionally, the first detecting module includes:
a detecting sub-module, configured to detect whether there is a first facial image in the image;
a first determining sub-module, configured to determine, when the detection result of the detecting sub-module is negative, that the image does not include an image matching the preset facial image; and
a first judging sub-module, configured to judge, when the detection result of the detecting sub-module is positive, whether the first facial image matches the complete facial image, and if not, to trigger the first determining sub-module.
Optionally, the first judging sub-module includes:
a first extracting unit, configured to extract first facial features of the first facial image;
a first matching unit, configured to match the extracted first facial features with corresponding facial features in the complete facial image;
a first counting unit, configured to count, in the extracted first facial image, the number of first facial features matching facial features in the complete facial image;
a first judging unit, configured to judge whether the number is less than a first preset threshold; and
a first determining unit, configured to determine, when the judgment result of the first judging unit is positive, that the first facial image does not match the complete facial image.
Optionally, the apparatus further includes:
a second prompting module, configured to send second prompt information after it is detected that the image does not include an image matching the preset facial image and before detecting whether a preset event occurs, where the second prompt information is used to prompt that the face of the anchor be displayed in the live video.
Optionally, the preset facial image is an eye image of the anchor;
the first detecting module includes:
an acquiring sub-module, configured to acquire a first eye image of the anchor in the image;
a second judging sub-module, configured to judge whether the first eye image matches the eye image; and
a second determining sub-module, configured to determine, when the judgment result of the second judging sub-module is negative, that the image does not include an image matching the preset facial image.
Optionally, the second judging sub-module includes:
a second extracting unit, configured to extract first eye features of the first eye image;
a second matching unit, configured to match the extracted first eye features with corresponding eye features in the eye image;
a second counting unit, configured to count, among the extracted first eye features, the number of first eye features matching eye features in the eye image;
a second judging unit, configured to judge whether the number is less than a second preset threshold; and
a second determining unit, configured to determine, when the judgment result of the second judging unit is positive, that the first eye image does not match the eye image.
Optionally, the apparatus further includes:
a third prompting module, configured to send third prompt information after it is detected that the image does not include an image matching the preset facial image and before detecting whether a preset event occurs, where the third prompt information is used to prompt the anchor to face the screen.
Optionally, when it is detected that the image includes an image matching the preset facial image, the apparatus further includes:
an acquiring module, configured to acquire a second facial image of the anchor in the image, the second facial image being an image matching the facial image;
a calculating module, configured to calculate the percentage of the area of the second facial image to the area of the image;
a judging module, configured to judge whether the percentage is greater than a third preset threshold; and
a fourth prompting module, configured to send fourth prompt information when the judgment result of the judging module is negative, where the fourth prompt information is used to prompt adjustment of the distance between the anchor and the screen.
Optionally, the first prompt information is one or more of the following information types: a voice prompt, a vibration prompt, a light prompt, and a text prompt.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
the memory is configured to store a computer program; and
the processor is configured to, when executing the program stored in the memory, perform any of the live-broadcast-based event reminding methods described above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, any of the live-broadcast-based event reminding methods described above is implemented.
As can be seen from the above technical solutions, an image in a live video is collected; it is detected whether the image includes an image matching a preset facial image; when it is detected that the image does not include an image matching the preset facial image, it is detected whether a preset event occurs; and if the preset event occurs, first prompt information is sent, where the first prompt information is used to prompt that the preset event occurs. With the technical solutions provided by the embodiments of the present application, when the anchor is not facing the screen or there is no anchor in the live video during a live broadcast, and a preset event occurs, the anchor can be effectively reminded, so that the anchor knows about the occurrence of the event. In this way, the anchor can give timely feedback on the events that occur, increasing the audience's enthusiasm for interaction, so that more audience support can be obtained and more gifts can be received.
附图说明
为了更清楚地说明本申请实施例和现有技术的技术方案,下面对实施例和现有技术中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例提供的一种基于直播的事件提醒方法的第一种流程图;
图2为本申请实施例提供的一种基于直播的事件提醒方法的第二种流程图;
图3为本申请实施例提供的一种基于直播的事件提醒方法的第三种流程图;
图4为本申请实施例提供的一种基于直播的事件提醒方法的第四种流程图;
图5为本申请实施例提供的一种基于直播的事件提醒装置的第一种结构示意图;
图6为本申请实施例提供的一种基于直播的事件提醒装置的第二种结构示意图;
图7为本申请实施例提供的一种基于直播的事件提醒装置的第三种结构示意图;
图8为本申请实施例提供的一种基于直播的事件提醒装置的第四种结构示意图;
图9为本申请实施例提供的一种电子设备的结构示意图。
具体实施方式
为使本申请的目的、技术方案、及优点更加清楚明白,以下参照附图并举实施例,对本申请进一步详细说明。显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
为了解决在直播过程中当有预设事件发生时如何提醒未正视屏幕的主播的问题,本申请实施例提供了一种基于直播的事件提醒方法及装置。其中,基于直播的事件提醒方法包括:
采集直播视频中的图像;
检测图像中是否包括与预设的面部图像相符的图像;
当检测到图像中不包括与预设的面部图像相符的图像时,检测是否发生预设事件;
如果发生预设事件,发出第一提示信息,第一提示信息用于提示发生预设事件。
通过本申请实施例提供的技术方案,在直播过程中当主播未正视屏幕或者直播视频中没有主播、且有预设事件发生时,可以有效地提醒主播,从而使得主播知道事件的发生。这样,主播才能对所发生的事件做出及时的反馈,提高观众互动的积极性,从而可以获得更多观众的支持以及接收到更多的礼物。
下面首先对本申请实施例提供的一种基于直播的事件提醒方法进行介绍。本申请实施例提供的一种基于直播的事件提醒方法可以应用于手机、平板、电脑等电子设备，还可以应用于其他具有显示功能和摄像功能的电子设备，在此不作限定。
如图1所示,本申请实施例提供的一种基于直播的事件提醒方法,包括如下步骤。
S101,采集直播视频中的图像。
采集的图像来源于直播视频中每一帧的帧图像,其中,图像的采集方式可以是每间隔预设时长采集一次直播视频的图像。预设时长可以自定义设定。例如,预设时长为2毫秒,那么,每隔2毫秒从直播视频中采集一次图像。
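作为一个说明性的示例，下面给出按预设时长间隔从直播视频流中采集帧图像的Python代码草图（基于OpenCV库；其中的视频源地址、2毫秒的间隔时长等均为示例假设，并非对采集方式的限定）：

```python
import time
import cv2  # OpenCV，此处仅用于读取视频流帧图像（示例假设）

def sample_frames(stream_url, interval_seconds=0.002):
    """每隔预设时长从直播视频流中采集一帧图像（示意实现）。

    stream_url 为假设的直播流地址；interval_seconds 对应文中"预设时长"的示例（2毫秒）。
    """
    capture = cv2.VideoCapture(stream_url)
    last_sample_time = 0.0
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        now = time.time()
        # 仅在距离上次采集超过预设时长时，才将该帧作为采集到的图像
        if now - last_sample_time >= interval_seconds:
            last_sample_time = now
            yield frame
    capture.release()
```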
S102,检测图像中是否包括与预设的面部图像相符的图像,如果否,执行步骤S103。
预设的面部图像为预先存储的主播的面部图像,该预设的面部图像可以分为两种:完整面部图像和局部面部图像。其中,完整面部图像可以是主播的正面脸,还可以是主播的非正面脸,非正面脸可以是主播在抬头、低头或者侧脸等动作时在直播视频中所呈现的脸部。完整面部图像为非正面脸时,非正面脸也需包括五官以及其他相应的脸部特征,以便于可以作为完整面部特征的参考。局部面部图像可以为眼部图像。
当预设的面部图像为主播的完整面部图像时,通过检测直播视频中是否有主播的面部图像,以确保主播在直播过程中正对屏幕,可以对预设事件及时给予反馈,进而提高观众互动的积极性。
当预设的面部图像为主播的局部面部图像时，通过检测在直播过程中主播是否正视屏幕，并在检测出主播没有正视屏幕时，提示主播正视屏幕。这样，主播可以注意到预设事件的发生，并在预设事件发生时及时给予反馈，从而更好地与观众互动交流。
检测图像中是否包括与预设的面部图像相符的图像的一种实现方式是通过人脸识别进行检测。当主播启动直播应用程序时,人脸识别功能开启,并且,在直播过程中人脸识别功能可以实时地对直播视频中主播的脸进行监测。
在检测到图像中包括与预设的面部图像相符的图像时，说明直播视频中有主播的脸部，可以不进行处理，还可以继续判断主播的脸部在直播视频中所占面积的百分比是否满足要求。在此不做限定。
S103,检测是否发生预设事件,如果是,执行步骤S104。
其中,预设事件可以为直播过程中发生的与观众有关联的事件,可以包括观众的留言、观众送的礼物,观众的招呼、观众进入主播的直播房间等事件中的至少一种,在此不做限定。
如果未检测到发生预设事件,则可以不做处理。
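预设事件的检测可以理解为对直播间消息流的监听。下面是一个示意性的Python代码草图，其中的消息队列对象及事件类型字段均为说明用的假设，并非对事件获取方式的限定：

```python
# 预设事件类型的示例集合：观众留言、送礼物、打招呼、进入直播间
PRESET_EVENT_TYPES = {"comment", "gift", "greeting", "enter_room"}

def detect_preset_event(message_queue):
    """从直播间消息队列中检测是否发生预设事件（示意实现）。

    message_queue 为假设的消息队列对象（例如 queue.Queue），
    队列元素假设为带 "type" 字段的字典；返回检测到的第一个预设事件，没有则返回 None。
    """
    while not message_queue.empty():
        event = message_queue.get()
        if event.get("type") in PRESET_EVENT_TYPES:
            return event
    return None
```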
S104,发出第一提示信息。
其中,第一提示信息用于提示发生预设事件。第一提示信息可以为语音提示、振动提示、灯光提示、文字提示等信息类型中的至少一种。
针对语音提示,语音提示的内容可以是预设的,例如,语音提示的内容可以为“您没有正视显示屏”、“有观众给您送礼物”等。为了加强语音提示的作用,一种实施方式中,在进行语音提示时,将音量调至最大,进而可以以最大音量进行语音提示。这样,可以达到对主播加强提醒的目的,并且,即使在嘈杂的环境中还是可以通过语音提示主播。
针对振动提示,可以以不同振动的方式进行提示,振动方式可以是预设的。例如,振动的方式可以为连续振动、间隔振动等。其中,间隔振动中每次振动的时长以及相邻两次振动的间隔时长均是可以预设的,例如,间隔振动可以设置为:每次振动的时长为2秒,相邻两次振动的间隔时长为1秒。这样,每次振动2秒后便停1秒时间,然后再振动2秒,如此循环重复地振动,直至振动提示关闭。
针对灯光提示,用于提示的灯光可以是显示屏的灯光,还可以是指示灯,在此不做限定。灯光的提示方式可以预先设定,一种实施方式中,灯光可以以预设的频率和预设的亮度进行闪烁。
一种实施方式中，针对语音提示、振动提示和灯光提示中的任一种或多种提示方式，在发生预设事件开始的预设提示时长内进行提示，在预设提示时长结束后，便不再进行相应的提示。其中，预设提示时长可以是自定义设定的。例如，第一提示信息为语音提示，预设提示时长为1分钟，那么，在发生预设事件开始的1分钟内反复地进行语音提示，当1分钟结束，不再进行语音提示。
另一种实施方式中,不设定提示时长,当发生预设事件时发出相应的提示,直至主播手动关闭,否则,一直持续地进行提示。
针对文字提示，文字提示的内容在悬浮窗中显示，悬浮窗可以在屏幕上一直显示，直至主播手动关闭悬浮窗。悬浮窗在屏幕上的位置可以预先设定，例如，可以将悬浮窗设置在屏幕顶端，此时，文字提示会在屏幕顶端显示。
第一提示信息可以采用单一信息类型进行提示，还可以同时采用多种信息类型组合的方式进行提示：将语音提示、振动提示、灯光提示和文字提示中的任意多种信息类型组合进行提示。例如，第一提示信息同时采用语音提示和振动提示，在发生预设事件时，同时进行语音和振动的提示。通过多种提示方式的组合应用，可以起到加强提示的作用。
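上述多种提示类型及其组合，可以理解为按配置依次触发各提示通道。下面的Python草图示意了这一思路，其中 play_voice、vibrate、flash_light、show_float_window 均为占位的假设接口，实际需替换为终端设备的相应能力：

```python
# 以下四个函数为占位实现，实际应替换为终端设备的语音、振动、灯光、悬浮窗接口（均为假设）
def play_voice(text, volume="max"):
    print(f"[语音提示] 音量={volume}: {text}")

def vibrate(on_seconds=2, off_seconds=1):
    print(f"[振动提示] 每次振动{on_seconds}秒，间隔{off_seconds}秒，循环进行")

def flash_light(frequency_hz=2, brightness=1.0):
    print(f"[灯光提示] 以频率{frequency_hz}Hz、亮度{brightness}闪烁")

def show_float_window(text, position="top"):
    print(f"[文字提示] 在屏幕{position}的悬浮窗中显示：{text}")

def send_first_prompt(prompt_text, channels=("voice", "vibration")):
    """按配置的信息类型组合发出第一提示信息（示意实现），channels 为示例组合。"""
    for channel in channels:
        if channel == "voice":
            play_voice(prompt_text, volume="max")            # 语音提示：以最大音量播报预设内容
        elif channel == "vibration":
            vibrate(on_seconds=2, off_seconds=1)             # 振动提示：振动2秒、间隔1秒的间隔振动
        elif channel == "light":
            flash_light(frequency_hz=2, brightness=1.0)      # 灯光提示：按预设频率和亮度闪烁
        elif channel == "text":
            show_float_window(prompt_text, position="top")   # 文字提示：屏幕顶端悬浮窗，直至手动关闭
```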
通过本申请实施例提供的技术方案,在直播过程中当主播未正视屏幕或者直播视频中没有主播、且有预设事件发生时,可以有效地提醒主播,从而使得主播知道事件的发生。这样,主播才能对所发生的事件做出及时的反馈,提高观众互动的积极性,从而可以获得更多观众的支持以及接收到更多的礼物。
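将上述步骤S101至S104串联起来，整体流程可以用如下Python主循环示意（其中帧来源、面部检测、事件检测与提示均以假设的回调形式传入，仅用于说明流程，并非具体实现）：

```python
def event_reminder_loop(frames, face_matches, has_preset_event, send_prompt):
    """基于直播的事件提醒主流程示意，对应图1中的S101至S104。

    参数均为假设的可迭代对象/回调：frames 依次产出采集到的帧图像；
    face_matches(frame) 判断帧中是否包括与预设的面部图像相符的图像；
    has_preset_event() 检测是否发生预设事件；send_prompt(text) 发出第一提示信息。
    """
    for frame in frames:                      # S101：采集直播视频中的图像
        if face_matches(frame):               # S102：包括与预设面部图像相符的图像
            continue                          # 主播正对并正视屏幕，可不做处理
        if has_preset_event():                # S103：检测是否发生预设事件
            send_prompt("有观众给您送礼物啦")   # S104：发出第一提示信息
```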
在上述图1及图1所对应的实施例的基础上,下面结合另一具体的实施例,当预设的面部图像为主播的完整面部图像的情况下,对本申请实施例提供的一种基于直播的事件提醒方法进行介绍。
如图2所示,本申请实施例提供的一种基于直播的事件提醒方法,包括如下步骤:
S201,采集直播视频中的图像。
本实施例中,步骤S201与上述图1的实施例中的步骤S101相同,在此不做赘述。
S202,检测图像中是否有第一面部图像;如果是,执行步骤S203;如果否,执行步骤S204。
检测第一面部图像的方式可以是人脸识别的方式。其中,所检测的第一面部图像可以是图像中的正面脸图像,还可以是部分脸图像。部分脸图像为仅包括部分脸部特征的面部图像,例如,侧脸图像。
S203,判断第一面部图像与完整面部图像是否匹配;如果否,执行步骤S204。
如果第一面部图像与预设的完整面部图像匹配,则可以确定主播出现在直播视频中;如果第一面部图像与预设的完整面部图像不匹配,则可以确定图像中不包括与预设的面部图像相符的图像,此时,发出第二提示信息。
一种实施方式中，判断第一面部图像与完整面部图像是否匹配（S203）的步骤可以包括如下步骤。
首先,提取第一面部图像的第一面部特征。其中,第一面部特征可以包括第一面部图像中眼睛特征、眉毛特征、鼻子特征、耳朵特征、嘴巴特征、下巴特征、额头特征等特征中的至少一种。第一面部特征可以通过人脸识别的方式从第一面部图像中提取出来。
然后,将所提取的第一面部特征与完整面部图像中对应的面部特征进行匹配。
其中,完整面部图像中的面部特征可以是预先存储的,预先存储的面部特征中可以包括主播的眼睛特征、眉毛特征、鼻子特征、耳朵特征、嘴巴特征、下巴特征、额头特征等特征中的至少一种。预设的完整面部图像中的面部特征可以通过人脸识别方式提取出来。
所提取的第一面部特征的种类可以根据完整面部图像中的面部特征来确定。也就是说,所提取的第一面部特征的种类是完整面部图像中的面部特征所包括的。例如,提取眼睛特征作为第一面部特征时,则在完整面部图像中的面部特征中至少包括有眼睛特征。
基于上述第一面部特征的种类与完整面部图像中的面部特征的关系，一种实施方式中，所提取的第一面部特征的种类与所存储的完整面部图像中的面部特征的种类是一一对应的。示例地，预先存储的完整面部图像中的特征包括眼睛特征、眉毛特征、鼻子特征、耳朵特征、嘴巴特征、下巴特征、额头特征；那么，所提取的第一面部特征包括眼睛特征、眉毛特征、鼻子特征、耳朵特征、嘴巴特征、下巴特征、额头特征。
可以预先设定所需提取的面部图像中的特征类型。示例地,预先设定仅提取五官特征:眼睛特征、眉毛特征、鼻子特征、耳朵特征以及嘴巴特征。预先存储的完整面部图像中的特征仅包括眼睛特征、眉毛特征、鼻子特征、耳朵特征和嘴巴特征,所提取的第一面部图像的第一面部特征也仅包括第一面部图像中的眼睛特征、眉毛特征、鼻子特征、耳朵特征和嘴巴特征。
在进行第一面部特征与完整面部图像中的特征匹配时,将所提取的第一面部特征与预先存储的完整面部图像中的特征进行一一比较,即同一种特征类型之间进行比较:第一面部特征中的眼睛特征与完整面部图像的特征中的眼睛特征进行比较,第一面部特征中的嘴巴特征与完整面部图像的特征中的嘴巴特征进行比较等。
第三步,统计所提取的第一面部特征中,与完整面部图像中的面部特征相匹配的第一面部特征的数量。
例如，当第一面部图像与完整面部图像进行特征匹配时，针对五官特征分别进行匹配，匹配的结果是：第一面部图像中的眉毛特征、眼睛特征和鼻子特征分别与完整面部图像中的眉毛特征、眼睛特征和鼻子特征相匹配，则统计出所提取的第一面部特征中，与完整面部图像中的面部特征相匹配的第一面部特征的数量为3。
在统计出相匹配的第一面部特征的数量之后,判断所统计出的数量是否小于第一预设阈值;如果小于,则判定第一面部图像与完整面部图像不匹配。
其中,第一预设阈值可以是自定义设定的。如果大于或者等于该第一预设阈值,则可以认为第一面部图像与完整面部图像相匹配。
例如，第一预设阈值为3，当所统计出的第一面部图像中与完整面部图像中相匹配的面部特征的数量为2时，则可以确定该第一面部图像与完整面部图像不匹配。
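上述"提取第一面部特征、逐一比对、统计匹配数量并与第一预设阈值比较"的过程，可以用下面的Python草图示意。其中特征以向量表示、以余弦相似度作为单个特征的比对方式，均为示例假设，实际可采用任意人脸识别算法实现：

```python
import numpy as np

FEATURE_KEYS = ["eyes", "eyebrows", "nose", "ears", "mouth", "chin", "forehead"]

def feature_similar(feat_a, feat_b, min_cosine=0.9):
    """单个特征的示例比对方式：特征向量余弦相似度达到阈值即视为匹配（示例假设）。"""
    if feat_a is None or feat_b is None:
        return False
    a, b = np.asarray(feat_a, dtype=float), np.asarray(feat_b, dtype=float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return denom > 0 and float(a @ b) / denom >= min_cosine

def is_face_matching(first_features, stored_features, first_threshold=3):
    """统计相匹配的第一面部特征数量，并与第一预设阈值比较（示意实现）。

    first_features / stored_features 假设为 {特征类型: 特征向量} 的字典，
    分别对应第一面部图像的第一面部特征和预先存储的完整面部图像的面部特征。
    """
    matched_count = 0
    for key in FEATURE_KEYS:
        # 同一种特征类型之间进行比较
        if key in stored_features and feature_similar(first_features.get(key), stored_features[key]):
            matched_count += 1
    # 相匹配的数量小于第一预设阈值时，判定第一面部图像与完整面部图像不匹配
    return matched_count >= first_threshold
```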
S204,判定图像中不包括与预设的面部图像相符的图像。
在本实施例中,预设的面部图像为主播的完整面部图像。其中,完整面部图像可以是正面脸,还可以是非正面脸。正面脸和非正面脸均包括五官以及其他脸部特征,以便于可以作为完整面部特征的参考。
当在图像中没有检测到第一面部图像时,可以认为,视频图像中不存在任何人脸图像,此时,可以确定该图像中不存在主播的面部图像。进而可以确定出该图像中不包括与预设的面部图像相符的图像。
一种实施方式中,在检测到图像中不包括与预设的面部图像相符的图像后,在检测是否发生预设事件之前,发出第二提示信息。其中,第二提示信息用于提示在直播视频中显示主播的面部,使得主播可以将最好的一面展现给观众,以获得更多观众的支持和喜爱。
第二提示信息可以为语音提示、振动提示、灯光提示、文字提示等多种信息类型中的任一种或多种。
其中,第二提示信息可以和第一提示信息相同,也可以不相同。在第一提示信息与第二提示信息不相同时,一种实现方式中,第一提示信息与第二提示信息可以分别采用不同类型的信息提示。例如,第一提示信息采用语音提示,第二提示信息采用振动提示。另一种实现方式中,第一提示信息与第二提示信息可以采用相同类型的信息提示,但是信息提示的内容各不相同。例如,第一提示信息与第二提示信息均采用语音提示,而第一提示信息的语音提示的内容可以为“有观众给您送礼物啦”,第二提示信息的语音提示的内容为“您没有出现在直播视频中”。
另外,因为第一提示信息和第二提示信息分别针对不同的情况下进行提示,可以设定第一提示信息和第二提示信息的优先级别。一种实施方式中,第一提示信息的优先级高于第二提示信息。在检测到主播的脸未正对显示屏后,且在检测是否发生预设事件之前,发出第二提示信息;当检测到发生预设事件时,发出第一提示信息。而此时,若第二提示信息还在持续提示时,第一提示信息会将第二提示信息覆盖,这样,仅会发出第一提示信息。
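第一提示信息优先级高于第二提示信息（以及后文的第三提示信息）这一覆盖策略，可以用简单的优先级管理逻辑示意如下（Python草图，提示的启动与停止接口以回调形式传入，均为假设）：

```python
PROMPT_PRIORITY = {"first": 2, "second": 1}  # 示例：第一提示信息的优先级高于第二提示信息

class PromptManager:
    """按优先级管理提示信息：高优先级提示会覆盖仍在持续的低优先级提示（示意实现）。"""

    def __init__(self, start_prompt, stop_prompt):
        # start_prompt(name, content) / stop_prompt(name) 为假设的提示启动、停止回调
        self._start, self._stop = start_prompt, stop_prompt
        self.active = None  # 当前仍在持续提示的名称

    def emit(self, name, content):
        if self.active is not None:
            if PROMPT_PRIORITY[name] <= PROMPT_PRIORITY[self.active]:
                return               # 已有同级或更高优先级的提示在持续，忽略本次提示
            self._stop(self.active)  # 覆盖：停止仍在持续的低优先级提示
        self.active = name
        self._start(name, content)   # 发出新的提示，例如第一提示信息覆盖第二提示信息
```

若需要同时支持第三提示信息，只需在 PROMPT_PRIORITY 中按同样方式增加对应的优先级条目即可。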
S205,检测是否发生预设事件,如果是,执行步骤S206。
S206,发出第一提示信息。
本实施例中,步骤S205和S206与上述图1的实施例中的步骤S103和S104相同,在此不做赘述。
通过本申请实施例提供的技术方案,在直播过程中当主播未正视屏幕或者直播视频中没有主播、且有预设事件发生时,可以有效地提醒主播,从而使得主播知道事件的发生。这样,主播才能对所发生的事件做出及时的反馈,提高观众互动的积极性,从而可以获得更多观众的支持以及接收到更多的礼物。
在上述图1及图1所对应的实施例的基础上,下面结合另一具体的实施例,当预设的面部图像为主播的眼部图像的情况下,对本申请实施例提供的一种基于直播的事件提醒方法进行介绍。
如图3所示,本申请实施例提供的一种基于直播的事件提醒方法,包括如下步骤。
S301,采集直播视频中的图像。
本实施例中,步骤S301与上述图1的实施例中的步骤S101相同,在此不做赘述。
S302,获取图像中主播的第一眼部图像。
其中，第一眼部图像可以通过人脸识别方式获得。通过人脸识别，可以对所获取的第一眼部图像进行特征提取。例如，通过人脸识别，所提取的眼部特征可以为：眼球的位置。
S303,判断第一眼部图像与眼部图像是否匹配;如果否,执行步骤S304。
一种实施方式中,首先,提取第一眼部图像的第一眼部特征。其中,第一眼部特征可以为眼球的位置特征,还可以包括眼睛的其他特征,在此不作限定。
将所提取的第一眼部特征与眼部图像中对应的眼部特征进行匹配;
其中,眼部图像可以是预先存储的,眼部图像中的眼部特征也可以是预先存储的。眼部特征可以包括多个类型的特征,例如,眼球的位置特征等,其中,预先存储的眼球位置特征为主播正视屏幕时的眼球位置。
统计所提取的第一眼部特征中,与眼部图像中的眼部特征相匹配的第一眼部特征的数量;判断数量是否小于第二预设阈值;如果小于,判定第一眼部图像与眼部图像不匹配。
其中，第二预设阈值可以是自定义设定的。如果大于或者等于该第二预设阈值，则可以认为第一眼部图像与眼部图像相匹配，进而可以确定主播在直播过程中正视屏幕。
示例地,在预先存储的眼部图像中对应的特征为眼球位置特征时,所提取的第一眼部图像的第一眼部特征为眼球位置特征,将第一眼部图像中的眼球位置特征与预先存储的眼部图像中的眼球位置特征进行比较,如果第一眼部图像中的眼球位置特征与预先存储的眼部图像中的眼球位置特征相匹配,则可以确定图像中包括与预设的面部图像相符的图像,即主播正视屏幕;如果第一眼部图像中的眼球位置特征与预先存储的眼部图像中的眼球位置特征不匹配,则可以确定图像中不包括与预设的面部图像相符的图像。
另外,如果判断第一眼部图像与眼部图像相匹配,则可以判定主播出现在直播视频中,并且主播正视屏幕。
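以眼球位置作为眼部特征时，一个简化的示意做法是：将当前帧中提取出的眼球位置与预先存储的"正视屏幕"时的眼球位置比较，偏移超过允许范围即判定不匹配。下面的Python草图仅示意这一单一特征的比对，坐标表示与阈值均为示例假设；多个眼部特征的数量统计及其与第二预设阈值的比较逻辑，与前述面部特征的做法类似：

```python
import math

def is_gazing_at_screen(pupil_position, stored_pupil_position, max_offset=0.1):
    """以眼球位置特征判断主播是否正视屏幕的简化示意。

    pupil_position 为当前帧第一眼部图像中提取出的眼球归一化坐标 (x, y)，提取方式不限；
    stored_pupil_position 为预先存储的、主播正视屏幕时的眼球坐标；
    max_offset 为示例性的允许偏移量，偏移超过该值即判定不匹配，即主播未正视屏幕。
    """
    offset = math.hypot(pupil_position[0] - stored_pupil_position[0],
                        pupil_position[1] - stored_pupil_position[1])
    return offset <= max_offset
```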
S304,判定图像中不包括与预设的面部图像相符的图像。
本实施例中,预设的面部图像为主播的眼部图像,当判断出第一眼部图像与预设的主播的眼部图像不匹配时,可以确定图像中不包括与预设的面部图像相符的图像,即主播在直播过程中没有正视屏幕。
一种实施方式中,在检测到图像中不包括与预设的面部图像相符的图像后,并且在检测是否发生预设事件之前,发出第三提示信息。
其中,第三提示信息用于提示主播正视屏幕,第三提示信息可以为语音提示、振动提示、灯光提示、文字提示等类型中的任一种或多种。
第三提示信息可以与第一提示信息相同，也可以不相同。在第一提示信息与第三提示信息不相同时，一种实现方式中，第一提示信息与第三提示信息可以分别采用不同类型的信息提示。另一种实现方式中，第一提示信息与第三提示信息可以采用相同类型的信息提示，但是信息提示的内容各不相同。例如，第一提示信息与第三提示信息均采用语音提示，而第一提示信息的语音提示的内容可以为"有观众给您送礼物啦"，第三提示信息的语音提示的内容为"请您正视屏幕"。
另外,因为第三提示信息和第一提示信息分别针对不同的情况下进行提示,可以设定第三提示信息和第一提示信息的优先级别。一种实施方式中,第一提示信息的优先级高于第三提示信息。在检测到主播的脸未正对显示屏后,且在检测是否发生预设事件之前,发出第三提示信息;当检测到发生预设事件时,发出第一提示信息。而此时,若第三提示信息还在持续提示时,第一提示信息会将第三提示信息覆盖,这样,仅会发出第一提示信息。
S305,检测是否发生预设事件,如果是,执行步骤S306。
S306,发出第一提示信息。
本实施例中,步骤S305和S306与上述实施例中的步骤S103和S104相同,在此不做赘述。
通过本申请实施例提供的技术方案,在直播过程中当主播未正视屏幕或者直播视频中没有主播、且有预设事件发生时,可以有效地提醒主播,从而使得主播知道事件的发生。这样,主播才能对所发生的事件做出及时的反馈,提高观众互动的积极性,从而可以获得更多观众的支持以及接收到更多的礼物。
下面结合另一具体的实施例,对本申请实施例提供的一种基于直播的事件提醒方法进行介绍。
相应于上述图1对应的实施例,本申请实施例提供的一种基于直播的事件提醒方法,如图4所示,包括如下步骤。
S401,采集直播视频中的图像。
S402,检测图像中是否包括与预设的面部图像相符的图像;如果否,执行步骤S403,如果是,执行步骤S405。
S403,检测是否发生预设事件,如果是,执行步骤S404。
S404,发出第一提示信息。
本实施例中,步骤S401至S404与上述实施例中的步骤S101至S104相同,在此不做赘述。
S405,获取图像中主播的第二面部图像。
其中,第二面部图像为:与预设的面部图像相符的图像。第二面部图像的获取可以通过人脸识别的方式,所获取的第二面部图像可以是图像中主播的正面脸图像,还可以是主播的部分脸图像,即可以是仅包括部分脸部特征的面部图像,例如,侧脸图像。
S406,计算第二面部图像的面积占图像的面积的百分比。
百分比的大小反映出第二面部图像在屏幕上显示的图像大小：百分比越小，第二面部图像在屏幕上显示出来的图像越小，说明此时主播与屏幕的距离越远；百分比越大，第二面部图像在屏幕上显示出来的图像越大，说明此时主播与屏幕的距离越近。
S407,判断百分比是否大于第三预设阈值,如果否,执行步骤S408。
其中,第三预设阈值可以是预先设定的。
为了保证主播的脸在屏幕上显示出来的图像不会太小,可以预先设定最小的百分比,即第三预设阈值。只有在大于所设定的最小百分比时,才符合在屏幕上显示的图像的要求;当不大于所设定的最小百分比时,不符合在屏幕上显示的图像的要求,则发出提示信息以提示主播更靠近屏幕。
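第二面部图像面积占整帧图像面积的百分比及其与第三预设阈值的比较，可以由人脸检测框直接计算，示意如下（Python草图，其中人脸检测框以参数形式给出，阈值取值仅为示例）：

```python
def is_face_large_enough(frame_width, frame_height, face_box, third_threshold=0.1):
    """判断第二面部图像面积占整帧图像面积的百分比是否大于第三预设阈值（示意实现）。

    face_box 为假设的人脸检测框 (x, y, w, h)，对应第二面部图像所在区域；
    third_threshold 为示例阈值，0.1 表示面部面积至少应占整帧面积的10%。
    """
    _, _, face_w, face_h = face_box
    percentage = (face_w * face_h) / (frame_width * frame_height)
    # 百分比不大于第三预设阈值时，应发出第四提示信息，提示调整主播与屏幕的距离
    return percentage > third_threshold
```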
S408,发出第四提示信息。
第四提示信息用于提示调整主播与屏幕的距离，其中，第四提示信息可以为语音提示、振动提示、灯光提示、文字提示等不同类型的信息中的任一种或多种。
其中,第四提示信息可以和第一提示信息、第二提示信息以及第三提示信息相同,也可以不相同。在第一提示信息、第二提示信息、第三提示信息以及第四提示信息均不相同时,一种实施方式,第一提示信息、第二提示信息、第三提示信息以及第四提示信息可以分别采用不同类型的信息提示,例如,第一提示信息采用语音提示,第二提示信息采用振动提示,第三提示信息采用文字提示,第四提示信息采用灯光提示。
另一种实施方式,第一提示信息、第二提示信息、第三提示信息以及第四提示信息任意两者之间可以采用相同类型的信息提示,但是信息提示的内容各不相同,例如,第一提示信息、第二提示信息及第三提示信息均采用语音提示,而第一提示信息的语音提示的内容可以为“有观众给您送礼物啦”,第二提示信息的语音提示的内容为“您没有在直播视频中”,第三提示信息的语音提示的内容可以为“请您正视屏幕”,第四提示信息的语音提示的内容可以为“请靠近显示屏”。
在直播过程中当主播距离摄像头比较远时，呈现在直播视频上的画面会比较小，甚至可能因为画面太小以至于不能识别主播，因此，为了避免这种情况，通过本申请实施例提供的技术方案，及时地提醒主播调整与屏幕之间的距离，从而使得主播可以更好地呈现给观众，进而获得观众的喜爱和支持。
结合上述方法实施例,本申请实施例还提供一种基于直播的事件提醒装置,如图5所示,装置包括:
采集模块510,用于采集直播视频中的图像;
第一检测模块520,用于检测图像中是否包括与预设的面部图像相符的图像;
第二检测模块530,用于当第一检测模块的检测结果为否时,检测是否发生预设事件;
第一提示模块540，用于当第二检测模块的检测结果为是时，发出第一提示信息，第一提示信息用于提示发生预设事件。
一种实施方式中,第一提示信息为以下信息类型中的一种或多种:语音提示、振动提示、灯光提示、文字提示。
通过本申请实施例提供的技术方案,在直播过程中当主播未正视屏幕或者直播视频中没有主播、且有预设事件发生时,可以有效地提醒主播,从而使得主播知道事件的发生。这样,主播才能对所发生的事件做出及时的反馈,提高观众互动的积极性,从而可以获得更多观众的支持以及接收到更多的礼物。
在图5的实施例的基础上,本申请实施例还提供另一具体实施例,一种实施方式中,预设的面部图像为主播的完整面部图像。
如图6所示,本申请实施例提供的一种基于直播的事件提醒装置,在上述图5以及图5所对应的实施例的基础上,第一检测模块520可以包括:
检测子模块521,用于检测图像中是否有第一面部图像;
第一判定子模块522,用于当检测子模块的检测结果为否时,判定图像中不包括与预设的面部图像相符的图像;
第一判断子模块523,用于当检测子模块的检测结果为是时,判断第一面部图像与完整面部图像是否匹配;如果否,触发第一判定子模块522。
一种实施方式中,第一判断子模块523包括:
第一提取单元,用于提取第一面部图像的第一面部特征;
第一匹配单元,用于将所提取的第一面部特征与完整面部图像中对应的面部特征进行匹配;
第一统计单元,用于统计所提取的第一面部特征中,与完整面部图像中的面部特征相匹配的第一面部特征的数量;
第一判断单元,用于判断数量是否小于第一预设阈值;
第一判定单元,用于当第一判断单元的判断结果为是时,判定第一面部图像与完整面部图像不匹配。
一种实施方式中,装置还包括:第二提示模块,用于在检测到图像中不包括与预设的面部图像相符的图像后,在检测是否发生预设事件之前,发出第二提示信息,其中,第二提示信息用于提示在直播视频中显示主播的面部。
通过本申请实施例提供的技术方案,在直播过程中当主播未正视屏幕或者直播视频中没有主播、且有预设事件发生时,可以有效地提醒主播,从而使得主播知道事件的发生。这样,主播才能对所发生的事件做出及时的反馈,提高观众互动的积极性,从而可以获得更多观众的支持以及接收到更多的礼物。
在图5的基础上,本申请实施例还提供另一具体实施例,如图7所示,本申请实施例提供的一种基于直播的事件提醒装置,预设的面部图像为主播的眼部图像;
第一检测模块520可以包括:
获取子模块524,用于获取图像中主播的第一眼部图像;
第二判断子模块525,用于判断第一眼部图像与眼部图像是否匹配;
第二判定子模块526,用于当第二判断子模块的判断结果为否时,判定图像中不包括与预设的面部图像相符的图像。
一种实施方式中,第二判断子模块525包括:
第二提取单元,用于提取第一眼部图像的第一眼部特征;
第二匹配单元,用于将所提取的第一眼部特征与眼部图像中对应的眼部特征进行匹配;
第二统计单元,用于统计所提取的第一眼部特征中,与眼部图像中的眼部特征相匹配的第一眼部特征的数量;
第二判断单元,用于判断数量是否小于第二预设阈值;
第二判定单元,用于当第二判断单元的判断结果为是时,判定第一眼部图像与眼部图像不匹配。
一种实施方式中,装置还包括:
第三提示模块,用于在检测到图像中不包括与预设的面部图像相符的图像后,在检测是否发生预设事件之前,发出第三提示信息,其中,第三提示信息用于提示主播正视屏幕。
通过本申请实施例提供的技术方案,在直播过程中当主播未正视屏幕或者直播视频中没有主播、且有预设事件发生时,可以有效地提醒主播,从而使得主播知道事件的发生。这样,主播才能对所发生的事件做出及时的反馈,提高观众互动的积极性,从而可以获得更多观众的支持以及接收到更多的礼物。
在图5的基础上,本申请实施例还提供另一具体实施例,如图8所示,本申请实施例提供的一种基于直播的事件提醒装置,当检测到图像中包括与预设的面部图像相符的图像时,装置还可以包括:
获取模块550,用于获取图像中主播的第二面部图像,第二面部图像为:与面部图像相符的图像;
计算模块560,用于计算第二面部图像的面积占图像的面积的百分比;
判断模块570,用于判断百分比是否大于第三预设阈值;
第四提示模块580，用于当判断模块570的判断结果为否时，发出第四提示信息，其中，第四提示信息用于提示调整主播与屏幕的距离。
在直播过程中当主播距离摄像头比较远时，呈现在直播视频上的画面会比较小，甚至可能因为画面太小以至于不能识别主播，因此，为了避免这种情况，通过本申请实施例提供的技术方案，及时地提醒主播调整与屏幕之间的距离，从而使得主播可以更好地呈现给观众，进而获得观众的喜爱和支持。
对于装置实施例而言，由于其基本相似于方法实施例，所以描述得比较简单，相关之处参见方法实施例的部分说明即可。
本申请实施例还提供了一种电子设备,如图9所示,包括处理器910、通信接口920、存储器930和通信总线940,其中,处理器910,通信接口920,存储器930通过通信总线940完成相互间的通信,
存储器930,用于存放计算机程序;
处理器910,用于执行存储器930上所存放的程序时,实现如下步骤:
采集直播视频中的图像;
检测图像中是否包括与预设的面部图像相符的图像;
当检测到图像中不包括与预设的面部图像相符的图像时,检测是否发生预设事件;
如果发生预设事件，发出第一提示信息，第一提示信息用于提示发生预设事件。
上述电子设备提到的通信总线可以是外设部件互连标准(Peripheral Component Interconnect,PCI)总线或扩展工业标准结构(Extended Industry Standard Architecture,EISA)总线等。该通信总线可以分为地址总线、数据总线、控制总线等。为便于表示,图中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
通信接口用于上述电子设备与其他设备之间的通信。
存储器可以包括随机存取存储器(Random Access Memory,RAM),也可以包括非易失性存储器(Non-Volatile Memory,NVM),例如至少一个磁盘存储器。可选的,存储器还可以是至少一个位于远离前述处理器的存储装置。
上述的处理器可以是通用处理器,包括中央处理器(Central Processing Unit,CPU)、网络处理器(Network Processor,NP)等;还可以是数字信号处理器(Digital Signal Processing,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
本申请实施例提供了一种计算机可读存储介质,其特征在于,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述任一所述的基于直播的事件提醒方法。
需要说明的是，在本文中，诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来，而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且，术语"包括"、"包含"或者其任何其他变体意在涵盖非排他性的包含，从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素，而且还包括没有明确列出的其他要素，或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下，由语句"包括一个……"限定的要素，并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本说明书中的各个实施例均采用相关的方式描述，各个实施例之间相同相似的部分互相参见即可，每个实施例重点说明的都是与其他实施例的不同之处。尤其，对于基于直播的事件提醒装置、电子设备和机器可读存储介质实施例而言，由于其基本相似于方法实施例，所以描述得比较简单，相关之处参见方法实施例的部分说明即可。
以上所述仅为本申请的较佳实施例而已,并不用以限制本申请,凡在本申请的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本申请保护的范围之内。

Claims (22)

  1. 一种基于直播的事件提醒方法,其特征在于,所述方法包括:
    采集直播视频中的图像;
    检测所述图像中是否包括与预设的面部图像相符的图像;
    当检测到所述图像中不包括与预设的面部图像相符的图像时,检测是否发生预设事件;
    如果发生所述预设事件,发出第一提示信息,所述第一提示信息用于提示发生所述预设事件。
  2. 根据权利要求1所述的方法,其特征在于,所述预设的面部图像为主播的完整面部图像。
  3. 根据权利要求2所述的方法,其特征在于,所述检测所述图像中是否包括与预设的面部图像相符的图像的步骤,包括:
    检测所述图像中是否有第一面部图像;
    如果没有,判定所述图像中不包括与所述预设的面部图像相符的图像;
    如果有,判断所述第一面部图像与所述完整面部图像是否匹配;
    若不匹配,判定所述图像中不包括与所述预设的面部图像相符的图像。
  4. 根据权利要求3所述的方法,其特征在于,所述判断所述第一面部图像与所述完整面部图像是否匹配的步骤,包括:
    提取所述第一面部图像的第一面部特征;
    将所提取的第一面部特征与所述完整面部图像中对应的面部特征进行匹配;
    统计所提取的第一面部特征中,与所述完整面部图像中的面部特征相匹配的第一面部特征的数量;
    判断所述数量是否小于第一预设阈值;
    如果小于,判定所述第一面部图像与所述完整面部图像不匹配。
  5. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    在检测到所述图像中不包括与预设的面部图像相符的图像后,在检测是否发生预设事件之前,发出第二提示信息,其中,所述第二提示信息用于提示在所述直播视频中显示所述主播的面部。
  6. 根据权利要求1所述的方法,其特征在于,所述预设的面部图像为主播的眼部图像;
    所述检测所述图像中是否包括与预设的面部图像相符的图像的步骤,包括:
    获取所述图像中所述主播的第一眼部图像;
    判断所述第一眼部图像与所述眼部图像是否匹配;
    如果不匹配,判定所述图像中不包括与预设的面部图像相符的图像。
  7. 根据权利要求6所述的方法,其特征在于,所述判断所述第一眼部图像与所述眼部图像是否匹配的步骤,包括:
    提取所述第一眼部图像的第一眼部特征;
    将所提取的第一眼部特征与所述眼部图像中对应的眼部特征进行匹配;
    统计所提取的第一眼部特征中,与所述眼部图像中的眼部特征相匹配的第一眼部特征的数量;
    判断所述数量是否小于第二预设阈值;
    如果小于,判定所述第一眼部图像与所述眼部图像不匹配。
  8. 根据权利要求6所述的方法,其特征在于,所述方法还包括:
    在检测到所述图像中不包括与预设的面部图像相符的图像后,在检测是否发生预设事件之前,发出第三提示信息,其中,所述第三提示信息用于提示所述主播正视屏幕。
  9. 根据权利要求1所述的方法,其特征在于,当检测到所述图像中包括与预设的面部图像相符的图像时,所述方法还包括:
    获取所述图像中主播的第二面部图像,所述第二面部图像为:与所述预设的面部图像相符的图像;
    计算所述第二面部图像的面积占所述图像的面积的百分比;
    判断所述百分比是否大于第三预设阈值;
    如果不大于所述第三预设阈值,发出第四提示信息,其中,所述第四提示信息用于提示调整主播与屏幕的距离。
  10. 根据权利要求1-9任意一项所述的方法,其特征在于,所述第一提示信息为以下信息类型中的一种或多种:语音提示、振动提示、灯光提示、文字提示。
  11. 一种基于直播的事件提醒装置,其特征在于,所述装置包括:
    采集模块,用于采集直播视频中的图像;
    第一检测模块,用于检测所述图像中是否包括与预设的面部图像相符的图像;
    第二检测模块,用于当所述第一检测模块的检测结果为否时,检测是否发生预设事件;
    第一提示模块,用于当所述第二检测模块的检测结果为是时,发出第一提示信息,所述第一提示信息用于提示发生所述预设事件。
  12. 根据权利要求11所述的装置,其特征在于,所述预设的面部图像为主播的完整面部图像。
  13. 根据权利要求12所述的装置,其特征在于,所述第一检测模块包括:
    检测子模块,用于检测所述图像中是否有第一面部图像;
    第一判定子模块,用于当所述检测子模块的检测结果为否时,判定所述图像中不包括与所述预设的面部图像相符的图像;
    第一判断子模块，用于当所述检测子模块的检测结果为是时，判断所述第一面部图像与所述完整面部图像是否匹配；如果否，触发所述第一判定子模块。
  14. 根据权利要求13所述的装置,其特征在于,所述第一判断子模块包括:
    第一提取单元,用于提取所述第一面部图像的第一面部特征;
    第一匹配单元,用于将所提取的第一面部特征与所述完整面部图像中对应的面部特征进行匹配;
    第一统计单元,用于统计所提取的第一面部特征中,与所述完整面部图像中的面部特征相匹配的第一面部特征的数量;
    第一判断单元,用于判断所述数量是否小于第一预设阈值;
    第一判定单元,用于当所述第一判断单元的判断结果为是时,判定所述第一面部图像与所述完整面部图像不匹配。
  15. 根据权利要求12所述的装置,其特征在于,所述装置还包括:
    第二提示模块,用于在检测到所述图像中不包括与预设的面部图像相符的图像后,在检测是否发生预设事件之前,发出第二提示信息,其中,所述第二提示信息用于提示在所述直播视频中显示所述主播的面部。
  16. 根据权利要求11所述的装置,其特征在于,所述预设的面部图像为主播的眼部图像;
    所述第一检测模块包括:
    获取子模块,用于获取所述图像中所述主播的第一眼部图像;
    第二判断子模块,用于判断所述第一眼部图像与所述眼部图像是否匹配;
    第二判定子模块,用于当所述第二判断子模块的判断结果为否时,判定所述图像中不包括与预设的面部图像相符的图像。
  17. 根据权利要求16所述的装置,其特征在于,所述第二判断子模块包括:
    第二提取单元,用于提取所述第一眼部图像的第一眼部特征;
    第二匹配单元,用于将所提取的第一眼部特征与所述眼部图像中对应的眼部特征进行匹配;
    第二统计单元,用于统计所提取的第一眼部特征中,与所述眼部图像中的眼部特征相匹配的第一眼部特征的数量;
    第二判断单元,用于判断所述数量是否小于第二预设阈值;
    第二判定单元,用于当所述第二判断单元的判断结果为是时,判定所述第一眼部图像与所述眼部图像不匹配。
  18. 根据权利要求16所述的装置,其特征在于,所述装置还包括:
    第三提示模块,用于在检测到所述图像中不包括与预设的面部图像相符的图像后,在检测是否发生预设事件之前,发出第三提示信息,其中,所述第三提示信息用于提示所述主播正视屏幕。
  19. 根据权利要求11所述的装置,其特征在于,当检测到所述图像中包括与预设的面部图像相符的图像时,所述装置还包括:
    获取模块,用于获取所述图像中主播的第二面部图像,所述第二面部图像为:与所述面部图像相符的图像;
    计算模块,用于计算所述第二面部图像的面积占所述图像的面积的百分比;
    判断模块,用于判断所述百分比是否大于第三预设阈值;
    第四提示模块,用于所述判断模块的判断结果为否时,发出第四提示信息,其中,所述第四提示信息用于提示调整主播与屏幕的距离。
  20. 根据权利要求11-19任意一项所述的装置,其特征在于,所述第一提示信息为以下信息类型中的一种或多种:语音提示、振动提示、灯光提示、文字提示。
  21. 一种电子设备,其特征在于,包括处理器、通信接口、存储器和通信总线,其中,处理器,通信接口,存储器通过通信总线完成相互间的通信;
    存储器,用于存放计算机程序;
    处理器,用于执行存储器上所存放的程序时,实现权利要求1-10任一所述的方法步骤。
  22. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现权利要求1-10任一所述的方法步骤。
PCT/CN2018/097992 2017-08-30 2018-08-01 一种基于直播的事件提醒方法及装置 WO2019042064A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/642,698 US11190853B2 (en) 2017-08-30 2018-08-01 Event prompting method and apparatus based on live broadcasting

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710762637.8A CN107493515B (zh) 2017-08-30 2017-08-30 一种基于直播的事件提醒方法及装置
CN201710762637.8 2017-08-30



Also Published As

Publication number Publication date
US11190853B2 (en) 2021-11-30
US20200204871A1 (en) 2020-06-25
CN107493515A (zh) 2017-12-19
CN107493515B (zh) 2021-01-01

