CN111176440A - Video call method and wearable device

Video call method and wearable device

Info

Publication number
CN111176440A
CN111176440A (application CN201911154489.7A; granted publication CN111176440B)
Authority
CN
China
Prior art keywords
target
special effect
wearable device
video
scene
Prior art date
Legal status (assumed; Google has not performed a legal analysis)
Granted
Application number
CN201911154489.7A
Other languages
Chinese (zh)
Other versions
CN111176440B (en)
Inventor
王强 (Wang Qiang)
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd
Priority application: CN201911154489.7A, granted as CN111176440B
Divisional application: CN202410083247.8A, published as CN117908677A
Publication of CN111176440A; application granted; publication of CN111176440B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A video call method and a wearable device are provided. During a video call between the wearable device and a video object, image information and voice information of the device's wearing user and of the video object are comprehensively analyzed to obtain a current video atmosphere type; a target special effect model matching the current video atmosphere is determined from a special effect model library; and the scene special effect corresponding to the target special effect model is controlled to be displayed on the display screen of the wearable device, so as to enhance or ease the mood. Implementing the embodiments of the application makes video calls markedly more engaging and helps increase how often video calls take place.

Description

Video call method and wearable device
Technical Field
The application relates to the technical field of wearable devices, and in particular to a video call method and a wearable device.
Background
When a user makes a video call on a smart watch, the watch's display screen usually only shows the user's picture and the video object's picture in real time, achieving a face-to-face communication effect. As society continues to develop, merely displaying the two parties' pictures can no longer satisfy people's increasingly rich needs, and it does little to encourage video calls to take place more often.
Disclosure of Invention
The embodiments of the application disclose a video call method and a wearable device, which help increase how often video calls take place.
A first aspect of the present application discloses a video call method, including:
in the process of a video call between a wearable device and a video object, comprehensively analyzing image information and voice information of a wearing user of the wearable device and of the video object to obtain a current video atmosphere type;
determining, from a special effect model library, a target special effect model matching the current video atmosphere;
and controlling the scene special effect corresponding to the target special effect model to be displayed on a display screen of the wearable device, so as to enhance or ease the mood.
As an optional implementation manner, in the first aspect of the embodiments of the present application, after the image information and voice information of the wearing user and the video object are comprehensively analyzed during the video call and the current video atmosphere type is obtained, the method further includes:
determining, according to the current video atmosphere type, a target color temperature and a target illuminance for a lighting device in the environment where the wearing user is located;
and sending a parameter adjustment request carrying the target color temperature and the target illuminance to the lighting device, so that the lighting device adjusts its color temperature to the target color temperature and its illuminance to the target illuminance.
As an optional implementation manner, in the first aspect of the embodiments of the present application, after the target special effect model matching the current video atmosphere is determined from the special effect model library, the method further includes:
acquiring a historical browsing record of an online video platform used by the wearing user;
determining, according to the historical browsing record, a target scene special effect from among the scene special effects corresponding to the target special effect model;
wherein the controlling the scene special effect corresponding to the target special effect model to be displayed on a display screen of the wearable device includes:
controlling the target scene special effect to be displayed on the display screen of the wearable device.
As an optional implementation manner, in the first aspect of the embodiments of the present application, the controlling the target scene special effect to be displayed on the display screen of the wearable device includes:
detecting instruction information indicating a display area for the target scene special effect;
and controlling the target scene special effect to be displayed on the display screen of the wearable device in the area indicated by the instruction information.
As an optional implementation manner, in the first aspect of this embodiment of the present application, the method further includes:
detecting, when the video call is terminated, whether a grouping request for the video object input by the wearing user is received;
if the grouping request is received, evaluating an intimacy index between the wearing user and the video object according to all the scene special effects displayed during the video call;
setting identification information for the social account of the video object according to the intimacy index;
and determining a target group matching the identification information from among the groups contained in the wearing user's social account, and adding the video object's social account to the target group.
A second aspect of the embodiments of the present application discloses a wearable device, including:
the analysis unit is used for comprehensively analyzing, in the process of a video call between the wearable device and a video object, image information and voice information of a wearing user of the wearable device and of the video object to obtain a current video atmosphere type;
the determining unit is used for determining, from a special effect model library, a target special effect model matching the current video atmosphere;
and the display unit is used for controlling the scene special effect corresponding to the target special effect model to be displayed on a display screen of the wearable device, so as to enhance or ease the mood.
As an optional implementation manner, in the second aspect of the embodiments of the present application, the determining unit is further configured to determine, after the analysis unit comprehensively analyzes the image information and voice information of the wearing user and the video object during the video call and obtains the current video atmosphere type, a target color temperature and a target illuminance for a lighting device in the environment where the wearing user is located according to the current video atmosphere type;
the wearable device further comprises:
and the sending unit is used for sending a parameter adjustment request carrying the target color temperature and the target illuminance to the lighting device, so that the lighting device adjusts its color temperature to the target color temperature and its illuminance to the target illuminance.
As an optional implementation manner, in a second aspect of embodiments of the present application, the wearable device further includes:
the obtaining unit is used for obtaining, after the determining unit determines the target special effect model matching the current video atmosphere from the special effect model library, a historical browsing record of an online video platform used by the wearing user, and for determining, according to the historical browsing record, a target scene special effect from among the scene special effects corresponding to the target special effect model;
wherein the manner in which the display unit controls the scene special effect corresponding to the target special effect model to be displayed on the display screen of the wearable device is specifically:
the display unit is used for controlling the target scene special effect to be displayed on the display screen of the wearable device.
As an optional implementation manner, in the second aspect of the embodiments of the present application, the manner in which the display unit controls the target scene special effect to be displayed on the display screen of the wearable device is specifically:
the display unit is used for detecting instruction information indicating a display area for the target scene special effect, and controlling the target scene special effect to be displayed on the display screen of the wearable device in the area indicated by the instruction information.
As an optional implementation manner, in a second aspect of embodiments of the present application, the wearable device further includes:
the detection unit is used for detecting, when the video call is terminated, whether a grouping request for the video object input by the wearing user is received;
the evaluation unit is used for evaluating, when the grouping request is received, an intimacy index between the wearing user and the video object according to all the scene special effects displayed during the video call, and for setting identification information for the social account of the video object according to the intimacy index;
and the grouping unit is used for determining a target group matching the identification information from among the groups contained in the wearing user's social account, and adding the video object's social account to the target group.
A third aspect of an embodiment of the present application discloses a wearable device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the steps of the video call method disclosed in the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application discloses a computer-readable storage medium, on which computer instructions are stored, where the computer instructions, when executed, cause a computer to perform the steps of the video call method disclosed in the first aspect of the embodiments of the present application.
Compared with the prior art, the embodiments of the application have the following beneficial effects:
in the embodiments of the application, during a video call between the wearable device and a video object, image information and voice information of the wearing user and of the video object are comprehensively analyzed to obtain a current video atmosphere type; a target special effect model matching the current video atmosphere is determined from a special effect model library; and the scene special effect corresponding to the target special effect model is controlled to be displayed on the display screen of the wearable device, so as to enhance or ease the mood. In this way, different scene special effects are added as the video atmosphere changes during the call, which makes the video call markedly more engaging and helps increase how often video calls take place.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a video call method disclosed in an embodiment of the present application;
fig. 2 is a schematic flow chart of another video call method disclosed in the embodiment of the present application;
fig. 3 is a schematic flow chart of another video call method disclosed in the embodiment of the present application;
fig. 4 is a modular schematic diagram of a wearable device disclosed in an embodiment of the present application;
fig. 5 is a modular schematic diagram of another wearable device disclosed in embodiments of the present application;
fig. 6 is a modular schematic diagram of yet another wearable device disclosed in embodiments of the present application;
fig. 7 is a modular schematic diagram of yet another wearable device disclosed in embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the embodiments of the present application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments of the application disclose a video call method and a wearable device, which help increase how often video calls take place. Detailed descriptions are given below with reference to the accompanying drawings.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of a video call method disclosed in an embodiment of the present application, and as shown in fig. 1, the video call method may include the following steps:
101. During a video call between the wearable device and a video object, comprehensively analyze image information and voice information of a wearing user of the wearable device and of the video object to obtain a current video atmosphere type.
Optionally, before the image information and voice information of the wearing user and the video object are comprehensively analyzed to obtain the current video atmosphere type, the following steps may further be performed:
judging whether an arm-swing action input by the wearing user of the wearable device is received;
if the arm-swing action is received, acquiring the direction and force of the arm-swing action;
judging whether the direction of the arm-swing action is a preset direction, and whether the force of the arm-swing action is greater than a preset force;
and when the direction of the arm-swing action is the preset direction and its force is greater than the preset force, controlling the scene special effect mode to start, and proceeding to comprehensively analyze the image information and voice information of the wearing user and the video object to obtain the current video atmosphere type.
By implementing this method, the scene special effect mode of the video call can be started efficiently by detecting the wearing user's arm-swing action.
It should be noted that, in the embodiments of the present application, the scene special effect mode of the video call may also be started by pressing a virtual button on the video call interface instead of by an arm swing; this is not limited in the embodiments of the present application.
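As a rough illustration, the arm-swing gate described above might look like the following Python sketch. The sensor sampling, the preset direction, and the force threshold are assumptions made for illustration; the patent names no concrete sensor API or values.

```python
# Hypothetical preset values; the patent does not specify them.
PRESET_DIRECTION = "up"   # assumed preset swing direction
PRESET_FORCE = 12.0       # assumed force threshold, m/s^2

def classify_swing(ax, ay, az):
    """Reduce one accelerometer sample to a (direction, force) pair."""
    force = (ax * ax + ay * ay + az * az) ** 0.5   # magnitude stands in for "force"
    direction = "up" if ay > 0 else "down"         # crude vertical check
    return direction, force

def should_start_effect_mode(sample):
    """Start the scene special effect mode only if both checks pass."""
    direction, force = classify_swing(*sample)
    return direction == PRESET_DIRECTION and force > PRESET_FORCE

if should_start_effect_mode((0.3, 14.2, 1.1)):
    print("scene special effect mode enabled; begin atmosphere analysis")
```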
In the embodiments of the present application, the video atmospheres preset on the wearable device may include tension, longing, joy, and the like, and each video atmosphere type is associated with a plurality of facial features and a plurality of keywords. Based on this description, the comprehensively analyzing, during a video call between the wearable device and a video object, image information and voice information of the wearing user and of the video object may include:
during the video call, analyzing image information of the wearing user and of the video object to obtain facial features;
analyzing voice information of the wearing user and of the video object, and extracting words that indicate emotion;
and determining the current video atmosphere type from the video atmospheres preset on the wearable device according to the facial features and the words that indicate emotion.
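The matching step can be pictured as an overlap score between the extracted features and a profile stored for each preset atmosphere. Only the three atmosphere labels come from the text above; the feature sets, keywords, and scoring rule below are illustrative assumptions.

```python
# Each atmosphere is associated with facial features and keywords, as
# described above; the concrete entries here are invented examples.
ATMOSPHERE_PROFILES = {
    "joyful":  {"faces": {"smile", "raised_cheeks"}, "words": {"great", "haha", "love"}},
    "tense":   {"faces": {"frown", "pressed_lips"},  "words": {"exam", "deadline", "worried"}},
    "longing": {"faces": {"downcast_eyes"},          "words": {"miss", "come back"}},
}

def classify_atmosphere(facial_features, emotion_words):
    """Pick the preset atmosphere with the largest feature/keyword overlap."""
    def score(profile):
        return (len(profile["faces"] & facial_features)
                + len(profile["words"] & emotion_words))
    return max(ATMOSPHERE_PROFILES, key=lambda k: score(ATMOSPHERE_PROFILES[k]))

print(classify_atmosphere({"smile"}, {"haha", "love"}))  # -> joyful
```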
Optionally, in the embodiments of the application, after the current video atmosphere type is determined from the preset video atmospheres according to the facial features and the words that indicate emotion, the following steps may further be performed:
acquiring physiological parameters of the wearing user of the wearable device, the physiological parameters including at least blood glucose, blood pressure, body temperature, and respiratory rate;
judging whether the physiological parameters of the wearing user are within the parameter ranges corresponding to the current video atmosphere type;
and if the physiological parameters are within the parameter ranges corresponding to the current video atmosphere type, continuing to step 102.
By implementing this method, the current video atmosphere type is determined jointly from the facial features, the words that indicate emotion, and the wearing user's physiological parameters, which improves the accuracy with which the current video atmosphere type is determined.
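A minimal sketch of this physiological cross-check follows. The per-atmosphere parameter ranges are invented for illustration; the text above fixes only which parameters are read.

```python
# Assumed ranges for one atmosphere type; the patent gives no bounds.
RANGES = {
    "joyful": {
        "blood_glucose": (3.9, 7.8),    # mmol/L
        "systolic_bp":   (100, 135),    # mmHg
        "body_temp":     (36.0, 37.3),  # degrees C
        "resp_rate":     (12, 22),      # breaths per minute
    },
}

def atmosphere_confirmed(atmosphere, vitals):
    """Proceed to model selection only if every vital is in range."""
    bounds = RANGES.get(atmosphere, {})
    return all(lo <= vitals[name] <= hi for name, (lo, hi) in bounds.items())

vitals = {"blood_glucose": 5.2, "systolic_bp": 118, "body_temp": 36.6, "resp_rate": 16}
if atmosphere_confirmed("joyful", vitals):
    print("physiological check passed; continue to step 102")
```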
102. Determine, from the special effect model library, a target special effect model matching the current video atmosphere.
103. Control the scene special effect corresponding to the target special effect model to be displayed on the display screen of the wearable device, so as to enhance or ease the mood.
By implementing this method, changes in the video atmosphere are analyzed in real time during the call and different scene special effects are added accordingly. This makes the video call markedly more engaging, helps increase how often video calls take place, improves the accuracy with which the current video atmosphere type is determined, and allows the scene special effect mode of the video call to be started efficiently.
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart of another video call method disclosed in the embodiment of the present application, and the video call method shown in fig. 2 may include the following steps:
201. During a video call between the wearable device and a video object, comprehensively analyze image information and voice information of a wearing user of the wearable device and of the video object to obtain a current video atmosphere type.
202. Determine, according to the current video atmosphere type, a target color temperature and a target illuminance for a lighting device in the environment where the wearing user of the wearable device is located.
203. Send a parameter adjustment request carrying the target color temperature and the target illuminance to that lighting device, so that it adjusts its color temperature to the target color temperature and its illuminance to the target illuminance.
By executing steps 202 and 203, the illuminance and color temperature of the lighting device in the wearing user's environment are adjusted according to the current video atmosphere type, so that the lighting can assist in enhancing or easing the mood.
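The request sent in step 203 might be assembled as in the sketch below, assuming a JSON message format and an atmosphere-to-lighting lookup table, neither of which the text specifies.

```python
import json

# Assumed mapping from atmosphere type to lighting targets.
LIGHTING_PRESETS = {
    "joyful":  {"color_temp_k": 4500, "illuminance_lx": 300},
    "tense":   {"color_temp_k": 3000, "illuminance_lx": 150},
    "longing": {"color_temp_k": 2700, "illuminance_lx": 100},
}

def build_adjustment_request(atmosphere):
    """Build the parameter adjustment request carrying both targets."""
    preset = LIGHTING_PRESETS[atmosphere]
    return json.dumps({"type": "param_adjust", **preset})

print(build_adjustment_request("joyful"))
# {"type": "param_adjust", "color_temp_k": 4500, "illuminance_lx": 300}
```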
204. Determine, from the special effect model library, a target special effect model matching the current video atmosphere.
For detailed descriptions of steps 201 and 204, please refer to steps 101 and 102 in Example one; they are not repeated here.
205. Acquire a historical browsing record of an online video platform used by the wearing user of the wearable device.
206. Determine, according to the historical browsing record, a target scene special effect from among the scene special effects corresponding to the target special effect model.
As an optional implementation manner, in the embodiments of the present application, the following steps may also be performed after step 206:
acquiring physiological parameters of the video object, the physiological parameters including at least blood glucose, blood pressure, body temperature, and respiratory rate;
judging whether the physiological parameters of the video object are within the parameter ranges corresponding to the current video atmosphere type;
if the physiological parameters are within the parameter ranges corresponding to the current video atmosphere type, detecting whether a sending instruction for the target scene special effect is received;
if the sending instruction is received, packaging the target scene special effect to obtain a file package;
and sending the file package to the video object, so that the target scene special effect is displayed on the video object's call interface.
By implementing this method, the mood of the video object can be improved by means of the target scene special effect.
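The packaging-and-sending branch can be sketched as below. The archive layout and the transport callback are assumptions; the text requires only that a file package reach the video object's call interface.

```python
import io
import zipfile

def package_effect(effect_name: str, effect_payload: bytes) -> bytes:
    """Bundle the target scene special effect into an in-memory package."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr(f"{effect_name}/effect.bin", effect_payload)
    return buf.getvalue()

def send_to_video_object(package: bytes, send) -> None:
    """`send` stands in for the call channel, which the text leaves open."""
    send(package)

pkg = package_effect("confetti", b"\x00\x01\x02")
send_to_video_object(pkg, lambda p: print(len(p), "bytes sent to video object"))
```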
207. Control the target scene special effect to be displayed on the display screen of the wearable device.
The following describes steps 206 to 207 in detail by way of example:
Assume the wearing user of the wearable device is a child, the current video atmosphere type obtained in step 201 is joyful, and the target special effect model in step 204 is the special effect model in the library that indicates joy, containing scene special effects for a plurality of themes. The online video platform mentioned in step 205 is a cartoon video platform, and the historical browsing record is the child's cartoon-viewing record over a preset period; the period may last one week, one month, or one quarter, and its end point may be the current moment. On this basis, determining the target scene special effect from among the scene special effects corresponding to the target special effect model according to the historical browsing record includes: acquiring, from the historical browsing record, the cartoon video watched most frequently; determining a cartoon theme from the name of that video; determining, from the plurality of themes corresponding to the target special effect model, a target theme matching the cartoon theme; and taking the scene special effect corresponding to the target theme as the target scene special effect. By implementing this method, the target scene special effect is chosen from the child's cartoon-viewing record over the preset period, so it fits the child's interests more closely and makes the video call even more engaging.
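The selection flow of this example condenses to a few lines; the record fields, theme names, and effect names below are hypothetical.

```python
from collections import Counter

def pick_target_effect(history, effect_model):
    """history: (video_name, theme) pairs; effect_model: theme -> effect."""
    most_watched, _ = Counter(name for name, _ in history).most_common(1)[0]
    theme = dict(history)[most_watched]      # theme of the favourite cartoon
    return effect_model.get(theme)           # matching target scene effect

history = [("Adventure Cats", "animals")] * 5 + [("Space Pals", "space")] * 2
model = {"animals": "paw-print confetti", "space": "floating stars"}
print(pick_target_effect(history, model))    # -> paw-print confetti
```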
As an optional implementation manner, in this embodiment, the controlling the target scene special effect to be displayed on the display screen of the wearable device may include: detecting instruction information indicating a display area for the target scene special effect; and controlling the target scene special effect to be displayed in the area indicated by the instruction information. By implementing this method, the display area of the target scene special effect can be managed flexibly.
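The display-area control might reduce to a small lookup, as in the sketch below; the instruction vocabulary and the 454x454 screen geometry are assumptions.

```python
def region_from_instruction(instruction, screen_w=454, screen_h=454):
    """Map a named region to pixel bounds on an assumed square watch face."""
    half_h = screen_h // 2
    regions = {
        "top":    (0, 0, screen_w, half_h),
        "bottom": (0, half_h, screen_w, screen_h),
        "full":   (0, 0, screen_w, screen_h),
    }
    return regions[instruction]

print(region_from_instruction("top"))  # -> (0, 0, 454, 227)
```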
By implementing this method, changes in the video atmosphere are analyzed in real time during the call and different scene special effects are added accordingly. This makes the video call markedly more engaging and helps increase how often video calls take place; it also improves the accuracy with which the current video atmosphere type is determined, allows the scene special effect mode to be started efficiently, lets the lighting device in the wearing user's environment assist in enhancing or easing the mood, makes the target scene special effect fit a child's interests more closely, enables flexible control over the effect's display area, and can improve the video object's mood by means of the target scene special effect.
Example three
Referring to fig. 3, fig. 3 is a schematic flowchart of another video call method disclosed in the embodiment of the present application, and the video call method shown in fig. 3 may include the following steps:
For detailed descriptions of steps 301 to 307, please refer to steps 201 to 207 in Example two; they are not repeated here.
308. When the video call is terminated, detect whether a grouping request for the video object input by the wearing user of the wearable device is received; if so, execute steps 309 to 311; if not, end the flow.
309. Evaluate an intimacy index between the wearing user of the wearable device and the video object according to all the scene special effects displayed during the video call.
310. Set identification information for the social account of the video object according to the intimacy index.
311. Determine a target group matching the identification information from among the groups contained in the wearing user's social account, and add the video object's social account to the target group.
By executing steps 308 to 311, the video object is grouped automatically according to all the scene special effects displayed during the call, which helps manage the social account of the wearing user of the wearable device.
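Steps 309 to 311 can be pictured as a weighted tally over the shown effects followed by a threshold lookup; the weights, thresholds, and group names are invented for illustration.

```python
# Assumed scoring table and grouping thresholds.
EFFECT_WEIGHTS = {"joyful": 2, "longing": 3, "tense": -1}
GROUPS = [(6, "close friends"), (3, "friends"), (0, "acquaintances")]

def intimacy_index(shown_effects):
    """Step 309: score all scene special effects shown during the call."""
    return sum(EFFECT_WEIGHTS.get(e, 0) for e in shown_effects)

def target_group(index):
    """Steps 310-311: map the index to the matching group."""
    for threshold, name in GROUPS:
        if index >= threshold:
            return name
    return "acquaintances"

idx = intimacy_index(["joyful", "joyful", "longing"])
print(idx, target_group(idx))  # -> 7 close friends
```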
It should be noted that, in this embodiment of the application, the grouping for the video object may be determined according to all scene special effects displayed in the video call process, may also be manually selected and determined by a wearing user of the wearable device, and may also be determined according to grouping information sent by a terminal device associated with the wearable device, which is not limited in this embodiment of the application.
The case in which the grouping of the video object is determined according to grouping information sent by a terminal device associated with the wearable device is detailed below:
In the embodiments of the application, if the wearing user of the wearable device is a young child and the terminal device associated with the wearable device is a parent's terminal, the following steps may further be performed:
when the wearing user's social account receives an account-adding request, acquiring the profile information of the requesting party;
sending the profile information to the parent terminal associated with the wearable device;
when an adding instruction fed back by the parent terminal is received, judging whether the adding instruction carries grouping information for the requesting party;
and if it does, adding the requesting party and grouping it as indicated by the grouping information.
When the wearing user of the wearable device is a young child, implementing this method lets parents monitor the child's social circle in real time and protects the child from harmful social contact.
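The parent-approval sequence can be sketched as a single handler, with the message shapes and the transport left abstract; the text defines only the request/feedback order.

```python
def handle_add_request(profile, ask_parent, add_contact, assign_group):
    """Forward the requester's profile to the parent terminal, then act
    on the returned instruction (field names are assumptions)."""
    instruction = ask_parent(profile)        # blocks until the parent replies
    if not instruction.get("approved"):
        return False
    add_contact(profile["account_id"])
    if "group" in instruction:               # optional grouping information
        assign_group(profile["account_id"], instruction["group"])
    return True

ok = handle_add_request(
    {"account_id": "u123", "nickname": "Lee"},
    ask_parent=lambda p: {"approved": True, "group": "classmates"},
    add_contact=lambda acc: print("added", acc),
    assign_group=lambda acc, grp: print(acc, "->", grp),
)
print("accepted" if ok else "rejected")
```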
By implementing this method, changes in the video atmosphere are analyzed in real time during the call and different scene special effects are added accordingly. This makes the video call markedly more engaging and helps increase how often video calls take place; it also improves the accuracy with which the current video atmosphere type is determined, allows the scene special effect mode to be started efficiently, lets the lighting device in the wearing user's environment assist in enhancing or easing the mood, makes the target scene special effect fit a child's interests more closely, enables flexible control over the effect's display area, can improve the video object's mood by means of the target scene special effect, helps manage the social account of the wearing user, and protects young children from harmful social contact.
Example four
Referring to fig. 4, fig. 4 is a schematic view of a wearable device according to an embodiment of the present disclosure. As shown in fig. 4, the wearable device may include:
the analysis unit 401 is configured to comprehensively analyze image information and voice information of a wearing user of the wearable device and a video object to obtain a current video atmosphere type in a process of a video call between the wearable device and the video object.
Optionally, the analysis unit 401 is further configured to judge, before the image information and voice information of the wearing user and the video object are comprehensively analyzed to obtain the current video atmosphere type, whether an arm-swing action input by the wearing user of the wearable device is received; if the arm-swing action is received, acquire its direction and force; judge whether the direction is a preset direction and whether the force is greater than a preset force; and when the direction is the preset direction and the force is greater than the preset force, control the scene special effect mode to start and trigger the comprehensive analysis of the image information and voice information of the wearing user and the video object to obtain the current video atmosphere type. By implementing this method, the scene special effect mode of the video call is started efficiently by detecting the wearing user's arm-swing action.
It should be noted that, in the embodiments of the present application, the scene special effect mode of the video call may also be started by pressing a virtual button on the video call interface instead of by an arm swing; this is not limited in the embodiments of the present application.
In the embodiments of the present application, the video atmospheres preset on the wearable device may include tension, longing, joy, and the like, and each video atmosphere type is associated with a plurality of facial features and a plurality of keywords. Based on this description, the manner in which the analysis unit 401 comprehensively analyzes, during a video call between the wearable device and a video object, image information and voice information of the wearing user and of the video object to obtain the current video atmosphere type is specifically: the analysis unit 401 is configured to analyze, during the video call, image information of the wearing user and of the video object to obtain facial features; analyze voice information of the wearing user and of the video object and extract words that indicate emotion; and determine the current video atmosphere type from the preset video atmospheres according to the facial features and the words that indicate emotion.
Optionally, in the embodiments of the application, the analysis unit 401 is further configured to acquire, after the current video atmosphere type is determined from the preset video atmospheres according to the facial features and the words that indicate emotion, physiological parameters of the wearing user of the wearable device, the physiological parameters including at least blood glucose, blood pressure, body temperature, and respiratory rate; judge whether the physiological parameters of the wearing user are within the parameter ranges corresponding to the current video atmosphere type; and, when they are, trigger the determining unit 402 to perform the operation of determining the target special effect model matching the current video atmosphere from the special effect model library. By this method, the current video atmosphere type is determined jointly from the facial features, the words that indicate emotion, and the wearing user's physiological parameters, improving the accuracy with which it is determined.
A determining unit 402, configured to determine, from the special effect model library, a target special effect model matching the current video atmosphere.
And a display unit 403, configured to control the scene special effect corresponding to the target special effect model to be displayed on a display screen of the wearable device, so as to enhance or ease the mood.
By implementing the above wearable device, changes in the video atmosphere are analyzed in real time during the call and different scene special effects are added accordingly, which makes the video call markedly more engaging, helps increase how often video calls take place, improves the accuracy with which the current video atmosphere type is determined, and allows the scene special effect mode to be started efficiently.
Example five
Referring to fig. 5, fig. 5 is a schematic view of another wearable device disclosed in the embodiments of the present application. In the wearable device shown in fig. 5, the determining unit 402 may be further configured to determine, after the analysis unit 401 comprehensively analyzes the image information and voice information of the wearing user and the video object during the video call and obtains the current video atmosphere type, a target color temperature and a target illuminance for a lighting device in the environment where the wearing user of the wearable device is located according to the current video atmosphere type.
The wearable device may further include:
the sending unit 404 sends a parameter adjustment request carrying a target color temperature and a target illumination to the lighting device in the environment where the wearing user is located, so that the lighting device adjusts the color temperature to be the target color temperature and adjusts the illumination to be the target illumination.
Optionally, in this embodiment of the application, the wearable device may further include:
an obtaining unit 405, configured to obtain a historical browsing record of an online video platform of a wearing user of the wearable device after the determining unit 402 determines the target special effect model matched with the current video atmosphere from the special effect model library; and determining a target scene special effect in the scene special effects corresponding to the target special effect model according to the historical browsing record.
The manner in which the display unit 403 controls the scene special effect corresponding to the target special effect model to be displayed on the display screen of the wearable device is specifically:
the display unit 403 is configured to control the target scene special effect to be displayed on the display screen of the wearable device.
Further optionally, the manner in which the display unit 403 controls the target scene special effect to be displayed on the display screen of the wearable device is specifically: the display unit 403 is configured to detect instruction information indicating a display area for the target scene special effect, and to control the target scene special effect to be displayed in the area indicated by the instruction information. By implementing this method, the display area of the target scene special effect can be managed flexibly.
As an optional implementation manner, in the embodiments of the application, the obtaining unit 405 is further configured to acquire, after the target scene special effect is determined from among the scene special effects corresponding to the target special effect model according to the historical browsing record, physiological parameters of the video object, the physiological parameters including at least blood glucose, blood pressure, body temperature, and respiratory rate; judge whether the physiological parameters of the video object are within the parameter ranges corresponding to the current video atmosphere type; if they are, detect whether a sending instruction for the target scene special effect is received; if the sending instruction is received, package the target scene special effect to obtain a file package; and send the file package to the video object, so that the target scene special effect is displayed on the video object's call interface. By implementing this method, the mood of the video object can be improved by means of the target scene special effect.
In the embodiments of the application, for the description of the obtaining unit 405, please refer to the example in Example two, which is not repeated here. Based on that description, the manner in which the obtaining unit 405 determines the target scene special effect from among the scene special effects corresponding to the target special effect model according to the historical browsing record is specifically:
the obtaining unit 405 is configured to acquire, from the historical browsing record, the cartoon video watched most frequently; determine a cartoon theme from the name of that video; determine, from the plurality of themes corresponding to the target special effect model, a target theme matching the cartoon theme; and take the scene special effect corresponding to the target theme as the target scene special effect. By implementing this method, the target scene special effect is chosen from the child's cartoon-viewing record over the preset period, so it fits the child's interests more closely and makes the video call even more engaging.
By implementing the above wearable device, changes in the video atmosphere are analyzed in real time during the call and different scene special effects are added accordingly. This makes the video call markedly more engaging and helps increase how often video calls take place; it also improves the accuracy with which the current video atmosphere type is determined, allows the scene special effect mode to be started efficiently, lets the lighting device in the wearing user's environment assist in enhancing or easing the mood, makes the target scene special effect fit a child's interests more closely, enables flexible control over the effect's display area, and can improve the video object's mood by means of the target scene special effect.
Example six
Referring to fig. 6, fig. 6 is a schematic block diagram of yet another wearable device disclosed in the embodiments of the present application. The wearable device shown in fig. 6 is obtained by further optimizing the wearable device shown in fig. 5, and may further include:
a detecting unit 406, configured to detect whether a grouping request for the video object input by a wearing user of the wearable device is received when the video call is terminated.
An evaluation unit 407, configured to evaluate, when receiving the grouping request, an affinity index between a wearing user of the wearable device and the video object according to all scene special effects displayed in a video call process; and setting identification information of the social account of the video object according to the intimacy index.
A grouping unit 408, configured to determine a target group matching the identification information from the group included in the social account of the wearing user of the wearable device, and add the social account of the video object to the target group.
The grouping unit 408 thus groups the video object automatically according to all the scene special effects displayed during the call, which helps manage the social account of the wearing user of the wearable device.
It should be noted that, in this embodiment of the application, the grouping for the video object may be determined according to all scene special effects displayed in the video call process, may also be manually selected and determined by a wearing user of the wearable device, and may also be determined according to grouping information sent by a terminal device associated with the wearable device, which is not limited in this embodiment of the application.
The case in which the grouping of the video object is determined according to grouping information sent by a terminal device associated with the wearable device is detailed below:
In the embodiments of the application, if the wearing user of the wearable device is a young child and the terminal device associated with the wearable device is a parent's terminal, the grouping unit 408 is further configured to acquire the profile information of the requesting party when the wearing user's social account receives an account-adding request; send the profile information to the parent terminal associated with the wearable device; when an adding instruction fed back by the parent terminal is received, judge whether the adding instruction carries grouping information for the requesting party; and if it does, add the requesting party and group it as indicated by the grouping information. When the wearing user is a young child, implementing this method lets parents monitor the child's social circle in real time and protects the child from harmful social contact.
By implementing the above wearable device, changes in the video atmosphere are analyzed in real time during the call and different scene special effects are added accordingly. This makes the video call markedly more engaging and helps increase how often video calls take place; it also improves the accuracy with which the current video atmosphere type is determined, allows the scene special effect mode to be started efficiently, lets the lighting device in the wearing user's environment assist in enhancing or easing the mood, makes the target scene special effect fit a child's interests more closely, enables flexible control over the effect's display area, can improve the video object's mood by means of the target scene special effect, helps manage the social account of the wearing user, and protects young children from harmful social contact.
Referring to fig. 7, fig. 7 is a schematic block diagram of another wearable device disclosed in the embodiments of the present application. As shown in fig. 7, the wearable device may include:
a memory 701 storing executable program code;
A processor 702 coupled to the memory;
the processor 702 calls the executable program code stored in the memory 701 to execute the steps of the video call method described in any one of fig. 1 to 3.
It should be noted that, in the embodiments of the application, the wearable device shown in fig. 7 may further include components not shown, such as a speaker module, a light projection module, a battery module, a wireless communication module (such as a mobile communication module, a Wi-Fi module, or a Bluetooth module), a sensor module (such as a proximity sensor), an input module (such as a microphone or keys), and a user interface module (such as a charging interface, an external power supply interface, a card slot, or a wired headset interface).
The embodiments of the application disclose a computer-readable storage medium on which computer instructions are stored; when executed, the computer instructions cause a computer to perform the steps of the video call method described in any one of fig. 1 to 3.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by program instructions, and the program may be stored in a computer-readable storage medium. The storage medium includes read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
The video call method and the wearable device disclosed in the embodiments of the application are described in detail above. Specific examples are used herein to explain the principles and implementation of the application, and the description of the above embodiments is only meant to help understand the method and its core idea. Meanwhile, for those of ordinary skill in the art, the specific implementation and the application scope may vary according to the idea of the application. In summary, the content of this specification should not be construed as limiting the application.

Claims (10)

1. A video call method, comprising:
in the process of a video call between a wearable device and a video object, comprehensively analyzing image information and voice information of a wearing user of the wearable device and of the video object to obtain a current video atmosphere type;
determining, from a special effect model library, a target special effect model matching the current video atmosphere;
and controlling the scene special effect corresponding to the target special effect model to be displayed on a display screen of the wearable device, so as to enhance or ease the mood.
2. The method of claim 1, wherein after the image information and voice information of the wearing user of the wearable device and the video object are comprehensively analyzed during the video call and the current video atmosphere type is obtained, the method further comprises:
determining, according to the current video atmosphere type, a target color temperature and a target illuminance for a lighting device in the environment where the wearing user is located;
and sending a parameter adjustment request carrying the target color temperature and the target illuminance to the lighting device, so that the lighting device adjusts its color temperature to the target color temperature and its illuminance to the target illuminance.
3. The method of claim 1 or 2, wherein after the target special effect model matching the current video atmosphere is determined from the special effect model library, the method further comprises:
acquiring a historical browsing record of an online video platform used by the wearing user;
determining, according to the historical browsing record, a target scene special effect from among the scene special effects corresponding to the target special effect model;
wherein the controlling the scene special effect corresponding to the target special effect model to be displayed on a display screen of the wearable device includes:
controlling the target scene special effect to be displayed on the display screen of the wearable device.
4. The method of claim 3, wherein the controlling the target scene special effect to be displayed on the display screen of the wearable device comprises:
detecting instruction information indicating a display area for the target scene special effect;
and controlling the target scene special effect to be displayed on the display screen of the wearable device in the area indicated by the instruction information.
5. The method of claim 1, further comprising:
detecting, when the video call is terminated, whether a grouping request for the video object input by the wearing user is received;
if the grouping request is received, evaluating an intimacy index between the wearing user and the video object according to all the scene special effects displayed during the video call;
setting identification information for the social account of the video object according to the intimacy index;
and determining a target group matching the identification information from among the groups contained in the wearing user's social account, and adding the video object's social account to the target group.
6. A wearable device, comprising:
the analysis unit is used for comprehensively analyzing, in the process of a video call between the wearable device and a video object, image information and voice information of a wearing user of the wearable device and of the video object to obtain a current video atmosphere type;
the determining unit is used for determining, from a special effect model library, a target special effect model matching the current video atmosphere;
and the display unit is used for controlling the scene special effect corresponding to the target special effect model to be displayed on a display screen of the wearable device, so as to enhance or ease the mood.
7. The wearable device of claim 6, wherein the determining unit is further configured to determine, after the analysis unit comprehensively analyzes the image information and voice information of the wearing user and the video object during the video call and obtains the current video atmosphere type, a target color temperature and a target illuminance for a lighting device in the environment where the wearing user is located according to the current video atmosphere type;
the wearable device further comprises:
and the sending unit is used for sending a parameter adjustment request carrying the target color temperature and the target illuminance to the lighting device, so that the lighting device adjusts its color temperature to the target color temperature and its illuminance to the target illuminance.
8. The wearable device of claim 6 or 7, further comprising:
the obtaining unit is used for obtaining, after the determining unit determines the target special effect model matching the current video atmosphere from the special effect model library, a historical browsing record of an online video platform used by the wearing user, and for determining, according to the historical browsing record, a target scene special effect from among the scene special effects corresponding to the target special effect model;
wherein the manner in which the display unit controls the scene special effect corresponding to the target special effect model to be displayed on the display screen of the wearable device is specifically:
the display unit is used for controlling the target scene special effect to be displayed on the display screen of the wearable device.
9. The wearable device of claim 8, wherein the manner in which the display unit controls the target scene special effect to be displayed on the display screen of the wearable device is specifically:
the display unit is configured to detect instruction information indicating a display area for the target scene special effect, and to control the target scene special effect to be displayed on the display screen of the wearable device as directed by the instruction information.
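For claim 9 (and the matching step of claim 4), the instruction information might carry a normalized display area; the fields of the structure below are assumptions about what such instruction information could contain.

```python
# Illustrative sketch for claim 9: instruction information names a display
# area, and the effect is drawn inside that area. The dataclass fields and
# the example screen size are assumptions.

from dataclasses import dataclass

@dataclass
class InstructionInfo:
    effect_name: str
    x: float       # left edge, as a fraction of screen width
    y: float       # top edge, as a fraction of screen height
    width: float   # area width, as a fraction of screen width
    height: float  # area height, as a fraction of screen height

def display_in_area(instruction: InstructionInfo, screen_w: int, screen_h: int):
    """Convert the normalized area to pixels and (notionally) draw the effect."""
    origin = (int(instruction.x * screen_w), int(instruction.y * screen_h))
    size = (int(instruction.width * screen_w), int(instruction.height * screen_h))
    print(f"drawing {instruction.effect_name} at {origin} with size {size}")

display_in_area(InstructionInfo("petals", 0.6, 0.0, 0.4, 0.3), 454, 454)
```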
10. The wearable device of claim 6, further comprising:
a detection unit, configured to detect, when the video call is terminated, whether a grouping request for the video object input by the wearing user has been received;
an evaluation unit, configured to evaluate, when the grouping request is received, an intimacy index between the wearing user and the video object according to all scene special effects displayed during the video call, and to set identification information for a social account of the video object according to the intimacy index;
and a grouping unit, configured to determine a target group matching the identification information from among the groups contained in the social account of the wearing user, and to add the social account of the video object to the target group.
CN201911154489.7A 2019-11-22 2019-11-22 Video call method and wearable device Active CN111176440B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911154489.7A CN111176440B (en) 2019-11-22 2019-11-22 Video call method and wearable device
CN202410083247.8A CN117908677A (en) 2019-11-22 2019-11-22 Video call method and wearable device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202410083247.8A Division CN117908677A (en) 2019-11-22 2019-11-22 Video call method and wearable device

Publications (2)

Publication Number Publication Date
CN111176440A (en) 2020-05-19
CN111176440B CN111176440B (en) 2024-03-19

Family

ID=70655380

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202410083247.8A Pending CN117908677A (en) 2019-11-22 2019-11-22 Video call method and wearable device
CN201911154489.7A Active CN111176440B (en) 2019-11-22 2019-11-22 Video call method and wearable device

Country Status (1)

Country Link
CN (2) CN117908677A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112327720A (en) * 2020-11-20 2021-02-05 北京瞰瞰科技有限公司 Atmosphere management method and system
CN112565913A (en) * 2020-11-30 2021-03-26 维沃移动通信有限公司 Video call method and device and electronic equipment
CN114422742A (en) * 2022-01-28 2022-04-29 深圳市雷鸟网络传媒有限公司 Call atmosphere improving method and device, intelligent device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102082870A (en) * 2010-12-28 2011-06-01 东莞宇龙通信科技有限公司 Method and device for managing contacts as well as mobile terminal
CN104703043A (en) * 2015-03-26 2015-06-10 努比亚技术有限公司 Video special effect adding method and device
CN108052670A (en) * 2017-12-29 2018-05-18 北京奇虎科技有限公司 A kind of recommendation method and device of camera special effect
CN108401129A (en) * 2018-03-22 2018-08-14 广东小天才科技有限公司 Video call method, device, terminal and storage medium based on wearable device
CN108882454A (en) * 2018-07-20 2018-11-23 佛山科学技术学院 A kind of intelligent sound identification interaction means of illumination and system based on emotion judgment
CN109933666A (en) * 2019-03-18 2019-06-25 西安电子科技大学 A kind of good friend's automatic classification method, device, computer equipment and storage medium
CN109996026A (en) * 2019-04-23 2019-07-09 广东小天才科技有限公司 Video special effect interaction method, device, equipment and medium based on wearable equipment

Also Published As

Publication number Publication date
CN111176440B (en) 2024-03-19
CN117908677A (en) 2024-04-19

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant