CN108134913B - Intelligent adjustment method for video call and wearable device - Google Patents


Info

Publication number: CN108134913B
Application number: CN201711396679.0A
Authority: CN (China)
Prior art keywords: wearable device, video call, party, front camera, image
Legal status: Active (the status listed is an assumption, not a legal conclusion)
Other versions: CN108134913A (original publication, in Chinese)
Inventor: 杨婷婷
Original and current assignee: Guangdong Genius Technology Co Ltd
Application filed by Guangdong Genius Technology Co Ltd; priority to CN201711396679.0A; application published as CN108134913A; application granted and published as CN108134913B

Classifications

    • H04N7/141 Television systems; systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142 Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N7/147 Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • G06V40/161 Recognition of human faces in image or video data: detection; localisation; normalisation

Abstract

An intelligent adjustment method for video calls, and a wearable device. When a child user makes a video call with a parent user, the wearable device opens its front camera to record video images of the child user in real time and detects whether those images contain the child user's complete facial information. If not, the device turns on the LED lamp in the direction in which it needs to be moved, based on where the partial facial information sits in the video image; the wearer then moves the device in the direction of the lit LED, and the lit LED is turned off once the device detects that the video image contains the child user's complete facial information. By implementing the embodiments of the invention, when a child user and a parent user make a video call and the parent user's video image is displayed full-screen on the wearable device, the child user's video image can still be presented completely on the parent user's device, which improves the effect of the video call.

Description

Intelligent adjustment method for video call and wearable device
Technical Field
The invention relates to the technical field of electronic devices, and in particular to an intelligent adjustment method for video calls and a wearable device.
Background
With the development of electronic device technology, more and more children's watches offer a video call function. When a child user uses such a watch to make a video call with a parent user, the small watch screen cannot comfortably show both parties' video at once, so usually only the parent user's video image is displayed on the watch. The child user therefore does not know whether their own video image is fully presented on the parent user's device, and can only move the watch by feel in the hope that the watch camera captures their whole face, which degrades the effect of the video call.
Disclosure of Invention
The embodiments of the invention disclose an intelligent adjustment method for video calls and a wearable device, which can improve the effect of a video call.
The first aspect of the embodiment of the invention discloses an intelligent adjustment method for video call, which comprises the following steps:
when the wearable device and a third-party device carry out video call, the wearable device outputs a third-party image acquired by the third-party device and transmits a local image shot by a front camera of the wearable device to the third-party device;
the wearable device detects whether the local image contains a human face image;
if yes, the wearable device judges whether the face image of the person is complete;
if the face image is not complete, the wearable device outputs guidance information, derived from the face image, for moving the wearable device, so that the front camera captures a local image containing the complete face image of the person.
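The four steps above can be sketched as one iteration of a control loop. This is an illustrative Python skeleton only, not the patent's implementation; the injected callables (`detect_face`, `is_complete`, `guide`) are hypothetical stand-ins for the detection, completeness check, and guidance mechanisms the description elaborates later.

```python
def adjust_once(local_image, detect_face, is_complete, guide):
    """One iteration of the claimed adjustment loop.

    detect_face(image) -> face bounding box, or None if no face found
    is_complete(box)   -> True if the whole face is inside the frame
    guide(box)         -> emit guidance (e.g. light an LED)
    """
    box = detect_face(local_image)
    if box is None:
        return "no_face"   # keep scanning new frames
    if is_complete(box):
        return "ok"        # face fully in frame, nothing to do
    guide(box)             # tell the wearer which way to move
    return "guided"
```

In a real device this function would run on each frame delivered by the front camera until it returns "ok".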
As an optional implementation manner, in the first aspect of this embodiment of the present invention, the method further includes:
before the wearable device and the third-party device carry out video call, the wearable device detects whether an input opening instruction for carrying out video call with the third-party device is received;
if yes, the wearable device controls an infrared sensor in the front camera to detect whether the front camera is shielded;
if the front camera is shielded, the wearable device outputs prompt information for prompting not to shield the front camera;
and if the front camera is not shielded, the wearable device responds to the opening instruction and opens the video call function in the wearable device so as to enable the wearable device to carry out video call with the third-party device.
As an optional implementation manner, in the first aspect of this embodiment of the present invention, the method further includes:
after the wearable device detects that an input opening instruction for video call with the third-party device is received, the wearable device controls an infrared sensor in the wearable device to detect whether the wearable device is in a worn state;
if not, the wearable device sends preset information used for indicating that the wearable device is not worn to the third-party device, and if so, the infrared sensor in the front camera is controlled to detect whether the front camera is shielded or not.
As an optional implementation manner, in the first aspect of the embodiments of the present invention, the outputting, by the wearable device, guidance information for guiding the wearable device to move according to the image of the face of the person includes:
the wearable device determines a center coordinate position of the person face image as a first center coordinate position;
the wearable device determines a center coordinate position of the local image as a second center coordinate position;
the wearable device determines a direction in which the first center coordinate position points to the second center coordinate position as a target direction;
the wearable device determines any LED lamp pointed by the target direction from a plurality of LED lamps preset at the periphery of a screen of the wearable device as a target LED lamp, lights the target LED lamp and takes the lighted target LED lamp as guide information; wherein the target direction is a direction directing the wearable device to move.
As an optional implementation manner, in the first aspect of the embodiments of the present invention, the outputting, by the wearable device, guidance information for guiding the wearable device to move according to the image of the face of the person includes:
the wearable device determines a center coordinate position of the person face image as a first center coordinate position;
the wearable device determines a center coordinate position of the local image as a second center coordinate position;
the wearable device determines a direction in which the first center coordinate position points to the second center coordinate position as a target direction;
the wearable device determines a distance between the first center coordinate position and the second center coordinate position as a target distance;
the wearable device outputs direction information for directing the wearable device to move the target distance in the target direction.
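Both optional implementations above start from the same geometry: a vector from the first centre coordinate position (the face centre) to the second (the local-image centre). A minimal sketch in Python, with pixel coordinates assumed for illustration:

```python
import math

def target_direction_and_distance(face_center, frame_size):
    """Vector from the face centre to the local-image centre.

    face_center: (x, y) centre of the detected face region, in pixels.
    frame_size:  (width, height) of the local image.
    Returns ((dx, dy), distance): the target direction in which the
    device should move, and the target distance between the two centres.
    """
    fx, fy = face_center
    cx, cy = frame_size[0] / 2.0, frame_size[1] / 2.0
    dx, dy = cx - fx, cy - fy  # first centre position -> second centre position
    return (dx, dy), math.hypot(dx, dy)
```

The direction feeds the LED variant, the direction plus distance the voice/text variant.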
A second aspect of an embodiment of the present invention discloses a wearable device, including:
the first output unit is used for outputting a third-party image acquired by a third-party device when the wearable device and the third-party device carry out video call;
the transmission unit is used for transmitting the local image shot by the front camera of the wearable device to the third-party device;
a first detection unit configured to detect whether the local image includes a person face image;
a judging unit, configured to judge whether the personal face image is complete after the first detecting unit detects that the local image includes the personal face image;
and the second output unit is used for outputting guiding information for guiding the wearable device to move according to the facial image of the person after the judging unit judges that the facial image of the person is incomplete, so that the front camera shoots a local image containing a complete facial image of the person.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the wearable device further includes:
the second detection unit is used for detecting whether an input opening instruction for carrying out video call with the third-party equipment is received or not before the wearable equipment carries out video call with the third-party equipment;
the first control unit is used for controlling an infrared sensor in the front camera to detect whether the front camera is shielded or not after the second detection unit detects that an input starting instruction for carrying out video call with the third-party equipment is received;
the third output unit is used for outputting prompt information for prompting not to shield the front camera after the infrared sensor in the front camera is controlled by the first control unit to detect that the front camera is shielded;
and the response unit is used for responding to the starting instruction and starting a video call function in the wearable device after the first control unit controls the infrared sensor in the front camera to detect that the front camera is not shielded, so that the wearable device and the third-party device carry out video call.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the wearable device further includes:
the second control unit is used for controlling the infrared sensor in the wearable device to detect whether the wearable device is in a worn state or not after the second detection unit detects that an input opening instruction for carrying out video call with the third-party device is received;
a transmitting unit, configured to transmit preset information indicating that the wearable device is not worn to the third-party device after the second control unit controls the infrared sensor in the wearable device to detect that the wearable device is not worn;
the first control unit is specifically configured to control an infrared sensor in the front camera to detect whether the front camera is shielded or not after the second control unit controls the infrared sensor in the wearable device to detect that the wearable device is in a worn state.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the second output unit includes:
a first determining subunit configured to determine, as a first center coordinate position, a center coordinate position of the personal face image after the judging unit judges that the personal face image is incomplete;
the first determining subunit is further configured to determine a center coordinate position of the local image as a second center coordinate position;
the first determining subunit is further configured to determine, as a target direction, a direction in which the first center coordinate position points to the second center coordinate position;
the lighting sub-unit is used for determining any LED lamp pointed by the target direction from a plurality of LED lamps preset on the periphery of the screen of the wearable device as a target LED lamp, lighting the target LED lamp and using the lighted target LED lamp as guide information; wherein the target direction is a direction directing the wearable device to move.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the second output unit includes:
a second determining subunit configured to determine, as a first center coordinate position, a center coordinate position of the personal face image after the determining unit determines that the personal face image is incomplete;
the second determining subunit is further configured to determine a center coordinate position of the local image as a second center coordinate position;
the second determining subunit is further configured to determine, as a target direction, a direction in which the first center coordinate position points to the second center coordinate position;
the second determining subunit is further configured to determine a distance between the first center coordinate position and the second center coordinate position as a target distance;
an output subunit, configured to output guidance information for guiding the wearable device to move the target distance toward the target direction.
A third aspect of an embodiment of the present invention discloses a wearable device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the intelligent adjustment method for video call disclosed by the first aspect of the embodiment of the invention.
A fourth aspect of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program enables a computer to execute the method for intelligently adjusting a video call disclosed in the first aspect of the present invention.
A fifth aspect of an embodiment of the present invention discloses a computer program product, which, when running on a computer, enables the computer to execute the intelligent adjustment method for video calls disclosed in the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, when a child user uses the wearable device to carry out video call with a parent user, the wearable device can start the front camera to record the video image of the child user in real time, detect whether the video image contains the complete facial information of the child user, and if not, turn on the LED lamp in the direction in which the wearable device needs to move according to the position of the child facial information contained in the video image so as to remind the child user to move the wearable device according to the direction of the turned-on LED lamp, and turn off the LED lamp in the turned-on state until the wearable device detects that the video image contains the complete facial information of the child user. In summary, by implementing the embodiments of the present invention, when the child user and the parent user perform a video call and the video image of the parent user is displayed in a full screen in the wearable device, the video image of the child user can be completely displayed in the device of the parent user, so that the video call effect can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flowchart of an intelligent adjustment method for video calls according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another method for intelligently adjusting a video call according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of another method for intelligently adjusting a video call according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a wearable device disclosed in the embodiment of the invention;
FIG. 5 is a schematic structural diagram of another wearable device disclosed in the embodiments of the present invention;
FIG. 6 is a schematic structural diagram of another wearable device disclosed in the embodiments of the present invention;
FIG. 7 is a schematic structural diagram of another wearable device disclosed in the embodiments of the present invention;
fig. 8 is a schematic structural diagram of another wearable device disclosed in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses an intelligent adjustment method of a video call and wearable equipment, which can improve the effect of the video call. The following are detailed below.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an intelligent adjustment method for video calls according to an embodiment of the present invention. Wherein the wearable device wearer may be a child user and the third party may be a parent user. The intelligent adjustment method for video calls shown in fig. 1 may include the following steps:
101. when the wearable device and the third-party device carry out video call, the wearable device outputs a third-party image acquired by the third-party device, and transmits a local image shot by a front camera of the wearable device to the third-party device.
In the embodiment of the invention, when the video call function of the wearable device is on, the wearable device can acquire the third-party image output by the third-party device in real time and open its front camera; it can also output the third-party image in real time and transmit the local image captured by its front camera to the third-party device in real time. The wearable device may output the third-party image on its own screen. Alternatively, it may detect in real time whether its Bluetooth is connected; if so, it can identify the Bluetooth-connected electronic device, judge whether that device supports screen projection, and, after the user performs a screen-projection operation, project the third-party image to that device for output. As another optional implementation, after the wearable device detects an opening operation for the video call function, it may first detect whether a trusted device (e.g., a home personal computer, tablet computer, or similar personal terminal) exists within a preset range; if so, the wearable device may transfer the video call to the trusted device, giving a better call experience, and the user can continue the call through the trusted device's front camera or another of its cameras.
Therefore, executing step 101 displays the third-party image acquired by the third-party device as fully as possible, which improves the effect of the video call; step 101 also transmits the local image captured by the wearable device's front camera to the third-party device, realizing interaction between the wearable device and the third-party device.
102. The wearable device detects whether the local image contains a face image; if yes, step 103 is executed, and if not, step 102 is repeated.
In the embodiment of the invention, the wearable device not only transmits the local image captured by the front camera to the third-party device but also runs face detection on that image. The wearable device can detect a face in the local image with a Haar classifier; if a face is detected, it can then continuously check in real time whether that face is complete. If the wearable device detects no face in the local image (for example, no facial feature at all is present, in which case the device defaults to treating the image as containing no face), it continues real-time detection on the newest local images as they arrive and outputs a prompt informing the wearer that they are out of the shooting range. The wearable device can recognize facial features by locating facial feature points (eyes, mouth, nose, and so on); once the feature points of any facial part (e.g., the eyes; there may be several such points) are detected, the device treats the local image as containing a face. Executing step 102 therefore provides intelligent monitoring of whether the local image captured by the front camera during the video call contains a face image.
In the embodiment of the invention, the Haar classifier combines Haar features, the Integral Image method, AdaBoost, and a cascade. Haar-like features are a class of digital image features used for object recognition and were the basis of the first real-time face detection operator; their most important advantage is fast computation.
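A face detector of this kind can be built from the stock frontal-face Haar cascade shipped with OpenCV. This is an illustrative sketch assuming the opencv-python package, not the patent's own detector; the import is kept inside the function so that the `contains_face` helper works even where OpenCV is not installed.

```python
def detect_faces(gray_image):
    """Run OpenCV's stock frontal-face Haar cascade over a grayscale
    frame and return a list of (x, y, w, h) bounding boxes."""
    import cv2  # lazy import: only needed when actually detecting
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return list(cascade.detectMultiScale(
        gray_image, scaleFactor=1.1, minNeighbors=5))

def contains_face(boxes):
    """Step 102's test: does the local image contain a face at all?"""
    return len(boxes) > 0
```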
103. The wearable device judges whether the face image is complete; if not, step 104 is executed, and if so, the process ends.
In the embodiment of the invention, after detecting that the local image contains a face image, the wearable device can judge whether that face image is complete. For example, using the facial feature points (e.g., the eyes) detected in the local image, the wearable device can compare them with the complete facial feature points preset in the device: if the matching degree is below a preset threshold (e.g., 98%), the device treats the face image as incomplete; if the detected facial feature points match the preset complete facial feature points at or above the threshold, the device treats the face image as complete. Executing step 103 therefore provides intelligent face monitoring of the wearer during the video call through this integrity check.
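The patent's completeness test matches detected feature points against a stored template of complete facial feature points. For illustration only, a much simpler stand-in with a similar effect is a border test: a face bounding box that touches the edge of the frame is probably cut off. This heuristic is not the patent's method, just a sketch:

```python
def face_is_complete(box, frame_size, margin=2):
    """Heuristic completeness check: the face box must sit strictly
    inside the frame (with a small pixel margin), otherwise part of
    the face is likely outside the camera's field of view."""
    x, y, w, h = box
    fw, fh = frame_size
    return (x >= margin and y >= margin
            and x + w <= fw - margin and y + h <= fh - margin)
```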
104. The wearable device outputs guidance information, derived from the face image, for moving the wearable device, so that the front camera captures a local image containing the complete face image of the person.
In the embodiment of the present invention, after judging that the face image is incomplete, the wearable device can take the facial feature points detected in step 103 (for example, the eyes), obtain the coordinates of the eye centre point in the local image, and use the direction from the eye centre point towards the centre point of the local image as the direction in which the wearable device needs to move. The wearable device can then light the LED lamp corresponding to that direction (several LED lamps may be arranged around the screen) as the guidance information, or it can output the required direction of movement as voice or text. Executing step 104 therefore improves the effect of the video call by guiding the wearer to move the device, without having to show the local image on the wearable device's screen.
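With a target direction in hand, choosing which LED to light reduces to binning the direction's angle into sectors. A sketch assuming four LEDs evenly spaced around the screen, which is a hypothetical layout; the patent only specifies "a plurality of LED lamps" at the periphery:

```python
import math

def led_for_direction(dx, dy, led_count=4):
    """Index of the LED closest to direction (dx, dy).

    LED 0 is assumed at angle 0 (the +x direction), with indices
    increasing in the direction of increasing atan2 angle and the
    LEDs evenly spaced (an assumed layout, not from the patent)."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    sector = 2 * math.pi / led_count
    # shift by half a sector so each LED owns the arc centred on it
    return int((angle + sector / 2) // sector) % led_count
```

On real hardware the returned index would be mapped to a GPIO pin driving the corresponding lamp.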
As can be seen, by implementing the method described in fig. 1, the wearable device can display the third-party image acquired by the third-party device as fully as possible, improving the effect of the video call; it transmits the local image from its front camera to the third-party device, enabling interaction between the two devices; it intelligently monitors whether the local image captured during the call contains a face image; it monitors the wearer's face through the integrity check on the face image; and it outputs guidance for moving the device without showing the local image on its screen. In short, when the wearer (e.g., a child user) makes a video call with a third party (e.g., a parent user) and the parent user's video image is displayed full-screen on the wearable device, the child user's video image can still be presented completely on the parent user's device, improving the effect of the video call.
Example two
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating another method for intelligently adjusting a video call according to an embodiment of the present invention. Wherein the wearable device wearer may be a child user and the third party may be a parent user. The intelligent adjustment method for video calls shown in fig. 2 may include the following steps:
201. before the wearable device and the third-party device carry out video call, the wearable device detects whether an input starting instruction for carrying out video call with the third-party device is received, if so, step 202 is executed, and if not, the process is ended.
In the embodiment of the invention, when the video call is not yet started and the wearable device detects a start instruction for the video call function, it starts that function. The start instruction may be a preset legal voice command or a shortcut key press; the embodiments of the invention are not limited in this respect.
202. The wearable device controls an infrared sensor in the front camera to detect whether the front camera is shielded, if so, step 203 is executed, and if not, step 204 is executed.
In the embodiment of the invention, after the wearable device detects that an input start instruction for a video call with the third-party device has been received, the wearable device can control the infrared sensor in the front camera to detect whether the front camera is shielded. The sensor relies on the infrared-reflection principle: when part of the human body enters the infrared region, the infrared light emitted by the emitter tube is blocked by that body part and reflected back to the receiver tube, so the wearable device can judge from the sensor's reading whether the front camera is shielded.
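The shielding test itself is just a threshold on the reflected-infrared reading: a strong reflection means something is close enough to cover the lens. A sketch with a hypothetical raw-intensity scale and threshold, since the patent does not specify units:

```python
def camera_shielded(ir_reading, threshold=800):
    """True if the reflected-IR intensity (hypothetical raw units from
    the sensor next to the front camera) indicates an object close
    enough to shield the lens."""
    return ir_reading >= threshold
```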
203. The wearable device outputs prompt information prompting the user not to shield the front camera.
In the embodiment of the invention, after the infrared sensor in the front camera detects that the camera is shielded, the wearable device can output prompt information telling the user not to shield the front camera, so that the camera can capture local images normally during the video call and the call quality is preserved. Executing step 203 therefore improves the user experience.
204. The wearable device responds to the start instruction and starts the video call function in the wearable device, so that the wearable device and the third-party device can carry out a video call.
In the embodiment of the present invention, after the wearable device controls the infrared sensor in the front-facing camera to detect that the front-facing camera is not shielded, the wearable device may respond to the start instruction and start the video call function in the wearable device to trigger execution of step 205.
In the embodiment of the present invention, the method for intelligently adjusting a video call further includes step 205 to step 208, and for the description of step 205 to step 208, please refer to the detailed description of step 101 to step 104 in the first embodiment, which is not described again in the embodiment of the present invention.
As an alternative embodiment, when a third party (e.g., a parent user) wants to ensure the safety of a current wearable device wearer (e.g., a child user), that is, when the wearable device receives a video call start request sent by a third party device of the parent user (where the wearable device may set the parent third party device as a legal third party device in advance), the wearable device starts a video call and prompts the child user that the video call is started. This alternative embodiment can enhance the parental control over the safety of the child.
As another optional implementation manner, no matter whether the wearable device is in a video call state, the front camera in the wearable device may acquire the local image in real time. When the wearable device detects that the person face image in the local image is not the face image of the child user (the wearable device may enter the face image of the child user in advance, so that it can determine whether a person face image in the local image belongs to the child user), the wearable device sends the person face image and warning information indicating that the child user is in an unsafe environment to the third-party device of the parent user, thereby improving the personal safety of the child user.
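The safety check above can be sketched as follows. Representing the enrolled child face and a captured face as feature vectors, comparing them by Euclidean distance, and the 0.3 threshold are all simplifying assumptions for illustration; the patent does not specify the recognition method:

```python
def is_enrolled_child(captured, enrolled, max_distance=0.3):
    # Euclidean distance between a captured face feature vector and the
    # child's pre-enrolled feature vector; the vector format, the metric
    # and the 0.3 threshold are illustrative assumptions.
    dist = sum((a - b) ** 2 for a, b in zip(captured, enrolled)) ** 0.5
    return dist <= max_distance

def check_wearer(captured, enrolled, send_warning):
    # When the captured face is not the enrolled child's, send the face
    # image together with unsafe-environment warning information to the
    # parent user's third-party device.
    if not is_enrolled_child(captured, enrolled):
        send_warning(captured)
        return False
    return True
```

`send_warning` stands in for the transmission path to the parent's device; any real implementation would attach the actual captured frame rather than the feature vector.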
As can be seen, by implementing the method described in fig. 2, the wearable device can display the third-party image as fully as possible by outputting the third-party image acquired by the third-party device, so that the effect of the video call can be improved; the wearable device can also transmit the local image shot by its front camera to the third-party device, thereby realizing interaction between the wearable device and the third-party device; the wearable device can also intelligently monitor, through person face image detection on the local image, whether the local image shot by the front camera during the video call contains a person face image; the wearable device can also intelligently monitor the face of the wearable device wearer during the video call through the integrity detection of the person face image; the wearable device can also output, without outputting the local image on its screen, guide information for guiding the wearable device to move according to the person face image, so that the effect of the video call is improved; and the wearable device can also improve the user experience by outputting prompt information for prompting the user not to block the front camera. Therefore, implementing the method described in fig. 2 enables the video image of the child user to be completely presented on the device of the parent user while the video image of the parent user is displayed full-screen on the wearable device when the wearable device wearer (e.g., a child user) makes a video call with a third party (e.g., a parent user), further improving the effect of the video call.
EXAMPLE III
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating another method for intelligently adjusting a video call according to an embodiment of the present invention. Wherein the wearable device wearer may be a child user and the third party may be a parent user. The intelligent adjustment method for video calls shown in fig. 3 may include the following steps:
301. Before the wearable device and the third-party device carry out a video call, the wearable device detects whether an input start instruction for carrying out a video call with the third-party device is received; if so, step 302 is executed, and if not, the process is ended.
302. The wearable device controls an infrared sensor in the wearable device to detect whether the wearable device is in a worn state, if not, step 303 is executed, and if yes, step 304 is executed.
In the embodiment of the present invention, after the wearable device detects that the input start instruction for a video call with the third-party device is received, the wearable device may control an infrared sensor in the wearable device (the infrared sensor being arranged on the side of the wearable device close to the skin of the child user) to detect whether the child user is wearing the wearable device. The infrared sensor may perform measurement by using the physical properties of infrared rays, and may use a thermosensitive detection element or a photoelectric detection element, which is not limited in the embodiment of the present invention.
303. The wearable device sends preset information used for indicating that the wearable device is not worn to the third-party device.
In the embodiment of the present invention, when the wearable device is not in the worn state, the wearable device sends preset information indicating that the wearable device is not worn to the third-party device of the parent user. Therefore, executing step 303 can protect the personal safety of the child user and strengthen the parents' control over the child by sending the preset information indicating that the wearable device is not worn to the third-party device.
304. The wearable device controls an infrared sensor in the front camera to detect whether the front camera is shielded, if so, step 305 is executed, and if not, step 306 is executed.
305. The wearable device outputs prompt information for prompting the user not to block the front camera.
306. The wearable device responds to the start instruction and starts the video call function in the wearable device, so that the wearable device and the third-party device can carry out a video call.
In the embodiment of the present invention, after the wearable device controls the infrared sensor in the front-facing camera to detect that the front-facing camera is not shielded, the wearable device may respond to the start instruction and start the video call function in the wearable device, so as to trigger execution of step 307.
In the embodiment of the present invention, the method for intelligently adjusting a video call further includes steps 307 to 310, and for the description of steps 307 to 310, please refer to the detailed description of steps 101 to 104 in the first embodiment, which is not described again in the embodiment of the present invention.
In the embodiment of the present invention, when the wearable device starts a video call, the wearable device may perform H.264 hard coding on the YUV420 frames of the local video (where the local video includes at least one local image) shot by the front camera by using MediaRecorder and then send the coded data; it may also call a local H.264 coding library through JNI to code a frame of YUV420 data before sending it; it may also compress a frame of data with a GZIP library before sending it; and it may also compress a frame of data in the JPEG format and transmit it.
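Of the transmission paths listed above, the GZIP one is the easiest to sketch with standard-library calls; treating a frame as raw YUV420 bytes is an assumption. On-device, H.264 hard coding via MediaRecorder or a native coding library called through JNI would normally be preferred:

```python
import gzip

def compress_frame(frame_bytes):
    # GZIP path from the embodiment: compress a raw frame before
    # sending it to the third-party device. `frame_bytes` is assumed to
    # be the raw YUV420 payload of one frame.
    return gzip.compress(frame_bytes)

def decompress_frame(payload):
    # The receiving side restores the original frame losslessly.
    return gzip.decompress(payload)
```

Unlike the JPEG path, GZIP is lossless, so the round trip reproduces the frame exactly; its compression ratio on camera noise is correspondingly worse.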
As an alternative embodiment, the wearable device outputs guide information for guiding the wearable device to move according to the image of the face of the person, including:
the wearable device determines the central coordinate position of the person face image as a first central coordinate position;
the wearable device determines a central coordinate position of the local image as a second central coordinate position;
the wearable device determines a direction in which the first center coordinate position points to the second center coordinate position as a target direction;
the wearable device determines, from a plurality of LED lamps preset at the periphery of the screen of the wearable device, the LED lamp pointed to by the target direction as the target LED lamp, lights the target LED lamp, and uses the lighted target LED lamp as the guide information; wherein the target direction is the direction in which the wearable device is guided to move.
Therefore, the optional embodiment can be implemented to indicate the moving direction by turning on the LED lamp, so that the usability of the wearable device is improved for the child user, the child user can more easily master how to use the wearable device for video call, and the user experience is improved.
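The LED selection in the steps above can be sketched as follows, assuming eight LEDs evenly spaced around the screen and a pixel coordinate system with the origin at the top-left; both the lamp count and the layout are assumptions, since the patent does not fix them:

```python
import math

# Assumed layout: eight LEDs around the screen, LED i sitting at angle
# 45*i degrees (0 = right, 90 = down in image coordinates).
LED_ANGLES = [i * 45 for i in range(8)]

def target_led(first_center, second_center):
    # Target direction: from the center of the person face image (first
    # center coordinate position) toward the center of the local image
    # (second center coordinate position).
    dx = second_center[0] - first_center[0]
    dy = second_center[1] - first_center[1]
    angle = math.degrees(math.atan2(dy, dx)) % 360

    # Light the LED whose angle is closest to the target direction,
    # handling wrap-around at 360 degrees.
    def diff(a):
        return min(abs(a - angle), 360 - abs(a - angle))

    return min(range(len(LED_ANGLES)), key=lambda i: diff(LED_ANGLES[i]))
```

The returned index would drive the corresponding GPIO line; with more or fewer lamps only `LED_ANGLES` changes.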
As another alternative embodiment, the wearable device outputs, according to the image of the face of the person, guide information for guiding the wearable device to move, including:
the wearable device determines the central coordinate position of the person face image as a first central coordinate position;
the wearable device determines a central coordinate position of the local image as a second central coordinate position;
the wearable device determines a direction in which the first center coordinate position points to the second center coordinate position as a target direction;
the wearable device determines the distance between the first center coordinate position and the second center coordinate position as a target distance;
the wearable device outputs guidance information for guiding the wearable device to move a target distance in a target direction.
Therefore, by implementing this alternative embodiment, the audience of the wearable device can be wider because a specific moving direction and moving distance are output: if a parent user needs to use the wearable device, the complete person face image can be moved into the local image more quickly through the accurate moving direction and moving distance output by the wearable device, improving both the effect and the efficiency of the video call.
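The direction-plus-distance variant above reduces to a unit vector and a Euclidean distance between the two center coordinate positions; representing the guide information as a pixel-space vector is an assumption:

```python
import math

def move_guidance(first_center, second_center):
    # Target direction: unit vector from the face-image center (first
    # center coordinate position) to the local-image center (second
    # center coordinate position); target distance: Euclidean distance
    # between the two centers. Pixel coordinates are assumed.
    dx = second_center[0] - first_center[0]
    dy = second_center[1] - first_center[1]
    distance = math.hypot(dx, dy)
    if distance == 0:
        return (0.0, 0.0), 0.0  # face already centered, no move needed
    return (dx / distance, dy / distance), distance
```

The vector would then be rendered as voice, text, or an LED cue; the distance lets the prompt say how far to move rather than only which way.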
As an optional implementation manner, when the wearable device detects a riding device, the wearable device assumes by default that the wearable device wearer is in a riding state and establishes a connection with the riding device. If the wearable device receives a video call request sent by the third-party device, the wearable device may control the smart handle of the riding device to vibrate so as to remind the wearable device wearer to answer the video call. The wearable device wearer can answer the video call by gripping the smart handle tightly: when the wearable device detects that the piezoelectric sensor data in the smart handle is greater than a preset value, the wearable device responds to the video call request and starts the video call function. If the riding device includes a camera and a screen, the wearable device can transfer the video call to the riding device through the established connection, so that the wearable device wearer can carry out the video call in the riding state. In addition, when the wearable device starts the video call function and detects that the local image shot by the front camera contains a person face image, the wearable device may correct the person face image according to the acquired facial feature points. Therefore, implementing this optional embodiment can simplify the operation of answering a video call when the wearable device wearer is in a riding state, improving the user experience.
As can be seen, by implementing the method described in fig. 3, the wearable device can display the third-party image as fully as possible by outputting the third-party image acquired by the third-party device, so that the effect of the video call can be improved; the wearable device can also transmit the local image shot by its front camera to the third-party device, thereby realizing interaction between the wearable device and the third-party device; the wearable device can also intelligently monitor, through person face image detection on the local image, whether the local image shot by the front camera during the video call contains a person face image; the wearable device can also intelligently monitor the face of the wearable device wearer during the video call through the integrity detection of the person face image; the wearable device can also output, without outputting the local image on its screen, guide information for guiding the wearable device to move according to the person face image, so that the effect of the video call is improved; the wearable device can also improve the user experience by outputting prompt information for prompting the user not to block the front camera; the wearable device can also protect the personal safety of the child user and strengthen the parents' control over the child by sending preset information indicating that the wearable device is not worn to the third-party device; the wearable device can also indicate the moving direction by lighting an LED lamp, which improves the usability of the wearable device for the child user, makes it easier for the child user to master how to use the wearable device for a video call, and improves the user experience; the wearable device can also output a specific moving direction and moving distance so that its audience is wider, and if a parent user needs to use the wearable device, the complete person face image can be moved into the local image more quickly through the accurate moving direction and moving distance output by the wearable device, improving both the effect and the efficiency of the video call; and the wearable device can also simplify the operation of answering a video call when the wearable device wearer is in a riding state, improving the user experience. Therefore, implementing the method described in fig. 3 can further improve the effect of the video call.
Example four
Referring to fig. 4, fig. 4 is a schematic structural diagram of a wearable device according to an embodiment of the present invention. Wherein the wearable device wearer may be a child user and the third party may be a parent user. As shown in fig. 4, the wearable device may include:
the first output unit 401 is configured to output a third-party image acquired by a third-party device when the wearable device and the third-party device perform a video call.
In this embodiment of the present invention, after the first output unit 401 outputs the third-party image acquired by the third-party device, the transmission unit 402 is triggered to start.
In the embodiment of the present invention, when the video call function is in the on state, the first output unit 401 may acquire in real time the third-party image output by the third-party device and turn on the front camera; the first output unit 401 may also output the third-party image in real time while the local image captured by the front camera of the wearable device is transmitted to the third-party device in real time. The first output unit 401 may output the third-party image acquired in real time on the screen of the wearable device; it may also detect in real time whether the Bluetooth of the wearable device is in a connected state, and if so, the first output unit 401 may identify the electronic device connected to the wearable device via Bluetooth and determine whether that electronic device supports screen projection; if it does, the first output unit 401 may, after the user performs the screen-projection operation, project the third-party image onto the electronic device for output, which is not limited in the embodiment of the present invention. As another optional implementation manner, after the start operation for starting the video call function is detected, whether a trusted device (e.g., a personal terminal such as a home personal computer or a tablet computer) exists within a preset range may be detected; if a trusted device exists, the first output unit 401 may transfer the video call to the trusted device, so that the effect of the video call is better and the user may carry out the video call through the front camera or another camera of the trusted device. Therefore, the first output unit 401 can display the third-party image as fully as possible by outputting the third-party image acquired by the third-party device, and the effect of the video call can be improved.
A transmitting unit 402, configured to transmit the local image captured by the front camera of the wearable device to a third-party device.
In the embodiment of the present invention, after the transmission unit 402 transmits the local image captured by the front camera to the third-party device, the first detection unit 403 is triggered to start. The transmission unit 402 can transmit the local image shot by the front camera of the wearable device to the third-party device, thereby realizing interaction between the wearable device and the third-party device.
In the embodiment of the present invention, when a video call is started, the transmission unit 402 may perform H.264 hard coding on the YUV420 frames of the local video (where the local video includes at least one local image) shot by the front camera by using MediaRecorder and then transmit the coded data; it may also call a local H.264 coding library through JNI to code a frame of YUV420 data before transmitting it; it may also compress a frame of data with a GZIP library before transmitting it; and it may also compress a frame of data in the JPEG format and transmit it.
A first detection unit 403, configured to detect whether the local image includes a person face image.
In the embodiment of the present invention, the transmission unit 402 not only transmits the local image captured by the front camera to the third-party device, but the first detection unit 403 may also perform person face image detection on the local image. The first detection unit 403 may detect a person face image in the local image through a Haar classifier; if it detects that a person face image exists in the local image, it may continue to detect in real time whether the person face image is complete; if it detects that no person face image exists in the local image (for example, there is no facial feature in the local image, in which case the first detection unit 403 by default treats the local image as containing no person face image), the first detection unit 403 may continue to detect in real time the continuously acquired latest local images and output prompt information indicating that the wearable device wearer is not in the shooting range. The first detection unit 403 may identify whether the local image includes the facial features of a person by detecting the positions of facial feature points (e.g., eyes, mouth, nose); when the feature points (of which there may be several) of any part of the face (e.g., the eyes) are detected, the first detection unit 403 by default treats the local image as containing a person face image. Therefore, the first detection unit 403 can intelligently monitor, through person face image detection on the local image, whether the local image captured by the front camera during the video call contains a person face image.
In the embodiment of the present invention, the Haar classifier combines Haar features, the Integral Image method, AdaBoost, and cascading. Haar-like features are a kind of digital image feature used for object recognition and were the basis of the first real-time face detection operator. The most important advantage of Haar features is fast computation.
A judgment unit 404, configured to judge whether the person face image is complete after the first detection unit 403 detects that the person face image is included in the local image.
In the embodiment of the present invention, after the first detection unit 403 detects that the local image includes a person face image, the judgment unit 404 may judge whether the person face image is complete. For example, according to the facial feature points (e.g., of the eyes) detected in the local image, the judgment unit 404 may compare the feature points of the eyes with the complete facial feature points preset in the wearable device; when the matching degree between them is smaller than a preset matching degree (e.g., 98%), the judgment unit 404 by default treats the person face image as incomplete; if the facial feature points currently detected in the local image match the complete facial feature points, that is, the matching degree between them is greater than the preset matching degree (e.g., 98%), the judgment unit 404 by default treats the person face image as complete. Therefore, the judgment unit 404 can intelligently monitor the face of the wearable device wearer during the video call by detecting the integrity of the person face image.
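The 98% matching-degree test can be sketched as below. Representing the preset complete facial template and the detected feature points as coordinate sets, and matching by set membership, are simplifying assumptions; a real matcher would tolerate small positional deviations:

```python
def face_image_complete(detected_points, template_points, preset_match=0.98):
    # Matching degree: fraction of the preset complete facial feature
    # points that are found among the detected feature points. The
    # person face image is judged complete only when this degree
    # reaches the preset matching degree (98% in the example above).
    template = set(template_points)
    matched = len(template & set(detected_points))
    return matched / len(template) >= preset_match
```

With this scheme a face cut off at the image edge loses a block of template points at once, so the matching degree drops well below 98% and the guide information is triggered.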
A second output unit 405, configured to output, according to the person face image, guide information for guiding the wearable device to move after the judgment unit 404 judges that the person face image is incomplete, so that the front camera captures a local image including the complete person face image.
In the embodiment of the present invention, after the judgment unit 404 judges that the person face image is incomplete, the second output unit 405 may acquire the coordinates of the eye center point in the local image according to the detected facial feature points (e.g., of the eyes), and then use the direction from the coordinates of the eye center point to the coordinates of the center point of the local image as the direction in which the wearable device needs to move. The second output unit 405 may light the LED lamp corresponding to the direction in which the wearable device needs to move (a plurality of LED lamps may be disposed around the screen of the wearable device) as the guide information, or it may output the direction in which the wearable device needs to move as the guide information in the form of voice or text. The second output unit 405 can thus improve the effect of the video call by outputting, without outputting the local image on the screen of the wearable device, guide information for guiding the wearable device to move according to the person face image.
As can be seen, in the wearable device described in fig. 4, the first output unit 401 can display the third-party image as fully as possible by outputting the third-party image acquired by the third-party device, so as to improve the effect of the video call; the transmission unit 402 can transmit the local image shot by the front camera of the wearable device to the third-party device, thereby realizing interaction between the wearable device and the third-party device; the first detection unit 403 can intelligently monitor, through person face image detection on the local image, whether the local image shot by the front camera during the video call contains a person face image; the judgment unit 404 can intelligently monitor the face of the wearable device wearer during the video call by detecting the integrity of the person face image; and the second output unit 405 can improve the effect of the video call by outputting, without outputting the local image on the screen of the wearable device, guide information for guiding the wearable device to move according to the person face image. Therefore, the wearable device described in fig. 4 enables the video image of the child user to be completely presented on the device of the parent user while the video image of the parent user is displayed full-screen on the wearable device when the wearable device wearer (e.g., a child user) makes a video call with a third party (e.g., a parent user), thereby improving the effect of the video call.
EXAMPLE five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another wearable device disclosed in the embodiment of the present invention. Wherein, the wearable device shown in fig. 5 is optimized by the wearable device shown in fig. 4. Compared to the wearable device shown in fig. 4, the wearable device shown in fig. 5 may further include:
the second detecting unit 406 is configured to detect whether an input start instruction for performing a video call with a third party device is received before the wearable device performs the video call with the third party device.
In the embodiment of the present invention, when the video call is not started, if the second detection unit 406 detects the start instruction for starting the video call function, the video call function is started. The start instruction may be a preset legal voice command or a press of a shortcut key, which is not limited in the embodiment of the present invention.
The first control unit 407 is configured to control an infrared sensor in the front camera to detect whether the front camera is blocked after the second detection unit 406 detects that an input start instruction for performing a video call with a third-party device is received.
In the embodiment of the present invention, after the second detection unit 406 detects that an input start instruction for carrying out a video call with the third-party device is received, the first control unit 407 may control the infrared sensor in the front camera to detect whether the front camera is blocked. The infrared sensor works on the infrared reflection principle: when a part of the human body is in the infrared region, the infrared rays emitted by the infrared emitting tube are blocked by that part of the body and reflected back to the infrared receiving tube, so the first control unit 407 can judge, by monitoring the infrared sensor, whether the front camera is blocked.
And a third output unit 408, configured to output a prompt message for prompting not to block the front camera after the first control unit 407 controls the infrared sensor in the front camera to detect that the front camera is blocked.
In the embodiment of the present invention, after the first control unit 407 controls the infrared sensor in the front camera and detects that the front camera is blocked, the third output unit 408 may output prompt information to inform the user not to block the front camera of the wearable device, so that the front camera can normally shoot local images during the video call and the call effect of the video call is ensured. Therefore, the third output unit 408 can improve the user experience by outputting the prompt information for prompting the user not to block the front camera.
The response unit 409 is configured to respond to the start instruction and start a video call function in the wearable device after the first control unit 407 controls the infrared sensor in the front camera to detect that the front camera is not shielded, so that the wearable device and the third-party device perform a video call.
In the embodiment of the present invention, after the first control unit 407 controls the infrared sensor in the front camera and detects that the front camera is not blocked, the response unit 409 may respond to the start instruction and start the video call function in the wearable device to trigger the first output unit 401 to start.
The second control unit 410 is configured to control the infrared sensor in the wearable device to detect whether the wearable device is in a worn state after the second detection unit 406 detects that the input start instruction for performing the video call with the third-party device is received.
In the embodiment of the present invention, after the second detection unit 406 detects that an input start instruction for carrying out a video call with the third-party device is received, the second control unit 410 may control an infrared sensor in the wearable device (the infrared sensor being disposed on the side of the wearable device close to the skin of the child user) to detect whether the child user is wearing the wearable device. The infrared sensor may perform measurement by using the physical properties of infrared rays, and may use a thermosensitive detection element or a photoelectric detection element, which is not limited in the embodiment of the present invention.
A sending unit 411, configured to send preset information indicating that the wearable device is not worn to the third-party device after the second control unit 410 controls the infrared sensor in the wearable device to detect that the wearable device is not in a worn state.
In the embodiment of the present invention, when the wearable device is not worn, the sending unit 411 sends preset information indicating that the wearable device is not worn to the third-party device of the parent user. Therefore, the sending unit 411 can protect the personal safety of the child user and strengthen the parents' control over the child by sending the preset information indicating that the wearable device is not worn to the third-party device.
The first control unit 407 is specifically configured to control, after the second control unit 410 controls the infrared sensor in the wearable device to detect that the wearable device is in a worn state, the infrared sensor in the front camera to detect whether the front camera is blocked.
As an optional implementation manner, when the wearable device detects a riding device, the second detection unit 406 assumes by default that the wearable device wearer is in a riding state and establishes a connection with the riding device. If the second detection unit 406 receives a video call request sent by the third-party device, the second detection unit 406 may control the smart handle of the riding device to vibrate to remind the wearable device wearer to answer the video call. The wearable device wearer can answer the video call by gripping the smart handle: if the second detection unit 406 detects that the piezoelectric sensor data in the smart handle is greater than a preset value, the response unit 409 responds to the video call request and starts the video call function. If the riding device includes a camera and a screen, the first output unit 401 can transfer the video call to the riding device through the established connection, so that the wearable device wearer can carry out the video call in the riding state. In addition, when the video call function is turned on and it is detected that the local image captured by the front camera includes a person face image, the second output unit 405 may correct the person face image according to the acquired facial feature points. Therefore, implementing this optional embodiment can simplify the operation of answering a video call when the wearable device wearer is in a riding state, improving the user experience.
As can be seen, in the wearable device described in fig. 5, the first output unit 401 can display the third-party image as large as possible by outputting the third-party image acquired by the third-party device, thereby improving the effect of the video call; the transmission unit 402 can transmit the local image shot by the front camera of the wearable device to the third-party device, thereby realizing interaction between the wearable device and the third-party device; the first detection unit 403 can intelligently monitor whether the local image shot by the front camera during the video call contains a human face image; the judgment unit 404 can realize intelligent face monitoring of the wearable device wearer during the video call by detecting the integrity of the human face image; the second output unit 405 can improve the effect of the video call by outputting, according to the human face image, guidance information for guiding the wearable device to move, without outputting the local image on the screen of the wearable device; the third output unit 408 can improve the user experience by outputting prompt information prompting the user not to block the front camera; the sending unit 411 can protect the personal safety of the child user and strengthen the parents' supervision of the child by sending preset information indicating that the wearable device is not worn to the third-party device; and the optional implementation can simplify answering a video call while the wearer is riding, improving the user experience. Therefore, implementing the wearable device described in fig. 5 enables the video image of the child user to be completely presented on the device of the parent user when the wearable device wearer (e.g., a child user) makes a video call with a third party (e.g., a parent user) while the video image of the parent user is displayed full screen on the wearable device, so that the effect of the video call is further improved.
EXAMPLE six
Referring to fig. 6, fig. 6 is a schematic structural diagram of another wearable device disclosed in the embodiment of the present invention. The wearable device shown in fig. 6 is optimized by the wearable device shown in fig. 5. In comparison with the wearable device shown in fig. 5, in the wearable device shown in fig. 6, the second output unit 405 includes:
a first determining subunit 4051, configured to determine the center coordinate position of the human face image as a first center coordinate position after the judgment unit 404 determines that the human face image is incomplete.
The first determining subunit 4051 is further configured to determine the center coordinate position of the local image as a second center coordinate position.
The first determining subunit 4051 is further configured to determine, as a target direction, the direction in which the first center coordinate position points to the second center coordinate position.
A lighting subunit 4052, configured to determine, from a plurality of LED lamps preset around the screen of the wearable device, the LED lamp pointed to by the target direction as a target LED lamp, and to light the target LED lamp, with the lighted target LED lamp serving as the guidance information; wherein the target direction is the direction in which the wearable device is guided to move.
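The LED selection can be sketched as picking, among the LEDs around the screen, the one whose bearing from the screen center lies closest to the target direction. The LED layout, count, and function name below are illustrative assumptions, not details of the disclosed embodiment.

```python
import math


def pick_target_led(target_dx, target_dy, led_positions):
    """Return the index of the LED best aligned with the target direction.

    (target_dx, target_dy): vector from the face-image center toward the
    local-image center.  led_positions: (x, y) offsets of the preset LEDs
    relative to the screen center.
    """
    target_angle = math.atan2(target_dy, target_dx)

    def angular_gap(pos):
        gap = abs(math.atan2(pos[1], pos[0]) - target_angle)
        return min(gap, 2 * math.pi - gap)   # wrap around +/- pi

    return min(range(len(led_positions)),
               key=lambda i: angular_gap(led_positions[i]))
```

For example, with four LEDs at the right, top, left, and bottom edges of the screen, a target direction pointing upward selects the top LED, so the lit lamp shows the wearer which way to move the device.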
As can be seen, in the wearable device described in fig. 6, the first output unit 401 can display the third-party image as large as possible by outputting the third-party image acquired by the third-party device, thereby improving the effect of the video call; the transmission unit 402 can transmit the local image shot by the front camera of the wearable device to the third-party device, thereby realizing interaction between the wearable device and the third-party device; the first detection unit 403 can intelligently monitor whether the local image shot by the front camera during the video call contains a human face image; the judgment unit 404 can realize intelligent face monitoring of the wearable device wearer during the video call by detecting the integrity of the human face image; the second output unit 405 can improve the effect of the video call by outputting, according to the human face image, guidance information for guiding the wearable device to move, without outputting the local image on the screen of the wearable device; the third output unit 408 can improve the user experience by outputting prompt information prompting the user not to block the front camera; the sending unit 411 can protect the personal safety of the child user and strengthen the parents' supervision of the child by sending preset information indicating that the wearable device is not worn to the third-party device; and the lighting subunit 4052 can indicate the moving direction by turning on an LED lamp, which improves the usability of the wearable device for a child user, who can thereby more easily grasp how to use the wearable device for a video call, improving the user experience. Therefore, implementing the wearable device described in fig. 6 can further improve the effect of the video call.
EXAMPLE seven
Referring to fig. 7, fig. 7 is a schematic structural diagram of another wearable device disclosed in the embodiment of the present invention. The wearable device shown in fig. 7 is optimized by the wearable device shown in fig. 5. In comparison with the wearable device shown in fig. 5, in the wearable device shown in fig. 7, the second output unit 405 includes:
a second determining subunit 4053, configured to determine the center coordinate position of the human face image as a first center coordinate position after the judgment unit 404 determines that the human face image is incomplete.
The second determining subunit 4053 is further configured to determine the center coordinate position of the local image as a second center coordinate position.
The second determining subunit 4053 is further configured to determine, as a target direction, the direction in which the first center coordinate position points to the second center coordinate position.
The second determining subunit 4053 is further configured to determine, as a target distance, the distance between the first center coordinate position and the second center coordinate position.
The output subunit 4054 is configured to output guidance information for guiding the wearable device to move the target distance in the target direction.
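The direction and distance determined by the second determining subunit 4053 reduce to a vector from the face-image center to the local-image center. The sketch below assumes pixel (x, y) coordinates and an angle-in-radians representation of the direction; these conventions and the function name are illustrative, not taken from the embodiment.

```python
import math


def guidance_vector(face_center, image_center):
    """Compute (target_direction, target_distance).

    face_center: (x, y) center of the detected human face image
    image_center: (x, y) center of the local image
    The direction is the angle, in radians, of the vector by which the
    first center coordinate position points to the second.
    """
    dx = image_center[0] - face_center[0]
    dy = image_center[1] - face_center[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)
```

For example, a face centered at (100, 100) in a local image centered at (160, 180) yields a target distance of 100 pixels along the direction of the vector (60, 80).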
As can be seen, in the wearable device described in fig. 7, the first output unit 401 can display the third-party image as large as possible by outputting the third-party image acquired by the third-party device, thereby improving the effect of the video call; the transmission unit 402 can transmit the local image shot by the front camera of the wearable device to the third-party device, thereby realizing interaction between the wearable device and the third-party device; the first detection unit 403 can intelligently monitor whether the local image shot by the front camera during the video call contains a human face image; the judgment unit 404 can realize intelligent face monitoring of the wearable device wearer during the video call by detecting the integrity of the human face image; the second output unit 405 can improve the effect of the video call by outputting, according to the human face image, guidance information for guiding the wearable device to move, without outputting the local image on the screen of the wearable device; the third output unit 408 can improve the user experience by outputting prompt information prompting the user not to block the front camera; the sending unit 411 can protect the personal safety of the child user and strengthen the parents' supervision of the child by sending preset information indicating that the wearable device is not worn to the third-party device; and the output subunit 4054 can broaden the range of users the wearable device suits by outputting a specific moving direction and moving distance: for example, if a parent user needs to use the wearable device, the precise moving direction and moving distance output by the wearable device allow the complete human face image to be brought into the local image more quickly, improving both the effect and the efficiency of the video call. Therefore, implementing the wearable device described in fig. 7 can further improve the effect of the video call.
EXAMPLE eight
Referring to fig. 8, fig. 8 is a schematic structural diagram of another wearable device disclosed in the embodiment of the present invention. As shown in fig. 8, the wearable device may include:
a memory 801 in which executable program code is stored;
a processor 802 coupled with the memory 801;
the processor 802 calls the executable program code stored in the memory 801 to execute the intelligent adjustment method for video call in any one of fig. 1 to 3.
The embodiment of the invention discloses a computer-readable storage medium which stores a computer program, wherein the computer program enables a computer to execute any one of the intelligent adjustment methods of video calls in figures 1-3.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disk memory, tape memory, or any other computer-readable medium that can be used to carry or store data.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced, in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., an optical disc), a semiconductor medium (e.g., a solid-state disk), or the like. In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only one kind of logical function division, and other divisions may be adopted in practice. For example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present application, or the part of it that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and specifically may be a processor in the computer device) to execute all or part of the steps of the methods of the embodiments of the present application.
The above embodiments are only used to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present application.

Claims (6)

1. An intelligent adjustment method for video calls, the method comprising:
when the wearable device and a third-party device carry out video call, the wearable device outputs a third-party image acquired by the third-party device and transmits a local image shot by a front camera of the wearable device to the third-party device;
the wearable device detects whether the local image contains a human face image;
if yes, the wearable device judges whether the human face image is complete;
if not, the wearable device determines the center coordinate position of the human face image as a first center coordinate position;
the wearable device determines a center coordinate position of the local image as a second center coordinate position;
the wearable device determines a direction in which the first center coordinate position points to the second center coordinate position as a target direction;
the wearable device determines a distance between the first center coordinate position and the second center coordinate position as a target distance;
the wearable device outputs guidance information for guiding the wearable device to move the target distance in the target direction, so that the front camera shoots a local image containing a complete human face image;
wherein the wearable device outputting the guidance information for guiding the wearable device to move the target distance in the target direction comprises:
the wearable device determining, from a plurality of LED lamps preset around the screen of the wearable device, any LED lamp pointed to by the target direction as a target LED lamp, lighting the target LED lamp, and using the lighted target LED lamp as the guidance information; wherein the target direction is the direction in which the wearable device is guided to move.
2. The method of claim 1, further comprising:
before the wearable device and the third-party device carry out video call, the wearable device detects whether an input opening instruction for carrying out video call with the third-party device is received;
if yes, the wearable device controls an infrared sensor in the front camera to detect whether the front camera is shielded;
if the front camera is shielded, the wearable device outputs prompt information for prompting not to shield the front camera;
and if the front camera is not shielded, the wearable device responds to the opening instruction and opens the video call function in the wearable device so as to enable the wearable device to carry out video call with the third-party device.
3. The method of claim 2, further comprising:
after the wearable device detects that an input opening instruction for video call with the third-party device is received, the wearable device controls an infrared sensor in the wearable device to detect whether the wearable device is in a worn state;
if not, the wearable device sends preset information used for indicating that the wearable device is not worn to the third-party device, and if so, the infrared sensor in the front camera is controlled to detect whether the front camera is shielded or not.
4. A wearable device, characterized in that the wearable device comprises:
the first output unit is used for outputting a third-party image acquired by a third-party device when the wearable device and the third-party device carry out video call;
the transmission unit is used for transmitting the local image shot by the front camera of the wearable device to the third-party device;
a first detection unit, configured to detect whether the local image contains a human face image;
a judgment unit, configured to judge whether the human face image is complete after the first detection unit detects that the local image contains the human face image;
a second output unit, configured to output, according to the human face image, guidance information for guiding the wearable device to move after the judgment unit determines that the human face image is incomplete, so that the front camera shoots a local image containing a complete human face image;
the second output unit includes:
a second determining subunit, configured to determine the center coordinate position of the human face image as a first center coordinate position after the judgment unit determines that the human face image is incomplete;
the second determining subunit is further configured to determine a center coordinate position of the local image as a second center coordinate position;
the second determining subunit is further configured to determine, as a target direction, a direction in which the first center coordinate position points to the second center coordinate position;
the second determining subunit is further configured to determine a distance between the first center coordinate position and the second center coordinate position as a target distance;
an output subunit, configured to output guidance information for guiding the wearable device to move the target distance in the target direction;
a lighting subunit, configured to determine, from a plurality of LED lamps preset around the screen of the wearable device, any LED lamp pointed to by the target direction as a target LED lamp, light the target LED lamp, and use the lighted target LED lamp as the guidance information; wherein the target direction is the direction in which the wearable device is guided to move.
5. The wearable device of claim 4, further comprising:
the second detection unit is used for detecting whether an input opening instruction for carrying out video call with the third-party equipment is received or not before the wearable equipment carries out video call with the third-party equipment;
the first control unit is used for controlling an infrared sensor in the front camera to detect whether the front camera is shielded or not after the second detection unit detects that an input starting instruction for carrying out video call with the third-party equipment is received;
the third output unit is used for outputting prompt information for prompting not to shield the front camera after the infrared sensor in the front camera is controlled by the first control unit to detect that the front camera is shielded;
and the response unit is used for responding to the starting instruction and starting a video call function in the wearable device after the first control unit controls the infrared sensor in the front camera to detect that the front camera is not shielded, so that the wearable device and the third-party device carry out video call.
6. The wearable device of claim 5, further comprising:
the second control unit is used for controlling the infrared sensor in the wearable device to detect whether the wearable device is in a worn state or not after the second detection unit detects that an input opening instruction for carrying out video call with the third-party device is received;
a transmitting unit, configured to transmit preset information indicating that the wearable device is not worn to the third-party device after the second control unit controls the infrared sensor in the wearable device to detect that the wearable device is not worn;
the first control unit is specifically configured to control an infrared sensor in the front camera to detect whether the front camera is shielded or not after the second control unit controls the infrared sensor in the wearable device to detect that the wearable device is in a worn state.
CN201711396679.0A 2017-12-21 2017-12-21 Intelligent adjustment method for video call and wearable device Active CN108134913B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711396679.0A CN108134913B (en) 2017-12-21 2017-12-21 Intelligent adjustment method for video call and wearable device

Publications (2)

Publication Number Publication Date
CN108134913A CN108134913A (en) 2018-06-08
CN108134913B true CN108134913B (en) 2021-06-01

Family

ID=62392163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711396679.0A Active CN108134913B (en) 2017-12-21 2017-12-21 Intelligent adjustment method for video call and wearable device

Country Status (1)

Country Link
CN (1) CN108134913B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108881778B (en) * 2018-07-09 2021-11-05 广东小天才科技有限公司 Video output method based on wearable device and wearable device
CN108900770B (en) * 2018-07-17 2021-01-22 广东小天才科技有限公司 Method and device for controlling rotation of camera, smart watch and mobile terminal
CN108921125A (en) * 2018-07-18 2018-11-30 广东小天才科技有限公司 A kind of sitting posture detecting method and wearable device
CN110175254B (en) * 2018-09-30 2022-11-25 广东小天才科技有限公司 Photo classified storage method and wearable device
CN109040652A (en) * 2018-10-08 2018-12-18 北京小鱼在家科技有限公司 Method of adjustment, device, video call device and the storage medium of camera
CN110177235B (en) * 2018-11-06 2020-12-25 广东小天才科技有限公司 Video synchronization method based on wearable device and wearable device
CN111757039B (en) * 2019-05-09 2022-03-25 广东小天才科技有限公司 Video call method of wearable device and wearable device
CN111064483A (en) * 2019-12-03 2020-04-24 广东***职业学院 Wearable monitoring device and monitoring method
CN112311942A (en) * 2020-11-27 2021-02-02 珠海读书郎网络教育有限公司 Incoming call answering method of telephone watch

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101296356A (en) * 2007-04-24 2008-10-29 Lg电子株式会社 Video communication terminal and method of displaying images
CN105652560A (en) * 2016-01-25 2016-06-08 广东小天才科技有限公司 Photographing method and system capable of automatically adjusting focal length
CN105791675A (en) * 2016-02-26 2016-07-20 广东欧珀移动通信有限公司 Terminal, imaging and interaction control method and device, and terminal and system thereof
CN106303353A (en) * 2016-08-17 2017-01-04 深圳市金立通信设备有限公司 A kind of video session control method and terminal
CN106845454A (en) * 2017-02-24 2017-06-13 张家口浩扬科技有限公司 The method and its device of a kind of image output feedback
US9729820B1 (en) * 2016-09-02 2017-08-08 Russell Holmes Systems and methods for providing real-time composite video from multiple source devices


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant