CN112911154B - Snapshot method, server and computer storage medium - Google Patents

Snapshot method, server and computer storage medium

Info

Publication number: CN112911154B
Application number: CN202110146060.4A
Authority: CN (China)
Prior art keywords: audio, determining, video data, snapshot, family
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Other versions: CN112911154A
Other languages: Chinese (zh)
Inventor: 曹冰
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd (the listed assignees may be inaccurate)
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority claimed from CN202110146060.4A

Classifications

    • H04N23/64 — Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image (under H04N23/00, Cameras or camera modules comprising electronic image sensors, and H04N23/60, Control of cameras or camera modules)
    • H04N7/18 — Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast (under H04N7/00, Television systems)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Alarm Systems (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application discloses a snapshot method applied to a server, where the server is in communication connection with at least two electronic devices arranged in a home. The snapshot method includes the following steps: when it is determined that the snapshot object is in the monitoring area of a camera in the electronic devices, the current scene where the snapshot object is located is determined according to the operation information of the electronic devices; when the current scene is determined to be a preset scene, the camera is controlled to capture the snapshot object, so that audio and video data of the snapshot object are obtained and stored. The embodiment of the application also provides a server and a computer storage medium.

Description

Snapshot method, server and computer storage medium
Technical Field
The application relates to snapshot technology for audio and video in a smart home platform, and in particular to a snapshot method, a server and a computer storage medium.
Background
Generally, in existing camera capture technology, audio and video of a target object are captured through monitoring, target detection, tracking and capturing of the surrounding environment by a camera, or the camera is triggered by certain sensors to start capturing audio and video. However, the audio and video obtained by these capture methods are often not the audio and video the user needs to capture, or the audio and video the user does need are not captured, resulting in redundancy or loss in the obtained audio and video. Therefore, the existing camera snapshot methods suffer from the technical problem of low snapshot efficiency.
Disclosure of Invention
The embodiments of the application provide a snapshot method, a server and a computer storage medium, which can improve the snapshot efficiency of audio and video.
The technical scheme of the application is realized as follows:
The embodiment of the application provides a snapshot method applied to a server, where the server is in communication connection with at least two electronic devices arranged in a home, and the snapshot method includes the following steps:
when the snapshot object is determined to be in the monitoring area of the camera in the electronic equipment, determining the current scene of the snapshot object according to the operation information of the electronic equipment;
and when the current scene is determined to be a preset scene, controlling the camera to capture the snapshot object, so as to obtain and store the audio and video data of the snapshot object.
The embodiment of the application provides a server, where the server is in communication connection with at least two electronic devices arranged in a home, and includes:
the determining module is used for determining a current scene where the snapshot object is located according to the running information of the electronic equipment when the snapshot object is determined to be located in the monitoring area of the camera in the electronic equipment;
and the snapshot module is used for controlling the camera to snapshot the snapshot object when the current scene is determined to be a preset scene, so as to obtain and store the audio and video data of the snapshot object.
An embodiment of the present application further provides a server, where the server includes: a processor and a storage medium storing instructions executable by the processor, where the storage medium performs operations through a communication bus in dependence on the processor, and when the instructions are executed by the processor, the snapshot method of one or more of the above embodiments is performed.
The embodiment of the application provides a computer storage medium, which stores executable instructions, and when the executable instructions are executed by one or more processors, the processors execute the snapshot method of one or more embodiments.
The embodiments of the application provide a snapshot method, a server and a computer storage medium. The method is applied to a server that is in communication connection with at least two electronic devices arranged in a home, and includes: when it is determined that the snapshot object is in the monitoring area of a camera in the electronic devices, determining the current scene where the snapshot object is located according to the operation information of the electronic devices, and when the current scene is determined to be a preset scene, controlling the camera to capture the snapshot object to obtain and store audio and video data of the snapshot object. That is to say, in the embodiments of the present application, when the server determines that a snapshot object is in the monitoring area of a camera in the electronic devices, it does not directly control the camera to start capturing; instead, it further determines the current scene where the snapshot object is located according to the operation information of the electronic devices, and only when the current scene is a preset scene does the server control the camera to capture the snapshot object and obtain its audio and video data.
Drawings
Fig. 1 is a schematic flowchart of an alternative snapshot method provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of an example of an alternative smart home system according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of an example of an alternative snapshot method provided in an embodiment of the application;
fig. 4 is a first schematic structural diagram of a server according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Example one
An embodiment of the present application provides a snapshot method, where the method is applied to a server, where the server is in communication connection with at least two electronic devices disposed in a home, and fig. 1 is a schematic flow diagram of an optional snapshot method provided in an embodiment of the present application, and with reference to fig. 1, the snapshot method may include:
S101: when it is determined that the snapshot object is in the monitoring area of the camera in the electronic devices, determining the current scene where the snapshot object is located according to the operation information of the electronic devices;
at present, the audio and video obtained by the existing snapshot method is not the audio and video which the user needs to capture, or the audio and video which the user needs to capture is not snapshot, so that the technical problems of redundancy or deficiency of the obtained audio and video and low snapshot efficiency are caused.
In order to improve the snapshot efficiency, the server determines that the snapshot object is located in a monitoring area of a camera in the electronic device, for example, the snapshot object is a child in a family, the camera is arranged in a living room, the monitoring area is the living room, and then when the server determines that the child is located in the living room area, the server acquires the operation information of the electronic device and determines the current scene where the snapshot object is located according to the operation information of the electronic device.
In order to determine whether the snapshot object is located in the monitoring area of the camera in the electronic device, various methods may be used for determining whether the snapshot object is located in the monitoring area of the camera in the electronic device, and in an alternative embodiment, determining that the snapshot object is located in the monitoring area of the camera in the electronic device may include:
determining the position of a wearable device of the electronic device according to the signal strength of the wearable device; wherein the snapshotted subject wears a wearable device;
when the position of the wearable device is within the range of the monitoring area, it is determined that the snapshot object is in the monitoring area of the camera in the electronic device.
Because the server is in communication connection with the electronic device, the server may acquire Signal Strength of the electronic device, for example, received Signal Strength Indication (RSSI), and the server may acquire distances from the wearable device to at least three network devices in the electronic device according to the RSSI and then determine the position of the wearable device according to a distance formula between the two points; since the wearable device is worn on the body of the snapshot object, the server can acquire the position of the snapshot object.
Then, the server can judge whether the position of the wearable device is within the range of the monitoring area, and after judgment, when the server determines that the position of the wearable device is within the range of the monitoring area, the server determines that the snapshot object is in the monitoring area of the camera in the electronic device.
For example, in order to realize the snapshot of the child, here, the wearable device is a bracelet watch, the bracelet watch is worn on the body of the child, when the server determines that the bracelet watch is in the monitoring area, here, the server may calculate the position of the bracelet watch through the signal strength of Wireless Fidelity (WIFI) or bluetooth, and only when the bracelet watch is in the monitoring area, it is determined that the snapshot object is in the monitoring area.
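The RSSI-based positioning described above can be sketched as follows. This is a minimal illustration rather than the embodiment's actual implementation: the path-loss exponent, the reference transmit power, and the exact solving step (here, trilateration linearized against the first of three fixed network devices) are all assumptions.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    # Log-distance path-loss model: rssi = tx_power - 10*n*log10(d).
    # tx_power_dbm and path_loss_exp are assumed calibration values.
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    # Solve the 2D position from three known anchor positions (x, y) and
    # the distances estimated from RSSI, by subtracting the first circle
    # equation |p - a_1|^2 = d_1^2 from the other two, which yields a
    # linear 2x2 system in the unknown coordinates.
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a = [[2 * (x2 - x1), 2 * (y2 - y1)],
         [2 * (x3 - x1), 2 * (y3 - y1)]]
    b = [d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
         d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    x = (b[0] * a[1][1] - b[1] * a[0][1]) / det
    y = (a[0][0] * b[1] - a[1][0] * b[0]) / det
    return x, y
```

With anchors at (0, 0), (10, 0) and (0, 10) and distances measured from the point (3, 4), the solver recovers that point exactly; in practice RSSI noise would make a least-squares fit over more than three anchors preferable.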
In addition, in order to determine whether the snapshot object is in the monitoring area of the camera in the electronic devices, in an alternative embodiment, determining that the snapshot object is in the monitoring area of the camera in the electronic devices may include:
when a sensing device among the electronic devices detects an object, starting the camera;
when the picture shot by the camera contains the snapshot object, determining that the snapshot object is in the monitoring area of the camera in the electronic devices.
A sensing device is arranged in the monitoring area and detects whether an object is present. When the sensing device detects an object, the camera is first started and takes a picture, and the server judges whether the snapshot object appears in the captured picture. If it does, the snapshot object is in the monitoring area of the camera in the electronic devices; if not, it is not.
For example, a human body sensor is arranged in the monitoring area. When the human body sensor detects that a target object is present, the camera is started to capture a picture, and when the target object in the picture includes the snapshot object, the snapshot object is determined to be in the monitoring area.
In order to determine the current scene of the snapshot object, in an alternative embodiment, determining the current scene where the snapshot object is located according to the operation information of the electronic devices may include:
obtaining home entry and exit information from an access control device among the electronic devices;
when the home entry and exit information indicates that no one is at home, determining that the current scene is the away mode;
when the home entry and exit information indicates that someone is at home, determining that the current scene is the at-home mode.
Specifically, the electronic devices include all electronic devices arranged in the home, for example smart household appliances, sensors, network devices and the like. In order to determine the current scene, the server first obtains home entry and exit information from the access control device among the electronic devices. In practical applications, the access control device may be a visual doorbell arranged on the entrance door; information on each person entering and exiting through the entrance door, that is, the home entry and exit information, may be obtained through the visual doorbell.
When every person in the home entry and exit information has both entry information and exit information, no one is at home, so the server determines that the current scene is the away mode; when the home entry and exit information contains a person with only entry information and no exit information, the server determines that the current scene is the at-home mode.
Since the at-home mode can be further divided into a plurality of modes, in an alternative embodiment, when the home entry and exit information indicates that someone is at home, determining that the current scene is the at-home mode may include:
when the home entry and exit information indicates that someone is at home and that there is entry information for a non-family member, acquiring the operation information of a security device from the security device among the electronic devices;
when the operation information of the security device contains no alarm information and the operation information of the access control device contains no alarm information, determining that the current scene is the guest-meeting mode of the at-home mode;
and when the home entry and exit information indicates that someone is at home and that all entry information belongs to family members, determining that the current scene is the non-guest-meeting mode of the at-home mode.
Specifically, when someone is at home and the home entry and exit information indicates that there is entry information for non-family persons, the server may obtain operation information from a security device in the home. For example, a door and window detection device is a security device; the server obtains its operation information from it in order to judge whether the entering non-family member is a guest, since for an illegally intruding person the security device or the access control device would send alarm information.
When the home entry and exit information indicates that people are at home and all of them are family members, the server determines that the current scene is the non-guest-meeting mode of the at-home mode.
That is to say, the server can distinguish the guest-meeting mode from the non-guest-meeting mode through the security system and the access control system.
Further, the non-guest-meeting mode may be classified into sub-modes. In an optional embodiment, when the home entry and exit information indicates that someone is at home, determining that the current scene is the at-home mode may include:
when the home entry and exit information indicates that someone is at home and the decibel value obtained from a sound detector among the electronic devices is smaller than or equal to a specified threshold, determining that the current scene is the sleep mode of the at-home mode;
when the home entry and exit information indicates that someone is at home and at least one cooking device among the electronic devices is running, determining that the current scene is the dining mode of the at-home mode;
when the home entry and exit information indicates that someone is at home and at least one entertainment device among the electronic devices is running, determining that the current scene is the entertainment mode of the at-home mode.
Specifically, when the home entry and exit information indicates that someone is at home and the decibel value detected by the sound detector among the electronic devices is smaller than or equal to a specified threshold, the sound level at home is low, and the server determines that the current scene is the sleep mode of the at-home mode; here, the specified threshold is a preset value.
When the home entry and exit information indicates that someone is at home and at least one cooking device among the electronic devices is running, for example an electric rice cooker, which indicates that a user is cooking at home, the server determines that the current scene is the dining mode of the at-home mode.
When the home entry and exit information indicates that someone is at home and at least one entertainment device among the electronic devices is running, for example a smart television or Virtual Reality (VR)/Augmented Reality (AR) glasses, the server determines that the current scene is the entertainment mode of the at-home mode.
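The scene decision tree described above (away / guest-meeting / sleep / dining / entertainment) can be sketched as a single classification function. This is one illustrative reading of the rules under assumed data formats; the record fields, the decibel threshold of 40, and the `at_home/alarm` label for a detected intrusion are hypothetical, since the embodiment does not prescribe them.

```python
def classify_scene(entry_exit, security_alarm, access_alarm,
                   decibel, cooking_running, entertainment_running,
                   db_threshold=40):
    # entry_exit: one record per person seen by the access control device,
    # e.g. {"is_family": True, "has_left": False} (hypothetical format).
    at_home = [p for p in entry_exit if not p["has_left"]]
    if not at_home:
        return "away"                      # everyone has exit information
    if any(not p["is_family"] for p in at_home):
        # A non-family member entered: treated as a guest only when neither
        # the security device nor the access control device raised an alarm.
        if not security_alarm and not access_alarm:
            return "at_home/guest_meeting"
        return "at_home/alarm"             # assumed label, not in the text
    # Only family members at home: sub-modes of the non-guest-meeting mode.
    if decibel <= db_threshold:
        return "at_home/sleep"
    if cooking_running:
        return "at_home/dining"
    if entertainment_running:
        return "at_home/entertainment"
    return "at_home/non_guest_meeting"
```

A real platform would also fold in the historical per-time-period modes mentioned below rather than relying on instantaneous readings alone.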
It should be noted here that, when determining the current scene, the server may further determine the scene mode of each time period of every day according to historical records, and may combine those records in the judgment, so as to accurately determine the current scene where the snapshot object is located, which is conducive to accurately controlling the camera to capture the snapshot object.
In practical applications, the smart home platform is connected with all electronic devices in the home and can acquire the current state and the latest notification information of each device. The platform first divides daily life into mutually independent modes according to living habits (such as working hours and activities), for example the away mode (no one at home; security protection can be started) and the at-home mode (someone at home; further subdivided into the sleep mode, the entertainment mode and the dining mode); whether anyone is at home can be identified through video monitoring at the door.
When the smart home platform identifies the at-home mode (that is, family members are at home), if a stranger is identified as being at home and there are no security alarms such as illegal intrusion (which can be detected through a door and window detection device) or door prying (monitored by the door camera), the stranger can be judged to be a guest, and the smart home is then in a guest-meeting scene.
S102: when the current scene is determined to be the preset scene, controlling the camera to capture the snapshot object, and obtaining the audio and video data of the snapshot object.
The current scene where the snapshot object is located can be determined through S101, and the server controls the camera to capture the snapshot object only when the current scene is judged to be the preset scene, so as to obtain the audio and video data of the snapshot object.
However, to avoid capturing in an inappropriate scene, in an alternative embodiment, S102 may include:
when the current scene is determined to be the non-guest-meeting mode of the at-home mode, controlling the camera to capture the snapshot object, and obtaining and storing the audio and video data of the snapshot object.
That is to say, if the current scene is determined to be the non-guest-meeting mode of the at-home mode, everyone at home is a family member at this time, and the server controls the camera to capture the snapshot object to obtain its audio and video data. In the guest-meeting mode of the at-home mode, or in the away mode, the server prohibits opening the camera for capture, thereby avoiding making guests uncomfortable in the guest-meeting mode and avoiding capturing redundant audio and video data in the away mode.
In order to prevent captured redundant or invalid audio and video data from occupying resources, in an optional embodiment, the method may further include:
determining invalid audio and video data from the audio and video data of the snapshot object;
and deleting the invalid audio and video data from the audio and video data of the snapshot object.
Specifically, the server determines invalid audio and video data from the audio and video data of the snapshot object, and then deletes them from the audio and video data of the snapshot object.
Further, in order to determine invalid audio and video data from the audio and video data of the snapshot object, in an optional embodiment, determining the invalid audio and video data may include:
determining the contrast of each frame of image in each piece of audio and video data of the snapshot object;
and determining the ratio of the number of image frames whose contrast is smaller than or equal to a second preset threshold to the total number of image frames of the corresponding audio and video data, and when this ratio is greater than a third preset threshold, determining the corresponding audio and video data to be invalid audio and video data.
Specifically, the validity of the audio and video is mainly determined through quality evaluation, which can be judged from two aspects: the audio and the video.
First, for the audio, the server may determine the signal-to-noise ratio of the audio and the speech content that can be recognized. It should be noted, however, that the audio has only a small influence on the overall quality of the recording; its weight in the overall audio and video quality may be set at about 10%, that is, even if the audio is poor or missing, the overall quality is not greatly affected.
For the video validity judgment, the blur of the video needs to be analyzed for each frame of image in the recording, and the analysis is mainly implemented using contrast. For example, a picture is divided into M × N blocks, the contrast is calculated for each block, and the average over the M × N blocks is taken as the contrast of the frame. A blur threshold (corresponding to the second preset threshold) is then set according to experience, for example 70, and the proportion of blurred frames in the whole recording is counted; if the proportion exceeds, for example, 50%, the audio and video are determined to be blurred, that is, invalid.
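The M × N block-contrast check can be sketched as below. The contrast measure itself (here, max minus min intensity per block) is an assumption, since the text does not define one, and frames are taken as plain 2D grayscale arrays; the thresholds 70 and 50% follow the examples above.

```python
def frame_contrast(frame, m, n):
    # Average contrast over an m x n grid of blocks; the contrast of a
    # block is taken here as max intensity minus min intensity
    # (an assumed definition, for illustration only).
    h, w = len(frame), len(frame[0])
    bh, bw = h // m, w // n
    total = 0.0
    for bi in range(m):
        for bj in range(n):
            block = [frame[r][c]
                     for r in range(bi * bh, (bi + 1) * bh)
                     for c in range(bj * bw, (bj + 1) * bw)]
            total += max(block) - min(block)
    return total / (m * n)

def is_blurred_video(frames, m=4, n=4, blur_threshold=70, max_ratio=0.5):
    # Invalid when the share of low-contrast (blurred) frames exceeds
    # the occupancy threshold (70 and 50% in the text's example).
    blurred = sum(1 for f in frames
                  if frame_contrast(f, m, n) <= blur_threshold)
    return blurred / len(frames) > max_ratio
```

A checkerboard frame scores the maximum contrast per block, while a flat frame scores zero, so a clip dominated by flat frames is flagged as invalid.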
In addition, in order to determine invalid audio and video data from the audio and video data of the snapshot object, in an optional embodiment, determining the invalid audio and video data includes:
performing face recognition on each frame of image in each piece of audio and video data of the snapshot object;
and determining the ratio of the number of image frames in which a face region is recognized to the total number of image frames of the corresponding audio and video data, and when this ratio is smaller than or equal to a fourth preset threshold, determining the corresponding audio and video data to be invalid audio and video data.
For example, for identifying the content of each frame in the audio and video data, face recognition is performed on each frame, and the proportion of frames in the whole recording in which a face can be recognized is counted; if it is, for example, less than 20%, the audio and video are determined to be invalid audio and video data.
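The face-ratio rule reduces to a single comparison. In the sketch below the face detector is a caller-supplied predicate, since the embodiment does not specify a recognition algorithm, and the default of 20% stands in for the fourth preset threshold.

```python
def is_invalid_by_face_ratio(frames, detect_face, max_face_ratio=0.2):
    # detect_face(frame) -> bool is whatever face detector the deployment
    # uses (hypothetical here); the recording is judged invalid when
    # frames containing a recognizable face are too scarce.
    with_face = sum(1 for frame in frames if detect_face(frame))
    return with_face / len(frames) <= max_face_ratio
```

In a real system both this check and the blur check would run after capture stops, and a recording failing either one would be deleted.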
The following describes, by way of example, the snapshot method of one or more of the above embodiments.
Fig. 2 is a schematic structural diagram of an example of an alternative smart home system provided in an embodiment of the present application. As shown in fig. 2, the smart home system may include a smart home platform 21, a pan-tilt camera 22, a mobile phone/tablet 23, a sensing device 24, and a sensing device 25. The smart home platform 21 can acquire the user's browsing or screening information on the mobile phone/tablet 23, and can also, through the sensing device 24 and/or the sensing device 25, perform monitoring, triggering and condition judgment on the target, together with intelligent screening, analysis and prompting, so as to start or stop the pan-tilt camera 22 and perform target tracking and feedback on the snapshot object.
In this example, as parents, we often want to capture a certain momentary behavior or remark of a child. The current approach is to purchase a monitoring camera and record a certain area for a long time, which causes two problems: if the camera is placed in the living room and records that area, guests feel uncomfortable when they see the camera; in addition, the amount of video recorded each day is very large, and manual screening takes a lot of time. To avoid these phenomena, fig. 3 is a schematic flowchart of an example of an alternative snapshot method provided in an embodiment of the present application. As shown in fig. 3, based on the smart home system, the snapshot method may include:
s301: the smart home platform 21 determines whether the trigger condition is satisfied? If not, executing S302, and if so, executing S304;
in addition, it should be noted that the default pan/tilt head camera 22 is in a standby state, and rotates to a position where the shooting angle is 0, and the user sets the smart home platform 21 to trigger an event for a child as required, where the triggering conditions are as follows: when bracelet wrist-watch got into certain region/human response characteristic shows when getting into for child/when infrared monitoring target gets into etc..
S302: is the smart home platform 21 determining whether the pan/tilt camera 22 is capturing? If the snapshot is being performed, S303 is performed, and if the snapshot is not being performed, the process is terminated.
S303: the smart home platform 21 controls the pan-tilt camera 22 to stop capturing.
S304: the smart home platform 21 determines whether the current scene is a preset scene; if so, S305 is executed, and if not, S302 is executed;
Specifically, the user selects scene requirements, for example: capture may be performed in scenes other than the guest-meeting mode, the night mode, the vacation mode, and the like.
S305: the smart home platform 21 acquires the operation information of the electronic devices connected with it and identifies the current scene; it reminds the user when the stored audio and video data exceed a certain amount, analyzes the audio and video content, and deletes invalid audio and video data.
S306: the smart home platform 21 controls the pan-tilt camera 22 to capture video of the snapshot object;
S307: the smart home platform 21 stores the video.
Specifically, the smart home platform 21 provides scene recognition capability and determines the mode of the current scene. Once the monitoring event is triggered, it judges again whether the current scene meets the condition; when it does, the pan-tilt camera 22 is turned on, the angle is adjusted, and capture and storage are started. The smart home platform 21 continuously monitors stop-trigger events and scene conditions; for example, when the scene switches to the guest-meeting mode or the target leaves, all capturing stops and the pan-tilt camera rotates back to the 0 position.
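One polling step of this start/stop logic might look like the sketch below. The camera interface (`start`, `stop`, the `capturing` and `angle` fields) and the set of preset scenes are illustrative assumptions, not the platform's actual API.

```python
class PanTiltCamera:
    # Minimal stand-in for the pan-tilt camera 22 (illustrative only).
    def __init__(self):
        self.capturing = False
        self.angle = 0

    def start(self):
        self.capturing = True

    def stop(self):
        self.capturing = False
        self.angle = 0  # rotate back to the 0 position when stopping

def control_step(camera, trigger_met, scene,
                 preset_scenes=("at_home/non_guest_meeting",)):
    # Trigger satisfied and scene preset -> start capture;
    # otherwise stop capture if it is currently running.
    if trigger_met and scene in preset_scenes:
        if not camera.capturing:
            camera.start()
    elif camera.capturing:
        camera.stop()
```

Calling `control_step` on each monitoring event reproduces the behavior above: capture starts only in a preset scene and stops, with the camera returned to the 0 position, as soon as the scene switches to the guest-meeting mode or the trigger is lost.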
The smart home platform 21 analyzes the validity of the stored video content; if the proportion of the target in the captured audio and video is insufficient, or occlusion is severe, it judges the recording invalid and can delete it automatically. When the stored audio and video reach a certain threshold in size, quantity or duration, the user can be reminded to screen them, and links to the content needing screening are synchronized to the user for browsing and operation.
In addition, when a newly stored audio and video file meets a certain condition, an event is triggered to remind the user to screen the data. The finally stored data are the data the user wants to keep, and the user can browse and share them at any time.
Utilize above-mentioned wisdom family system after, at first, the position to corner can be turned to in the acquiescence of cloud platform camera 22, avoids visitor's the awkward of feeling monitored like this in the time, because the perception ability who has combined profile recognition and smart machine among the smart home in addition, just can take a candid photograph when the scene that child appears. Then the wisdom family can carry out validity to the video of taking a candid photograph and differentiate, if not conform to the condition can automatic delete, provides accurate effectual content as far as possible like this and carries out last screening for the user, promotes the user and uses experience.
In this example, the characteristics of the pan-tilt camera 22 are fully utilized to capture a target object, and the sensing capability of the intelligent devices in the smart home platform is fused in to understand the user's intention, so that the content the user wants to capture is shot more accurately. The effectiveness of the captured content is analyzed intelligently, and once a certain amount of captured audio/video has accumulated, the user is reminded to filter what is stored, providing better audio/video capture and storage services.
This example combines the characteristics of the pan-tilt camera with the smart home's understanding of the user's intention, so that the captured audio/video is more accurate and effective and the user is helped to record beautiful moments at home. The smart home's understanding of user intention is fully exploited to capture the content the user cares about while protecting the user's privacy as far as possible; after capturing, the content is checked, and invalid audio/video is deleted automatically, reducing redundant storage of invalid data. Finally, intelligent interaction is provided to screen the audio/video with the user, ensuring satisfaction with what is stored.
The embodiment of the application provides a snapshot method applied to a server, where the server has communication connections with at least two electronic devices disposed in a home. The method includes: when it is determined that the snapshot object is in the monitoring area of a camera in the electronic devices, determining the current scene of the snapshot object according to the operation information of the electronic devices; and when the current scene is determined to be a preset scene, controlling the camera to capture the snapshot object to obtain and store audio and video data of the snapshot object. That is to say, in this embodiment, when the server determines that the snapshot object is in the monitoring area of the camera, it does not directly start capturing; it further determines the current scene according to the operation information of the electronic devices, and only when that scene is the preset scene does it control the camera to capture the snapshot object and obtain its audio and video data.
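The two-stage decision described above (area check first, scene check second) can be sketched as follows; `maybe_capture` and its arguments are illustrative names, not identifiers from this application.

```python
# Minimal sketch of the two-stage snapshot decision: the server starts
# capturing only when the snapshot object is inside the monitoring area
# AND the current scene matches the preset scene. Names are illustrative.

PRESET_SCENE = "home/non-guest-meeting"

def maybe_capture(in_monitoring_area, current_scene, capture):
    if not in_monitoring_area:
        return None              # target not in view: do nothing
    if current_scene != PRESET_SCENE:
        return None              # e.g. away mode or guest-meeting mode
    return capture()             # start the snapshot; caller stores the clip

clip = maybe_capture(True, PRESET_SCENE, lambda: "clip-001")
```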
Example two
Fig. 4 is a schematic structural diagram of a server according to an embodiment of the present application. As shown in Fig. 4, an embodiment of the present application provides a server, where the server has communication connections with at least two electronic devices disposed in a home, and the server includes: a determining module 41 and a snapshot module 42; wherein,
the determining module 41 is configured to determine, according to the operation information of the electronic device, a current scene where the snapshot object is located when the snapshot object is determined to be located in a monitoring area of a camera in the electronic device;
and the snapshot module 42 is configured to control the camera to snapshot the snapshot object when the current scene is determined to be the preset scene, so as to obtain and store audio and video data of the snapshot object.
Optionally, the determining module 41 determines that the snapshot object is in the monitoring area of the camera in the electronic device, including:
determining the position of a wearable device of the electronic device according to the signal strength of the wearable device, wherein the snapshot object wears the wearable device;
when the position of the wearable device is within the range of the monitoring area, it is determined that the snapshot object is in the monitoring area of the camera in the electronic device.
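One way to realize the signal-strength positioning above is a log-distance path-loss estimate; the model, the constants, and all names below are assumptions for illustration and are not specified by this application.

```python
# Hypothetical sketch: infer the wearable's distance from received signal
# strength (RSSI) with a log-distance path-loss model, then compare it
# against the monitoring-area radius. All constants are illustrative.

def rssi_to_distance_m(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Estimated distance in meters; tx_power_dbm is the RSSI at 1 m."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def wearable_in_monitoring_area(rssi_dbm, area_radius_m=5.0):
    return rssi_to_distance_m(rssi_dbm) <= area_radius_m
```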
Optionally, the determining module 41 determines that the snapshot object is in the monitoring area of the camera in the electronic device, including:
when an object is detected by the sensing equipment in the electronic equipment, starting the camera;
when the picture shot by the camera contains the snapshot object, determining that the snapshot object is in the monitoring area of the camera in the electronic equipment.
Optionally, the determining module 41 determining, according to the operation information of the electronic device, the current scene where the snapshot object is located includes:
acquiring family access information from the access control equipment in the electronic device;
when the family access information indicates that no one is at home, determining that the current scene is in the away mode;
when the family access information indicates that someone is at home, determining that the current scene is in the home mode.
Optionally, when the family access information indicates that someone is at home, the determining module 41 determining that the current scene is in the home mode includes:
when the family access information indicates that someone is at home and that access records of non-family members exist, acquiring the operation information of the security equipment from the security equipment of the electronic device;
when the operation information of the security equipment contains no alarm information and the operation information of the access control equipment contains no alarm information, determining that the current scene is in the guest-meeting mode of the home mode;
and when the family access information indicates that someone is at home and that all access records belong to family members, determining that the current scene is in the non-guest-meeting mode of the home mode.
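The mode decision listed above can be condensed into one function; the inputs are boolean summaries of the access-control and security information, and the names as well as the alarm fallback branch are assumptions.

```python
# Sketch of the scene decision: away mode when nobody is home; guest-meeting
# mode when a non-family member entered and no alarms fired; non-guest-meeting
# mode when only family members are home. The alarm branch is an assumption,
# since the text does not specify the behavior when an alarm is present.

def determine_scene(someone_home, non_family_entered,
                    security_alarm=False, door_alarm=False):
    if not someone_home:
        return "away"
    if non_family_entered:
        if not security_alarm and not door_alarm:
            return "home/guest-meeting"
        return "home/alarm"      # not specified by the text
    return "home/non-guest-meeting"
```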
Optionally, the snapshot module 42 controlling the camera to capture the snapshot object to obtain the audio and video data of the snapshot object when the current scene is determined to be the preset scene includes:
when the current scene is determined to be in the non-guest-meeting mode of the home mode, controlling the camera to capture the snapshot object to obtain the audio and video data of the snapshot object.
Optionally, the server is further configured to:
determining invalid audio and video data from the audio and video data of the snapshot object;
and deleting invalid audio and video data from the audio and video data of the snapshot object.
Optionally, the server determines invalid audio/video data from the audio/video data of the snapshot object, including:
determining the contrast of each frame of image in each audio and video data of a snapshot object;
and determining the ratio of the number of image frames with the contrast ratio smaller than or equal to a first preset threshold to the total number of image frames of the corresponding audio and video data, and determining the corresponding audio and video data as invalid audio and video data when the ratio of the number of image frames to the total number of image frames of the corresponding audio and video data is smaller than a second preset threshold.
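The contrast rule can be sketched literally as stated: count the frames whose contrast is at or below the first threshold, and flag the clip when that share of all frames is below the second threshold. The simple max-min grayscale spread used as the per-frame contrast and both default threshold values are assumptions.

```python
# Sketch of the contrast-based validity check, following the rule exactly
# as stated in the text above. The max-min contrast proxy and the default
# thresholds are illustrative assumptions.

def frame_contrast(gray_pixels):
    """Crude contrast proxy: spread of grayscale values in one frame."""
    return max(gray_pixels) - min(gray_pixels)

def is_invalid_by_contrast(frames, first_thresh=30, second_thresh=0.5):
    low = sum(1 for f in frames if frame_contrast(f) <= first_thresh)
    return (low / len(frames)) < second_thresh
```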
Optionally, the server determines invalid audio/video data from the audio/video data of the snapshot object, including:
carrying out face recognition on each frame of image in each audio and video data of the snapshot object;
and determining the ratio of the number of the image frames of the identified face region to the total number of the image frames of the corresponding audio and video data, and determining the corresponding audio and video data as invalid audio and video data when the ratio of the number of the image frames of the identified face region to the total number of the image frames of the corresponding audio and video data is less than or equal to a third preset threshold.
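The face-ratio rule admits a similarly small sketch; `detect_face` stands in for a real face detector, and the default third threshold is an assumption.

```python
# Sketch of the face-ratio validity check: a clip is invalid when the share
# of frames containing a recognized face is at or below the third threshold.
# `detect_face` is a placeholder for a real detector; the default threshold
# is an illustrative assumption.

def is_invalid_by_face_ratio(frames, detect_face, third_thresh=0.3):
    with_face = sum(1 for f in frames if detect_face(f))
    return (with_face / len(frames)) <= third_thresh
```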
In practical applications, the determining module 41 and the snapshot module 42 may be implemented by a processor located on the server, specifically by a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 5 is a schematic structural diagram of a server according to an embodiment of the present application, and as shown in fig. 5, an embodiment of the present application provides a server 500, including:
a processor 51 and a storage medium 52 storing instructions executable by the processor 51, where the storage medium 52 communicates with the processor 51 through a communication bus 53; when the instructions are executed by the processor 51, the snapshot method of the first embodiment is performed.
It should be noted that, in practical applications, the various components in the server are coupled together by a communication bus 53. It can be understood that the communication bus 53 is used to enable communication among these connected components. In addition to a data bus, the communication bus 53 includes a power bus, a control bus, and a status signal bus. However, for clarity of illustration, the various buses are all labeled in Fig. 5 as the communication bus 53.
The embodiment of the application provides a computer storage medium, which stores executable instructions, and when the executable instructions are executed by one or more processors, the processors execute the snapshot method described in the first embodiment.
The computer-readable storage medium may be a ferromagnetic Random Access Memory (FRAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM).
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application.

Claims (10)

1. A snapshot method, applied to a server having communication connections with at least two electronic devices disposed in a home, the method comprising:
when the snapshot object is determined to be in the monitoring area of the camera in the electronic equipment, determining the current scene of the snapshot object according to the operation information of the electronic equipment;
when the current scene is determined to be a preset scene, controlling the camera to capture the snapshot object, and obtaining and storing audio and video data of the snapshot object;
wherein the preset scene comprises: a non-guest-meeting mode in a home mode; and the determining the current scene where the snapshot object is located according to the operation information of the electronic devices comprises:
acquiring family access information from access control equipment in the electronic equipment;
when the family access information indicates that someone is at home and that all access records belong to family members, determining that the current scene is in the non-guest-meeting mode of the home mode;
the method further comprises the following steps:
determining invalid audio and video data from the audio and video data of the snapshot object;
and deleting the invalid audio and video data from the audio and video data of the snapshot object.
2. The method of claim 1, wherein determining that the snap-shot object is in a monitoring area of a camera in the electronic device comprises:
determining a location of a wearable device of the electronic device according to a signal strength of the wearable device; wherein the snap-shot subject is wearing the wearable device;
when the position of the wearable device is within the range of the monitoring area, it is determined that the snapshot object is in the monitoring area of a camera in the electronic device.
3. The method of claim 1, wherein determining that the snap-shot object is in a monitoring area of a camera in the electronic device comprises:
when an object is detected through sensing equipment in the electronic equipment, starting the camera;
and when the picture shot by the camera contains the snapshot object, determining that the snapshot object is in the monitoring area of the camera in the electronic equipment.
4. The method according to any one of claims 1 to 3, wherein the determining the current scene where the snapshot object is located according to the operation information of the electronic device comprises:
acquiring family access information from access control equipment in the electronic equipment;
when the family access information indicates that no one is at home, determining that the current scene is in an away mode;
when the family access information indicates that someone is at home, determining that the current scene is in a home mode.
5. The method of claim 4, wherein determining that the current scene is in the at-home mode when the home entry information indicates that a person is present in the home comprises:
when the family access information indicates that someone is at home and that access records of non-family members exist, acquiring operation information of the security equipment from the security equipment of the electronic devices;
and when the operation information of the security equipment contains no alarm information and the operation information of the access control equipment contains no alarm information, determining that the current scene is in a home mode.
6. The method according to claim 1, wherein the determining of invalid audio-video data from the audio-video data of the snap-shot object comprises:
determining the contrast of each frame of image in each piece of audio and video data of the snapshot object;
and determining the ratio of the number of image frames with the contrast ratio smaller than or equal to a first preset threshold to the total number of image frames of the corresponding audio and video data, and determining the corresponding audio and video data as the invalid audio and video data when the ratio of the number of image frames to the total number of image frames of the corresponding audio and video data is smaller than a second preset threshold.
7. The method according to claim 1, wherein the determining of invalid audio-video data from the audio-video data of the snap-shot object comprises:
carrying out face recognition on each frame of image in each audio and video data of the snapshot object;
and determining the ratio of the number of the image frames of the identified face region to the total number of the image frames of the corresponding audio and video data, and determining the corresponding audio and video data as the invalid audio and video data when the ratio of the number of the image frames of the identified face region to the total number of the image frames of the corresponding audio and video data is less than or equal to a third preset threshold.
8. A server having communication connections with at least two electronic devices disposed in a home, comprising:
the determining module is used for determining a current scene where the snapshot object is located according to the running information of the electronic equipment when the snapshot object is determined to be located in the monitoring area of the camera in the electronic equipment;
the snapshot module is used for controlling the camera to snapshot the snapshot object when the current scene is determined to be a preset scene, so as to obtain and store audio and video data of the snapshot object; wherein the preset scene comprises: a non-guest-meeting mode in a home mode;
the determining module is further configured to: acquire family access information from access control equipment in the electronic devices; and when the family access information indicates that someone is at home and that all access records belong to family members, determine that the current scene is in the non-guest-meeting mode of the home mode;
the snapshot module is also used for determining invalid audio and video data from the audio and video data of the snapshot object; and deleting the invalid audio and video data from the audio and video data of the snapshot object.
9. A server, characterized in that the server comprises: a processor and a storage medium storing processor-executable instructions, the storage medium communicating with the processor through a communication bus, wherein the instructions, when executed by the processor, perform the method of any one of claims 1 to 7.
10. A computer storage medium having stored thereon executable instructions which, when executed by one or more processors, perform the snapshot method of any one of claims 1 to 7.
CN202110146060.4A 2021-02-02 2021-02-02 Snapshot method, server and computer storage medium Active CN112911154B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110146060.4A CN112911154B (en) 2021-02-02 2021-02-02 Snapshot method, server and computer storage medium


Publications (2)

Publication Number Publication Date
CN112911154A CN112911154A (en) 2021-06-04
CN112911154B true CN112911154B (en) 2022-10-18

Family

ID=76122576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110146060.4A Active CN112911154B (en) 2021-02-02 2021-02-02 Snapshot method, server and computer storage medium

Country Status (1)

Country Link
CN (1) CN112911154B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114040105A (en) * 2021-11-17 2022-02-11 北京市商汤科技开发有限公司 Video quick snapshot method, system, face recognition module and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104811595A (en) * 2015-04-08 2015-07-29 合肥君正科技有限公司 Network monitor camera and working method thereof
CN106288212A (en) * 2016-08-22 2017-01-04 珠海格力电器股份有限公司 Household safe means of defence and device and air-conditioner
CN106331586A (en) * 2015-06-16 2017-01-11 杭州萤石网络有限公司 Smart household video monitoring method and system
CN106610591A (en) * 2015-10-20 2017-05-03 刘国梁 Wireless smart home security system
CN110430393A (en) * 2019-07-13 2019-11-08 恒大智慧科技有限公司 A kind of monitoring method and system
CN111240217A (en) * 2020-01-08 2020-06-05 深圳绿米联创科技有限公司 State detection method and device, electronic equipment and storage medium
CN111552189A (en) * 2020-04-20 2020-08-18 星络智能科技有限公司 Method for starting scene mode, intelligent home controller and storage medium



Similar Documents

Publication Publication Date Title
US10750131B1 (en) Adjustable movement detection doorbell
US11854356B1 (en) Configurable motion detection and alerts for audio/video recording and communication devices
US7676145B2 (en) Camera configurable for autonomous self-learning operation
CN104777749B (en) Window control method, apparatus and system
US7817914B2 (en) Camera configurable for autonomous operation
US11184529B2 (en) Smart recording system
CN104135642A (en) Intelligent monitoring method and relevant equipment
CN105794191A (en) Recognition data transmission device
CN105282490A (en) Novel empty nester smart home interaction system and method
US20200035086A1 (en) Communicating with Law Enforcement Agencies Using Client Devices That Are Associated with Audio/Video Recording and Communication Devices
CN110543102A (en) method and device for controlling intelligent household equipment and computer storage medium
US20180025229A1 (en) Method, Apparatus, and Storage Medium for Detecting and Outputting Image
KR20160032004A (en) Security and/or monitoring devices and systems
WO2007067335A1 (en) Automatic capture modes
US9288452B2 (en) Apparatus for controlling image capturing device and shutter
CN110933478A (en) Security protection system
WO2023280273A1 (en) Control method and system
CN112911154B (en) Snapshot method, server and computer storage medium
EP4080467A1 (en) Electric monitoring system using video notification
CN111860218A (en) Early warning method and device, storage medium and electronic device
CN104378596B (en) A kind of method and device carrying out distance communicating with picture pick-up device
KR20190085376A (en) Aapparatus of processing image and method of providing image thereof
KR101967430B1 (en) Smart Digital Door lock and Its Control Method
CN111353454A (en) Data processing method and device and electronic equipment
CN113327400B (en) Fire hidden danger monitoring method, device and system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant