CN109035686B - Loss prevention alarm method and device - Google Patents


Info

Publication number
CN109035686B
Authority
CN
China
Prior art keywords
image
target
candidate
person
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810753310.9A
Other languages
Chinese (zh)
Other versions
CN109035686A (en)
Inventor
丁杰
毛亮
章成锋
贾钧翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN201810753310.9A
Publication of CN109035686A
Application granted
Publication of CN109035686B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/0202 Child monitoring systems using a transmitter-receiver system carried by the parent and the child
    • G08B21/0205 Specific application combined with child monitoring using a transmitter-receiver system
    • G08B21/0208 Combination with audio or video communication, e.g. combination with "baby phone" function
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/0202 Child monitoring systems using a transmitter-receiver system carried by the parent and the child
    • G08B21/0261 System arrangements wherein the object is to detect trespassing over a fixed physical boundary, e.g. the end of a garden

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Child & Adolescent Psychology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a loss prevention alarm method and a loss prevention alarm device, wherein the method comprises the following steps: acquiring a target feature corresponding to a target person image; acquiring a real-time monitoring image corresponding to a target monitoring device, and identifying person images in the real-time monitoring image to obtain a person image set; determining a candidate person image subset from the person image set according to a preset person candidate feature; determining whether the candidate person image subset contains the target person image according to the target feature; and if the candidate person image subset does not contain the target person image, sending alarm prompt information. Identity recognition can be carried out through age and face recognition to judge whether the person under guardianship is within a safe area, and thus whether the person under guardianship is safe, which effectively improves the accuracy of identity recognition.

Description

Loss prevention alarm method and device
Technical Field
The embodiment of the invention relates to the technical field of mobile terminals, in particular to a loss prevention alarm method and device.
Background
With the rapid development of mobile terminals, their functions are becoming increasingly powerful. In practical application, a mobile terminal can be used to supervise children so as to prevent them from getting lost.
In the prior art, patent CN201510437405.6 uses GPS (Global Positioning System)/LBS (Location Based Service) positioning to determine whether a child is in a safe area. If not, information is sent to the parents; face recognition technology is then used to judge whether the child's expression is abnormal, and if so, the child is determined to be in a dangerous state.
However, GPS/LBS positioning requires hardware support, which raises cost and offers limited accuracy; in addition, factors such as age and illumination may lead to low face recognition accuracy.
Disclosure of Invention
The invention provides a loss prevention alarm method and device, which aim to solve the problem of preventing children from getting lost.
According to a first aspect of the present invention, there is provided a loss prevention alarm method, the method comprising:
acquiring a target feature corresponding to a target person image;
acquiring a real-time monitoring image corresponding to a target monitoring device, and identifying person images in the real-time monitoring image to obtain a person image set;
determining a candidate person image subset from the person image set according to a preset person candidate feature;
determining whether the candidate person image subset contains the target person image according to the target feature;
and if the candidate person image subset does not contain the target person image, sending alarm prompt information.
According to a second aspect of the present invention, there is provided a loss prevention alarm device comprising:
the information acquisition module is used for acquiring a target feature corresponding to a target person image;
the person image identification module is used for acquiring a real-time monitoring image corresponding to the target monitoring device, and identifying person images in the real-time monitoring image to obtain a person image set;
the candidate person image determining module is used for determining a candidate person image subset from the person image set according to a preset person candidate feature;
the target person image determination module is used for determining whether the candidate person image subset contains the target person image according to the target feature;
and the first alarm prompting module is used for sending alarm prompt information if the candidate person image subset does not contain the target person image.
According to a third aspect of the present invention, there is provided an electronic apparatus comprising:
a processor, a memory and a computer program stored on the memory and executable on the processor, the processor implementing the aforementioned loss prevention alarm method when executing the program.
According to a fourth aspect of the present invention, there is provided a readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the aforementioned loss prevention alarm method.
The embodiment of the invention provides a loss prevention alarm method and device, wherein the method comprises the following steps: acquiring a target feature corresponding to a target person image; acquiring a real-time monitoring image corresponding to a target monitoring device, and identifying person images in the real-time monitoring image to obtain a person image set; determining a candidate person image subset from the person image set according to a preset person candidate feature; determining whether the candidate person image subset contains the target person image according to the target feature; and if the candidate person image subset does not contain the target person image, sending alarm prompt information. Identity recognition can be carried out through age and face recognition to judge whether the person under guardianship is within a safe area, and thus whether the person under guardianship is safe, which effectively improves the accuracy of identity recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
Fig. 1 is a flowchart illustrating the specific steps of a loss prevention alarm method according to an embodiment of the present invention;
Fig. 2 is a flowchart illustrating the specific steps of a loss prevention alarm method according to a second embodiment of the present invention;
Fig. 3 is a structural diagram of a loss prevention alarm device according to a third embodiment of the present invention;
Fig. 4 is a structural diagram of a loss prevention alarm device according to a fourth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, a flowchart illustrating the specific steps of a loss prevention alarm method according to an embodiment of the present invention is shown.
Step 101, acquiring a target feature corresponding to a target person image.
The target person image is an image of the monitored target person. In the embodiment of the invention, the target person image is an image of the monitored child, but it may also be of another person who needs monitoring, such as a monitored elderly person. In addition, the target person image may be selected by the guardian.
The target features may include facial features.
The facial features can be extracted from the target person image; the embodiment of the present invention does not limit the extraction algorithm.
Further, the target features may also include, but are not limited to: one or more of height, garment color, garment type, age. The height, the color of clothes, the type of clothes, the age, etc. may be extracted from the image of the target person, or may be set at the same time when the target person is specified, and stored in the system database.
It will be appreciated that the above process can be performed by the guardian on his or her own mobile terminal, or on a device provided at the site. For example, Example 1: the guardian downloads a monitoring application on a mobile terminal, opens the settings interface of the monitoring application, enters information such as the height, age, clothing type and clothing color of the monitored child, and obtains the facial features by scanning a facial image on the facial feature recognition interface. Example 2: the guardian makes the selection on equipment provided at the site and enters the relevant information.
Of course, the height, age, clothing type and color, and facial features described above can be modified later.
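The following minimal Python sketch illustrates how such a target-feature record could be assembled. It is only an illustration: the open-source face_recognition library stands in for the unspecified facial-feature extraction algorithm, and the attribute values are assumed to be entered by the guardian.

```python
# Illustrative sketch only; the embodiment does not prescribe a particular face model.
import face_recognition

def build_target_features(photo_path, height_cm, clothes_color, clothes_type, age):
    """Assemble the target-feature record for the monitored person."""
    image = face_recognition.load_image_file(photo_path)  # photo taken or chosen by the guardian
    encodings = face_recognition.face_encodings(image)    # 128-dimensional face embeddings
    if not encodings:
        raise ValueError("no face found in the target person image")
    return {
        "face": encodings[0],            # facial feature
        "height_cm": height_cm,          # set by the guardian (or estimated from the image)
        "clothes_color": clothes_color,  # e.g. "red"
        "clothes_type": clothes_type,    # e.g. "dress"
        "age": age,                      # e.g. 6
    }

# Example usage with values supplied in the monitoring application:
# target = build_target_features("child.jpg", 120, "red", "dress", 6)
```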
Step 102, acquiring a real-time monitoring image corresponding to a target monitoring device, and identifying a person image in the real-time monitoring image to obtain a person image set.
The target monitoring device can be specified by the guardian. In practical application, the cameras near a position may be displayed for selection according to a position chosen by the user or the user's current located position, or the device may be chosen from a list of all monitoring devices; the embodiment of the invention does not limit how the target monitoring device is selected. It is understood that the target monitoring device is the monitoring device the user cares about, i.e. the monitoring device that monitors the safe area of the guarded child.
Real-time monitoring is realized by periodically acquiring monitoring images. It can be understood that the shorter the acquisition period, the more monitoring images are acquired and the more promptly it can be determined whether the child is safe, but the higher the image processing complexity; the longer the period, the fewer images are acquired and the less promptly the child's safety can be determined, but the lower the processing complexity. Therefore, a reasonable period needs to be set according to the actual application scene, so that the child's safety can be determined in time while keeping the processing complexity as low as possible.
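As a rough illustration of this trade-off, a polling loop might look like the following sketch; the period value and the frame-grabbing interface are assumptions, since the embodiment does not fix either of them.

```python
import time

POLL_PERIOD_S = 2.0  # assumed period: shorter = more timely detection, more processing

def monitor(camera, process_frame):
    """Periodically pull a monitoring image and hand it to the recognition pipeline."""
    while True:
        frame = camera.read()   # hypothetical frame-grabbing call on the monitoring device
        process_frame(frame)    # detect persons, build the person image set, match, alarm ...
        time.sleep(POLL_PERIOD_S)
```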
Specifically, the person image set may be obtained by a face recognition technique.
In the embodiment of the invention, children can be monitored by means of public monitoring equipment, so that they are prevented from getting lost. In practical application, the image data captured by the public monitoring equipment can be imported into a third-party database, or can be read directly from the original database of the public monitoring equipment.
It can be understood that the embodiment of the present invention involves a monitoring application, monitoring devices, a third-party database corresponding to the monitoring application, and a mobile terminal that supports running the monitoring application. Specifically, the user first specifies the monitored child's information and the target monitoring device through the monitoring application on the mobile terminal; the monitoring application then judges in real time, from the monitoring data of the target monitoring device in the third-party database, whether the child is in a safe state; if the child is in a dangerous state, alarm prompt information is sent to the guardian through the mobile terminal or through a target terminal set by the mobile terminal.
Step 103, determining a candidate person image subset from the person image set according to a preset person candidate feature.
In the embodiment of the invention, in order to reduce computational complexity and other sources of interference with the identification, person images that do not meet the conditions are removed from the person image set before identification. For example, images of persons whose age class is not that of a child are excluded, in which case the candidate feature is a feature that determines age. Of course, person images that clearly fail some other specified condition may also be eliminated, in which case the candidate feature is a feature that determines that condition.
Specifically, for each person image in the person image set, a person candidate feature of that person image is first extracted as a reference candidate feature; the reference candidate feature is then compared with the preset person candidate feature. If they are consistent, the person image is taken as a candidate person image; if not, the person image is not a candidate person image.
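The following sketch makes the filtering concrete, assuming the candidate feature is a predicted age class and that "consistent" simply means equality; both assumptions are illustrative rather than requirements of the embodiment.

```python
def select_candidates(person_images, extract_candidate_feature, preset_candidate_feature):
    """Keep only the person images whose reference candidate feature matches the preset one."""
    candidates = []
    for person_image in person_images:
        reference = extract_candidate_feature(person_image)  # e.g. predicted age class
        if reference == preset_candidate_feature:            # e.g. "minor"
            candidates.append(person_image)
    return candidates
```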
Step 104, determining whether the candidate person image subset contains the target person image according to the target feature.
Specifically, for each candidate person image in the candidate person image subset, its reference feature is first extracted and then compared with the target feature. If they are consistent, the candidate person image subset contains the target person image and the remaining candidate person images need not be examined; if not, the reference feature of the next candidate person image is compared with the target feature.
It can be understood that if the reference features of all person images in the candidate person image subset are inconsistent with the target feature, the candidate person image subset does not contain the target person image; if the reference feature of at least one candidate person image is consistent with the target feature, the candidate person image subset contains the target person image.
In practical application, if the target feature includes height, heights are regarded as consistent when their difference lies within a small range, so that measurement error does not cause an inaccurate judgment. In addition, to judge more accurately, the candidate person image subset is determined to contain the target person image only when every reference feature is consistent with its corresponding target feature; otherwise it is determined not to contain it. For example, only if the height, age, clothing type and color, and facial features are all consistent is the candidate person image subset determined to contain the target person image; if any one item is inconsistent, it is not.
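The following sketch shows one way such a matching rule could be written, assuming a 128-dimensional face embedding compared by Euclidean distance; the distance threshold and height tolerance are illustrative values, not values given by the embodiment.

```python
import numpy as np

FACE_DISTANCE_THRESHOLD = 0.6  # assumed threshold for 128-d face embeddings
HEIGHT_TOLERANCE_CM = 5        # assumed "small range" for height differences

def matches_target(candidate, target):
    """Every reference feature must be consistent with its corresponding target feature."""
    face_ok = np.linalg.norm(candidate["face"] - target["face"]) < FACE_DISTANCE_THRESHOLD
    height_ok = abs(candidate["height_cm"] - target["height_cm"]) <= HEIGHT_TOLERANCE_CM
    clothes_ok = (candidate["clothes_color"] == target["clothes_color"]
                  and candidate["clothes_type"] == target["clothes_type"])
    age_ok = candidate["age"] == target["age"]
    return face_ok and height_ok and clothes_ok and age_ok

def subset_contains_target(candidates, target):
    # Stop at the first consistent candidate, as described above.
    return any(matches_target(c, target) for c in candidates)
```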
Step 105, if the candidate person image subset does not contain the target person image, sending alarm prompt information.
In practical application, if the candidate person image subset does not contain the target person image, the monitored child is not in the designated area, so an alarm prompt is needed; if the candidate person image subset does contain the target person image, the guarded child is in the designated area and no alarm prompt is required.
Wherein, the alarm prompt information includes but is not limited to: text alarm, sound alarm, vibration alarm, etc. The embodiment of the invention does not limit the specific mode of the alarm prompt information.
In practical applications, if the candidate person image subset at a subsequent time contains the target person image, sending of the alarm prompt information is stopped.
In summary, an embodiment of the present invention provides a loss prevention alarm method, the method comprising: acquiring a target feature corresponding to a target person image; acquiring a real-time monitoring image corresponding to a target monitoring device, and identifying person images in the real-time monitoring image to obtain a person image set; determining a candidate person image subset from the person image set according to a preset person candidate feature; determining whether the candidate person image subset contains the target person image according to the target feature; and if the candidate person image subset does not contain the target person image, sending alarm prompt information. Identity recognition can be carried out through age and face recognition to judge whether the person under guardianship is within a safe area, and thus whether the person under guardianship is safe, which effectively improves the accuracy of identity recognition.
Example two
Referring to fig. 2, a flowchart illustrating the specific steps of a loss prevention alarm method according to a second embodiment of the present invention is shown.
Step 201, obtaining a monitoring image corresponding to a target monitoring device, and classifying the people in the monitoring image to obtain an initial person image set.
The target monitoring device may refer to the detailed description of step 102, which is not described herein again.
Specifically, classifying the people in the monitoring image means identifying the person images in the monitoring image and grouping the images of the same person, taken from different angles and in different states, under that person. The classification algorithms involved are mature and are not described in detail in the embodiments of the present invention.
Optionally, in another embodiment of the present invention, step 201 includes sub-steps 2011:
in the sub-step 2011, people in the monitored image are classified by an unsupervised clustering method.
The classification algorithm may use an unsupervised clustering method, which clusters quickly and does not require the data to be labeled in advance.
Step 202, receiving a target person image selected from the set of initial person images and extracting a target feature from the target person image.
According to the embodiment of the invention, each person image in the initial person image set can be displayed on the mobile terminal, so that the guardian can select one of them as the target person image.
Optionally, in another embodiment of the present invention, the target feature includes not only a human face feature, but also a height feature and/or a clothes color feature.
It will be appreciated that the extraction algorithm depends on the specific form of the target feature. For example, when the target feature is height, the positions of the head and feet of the person are identified in the target person image and a reference scale is determined, so that the person's height can be calculated; when the target feature is age, it can be determined jointly from the height and the facial features; recognizing clothing type and color is at present a relatively mature technology and is not described here again.
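As an illustration of the height calculation, the sketch below assumes the head-top and foot pixel rows have already been located and that a reference scale (centimetres per pixel, obtained from a known reference object or camera calibration) is available; none of these interfaces are specified by the embodiment.

```python
def estimate_height_cm(head_y, foot_y, cm_per_pixel):
    """Estimate a person's height from head/foot pixel rows and a reference scale."""
    pixel_height = abs(foot_y - head_y)   # height of the person in pixels
    return pixel_height * cm_per_pixel    # convert to centimetres via the reference scale

# Example: head at row 80, feet at row 560, scale 0.25 cm per pixel -> 120 cm
# estimate_height_cm(80, 560, 0.25)
```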
It is understood that steps 201 to 202 constitute the process in which the guardian specifies a target person image and the target feature is extracted.
Step 203, acquiring a confirm-target-person request sent by the mobile terminal. In practical application, the guardian can also upload an image from the mobile terminal as the target person image; the guardian may photograph the target person or select an image from the photo album to upload.
It is to be understood that the confirm-target-person request carries the uploaded target person image.
Step 204, obtaining the target feature of the target person image carried by the confirm-target-person request.
The algorithm for extracting the target feature may refer to the detailed description in step 101, and is not described herein again.
It should be noted that steps 201 to 202 and steps 203 to 204 are two methods for obtaining the target feature of the target person image, respectively, and in practical applications, one of the two methods may be selected.
Step 205, acquiring a real-time monitoring image corresponding to the target monitoring device, and identifying a person image in the real-time monitoring image to obtain a person image set.
This step can refer to the detailed description of step 102, and is not described herein again.
Step 206, for each person image in the person image set, determining feature points of the person image, and extracting pixel values of the feature points to obtain a first candidate feature.
The pixel value can be represented in the RGB (Red, Green, Blue) three-channel color model. Of course, other formats can also be used, for example CMYK (Cyan, Magenta, Yellow, Black); the embodiment of the present invention does not limit the color model used to represent pixel values.
In practical applications, in order to reduce computational complexity, the person image may first be transformed to reduce its dimensions, and the pixel values extracted afterwards. For example, a 512 × 128 image is first downscaled to 256 × 64 and the pixel values are then extracted, so that the number of pixels, and hence the computational complexity, is effectively reduced.
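The sketch below illustrates this first candidate feature with OpenCV: downscale the person image, then read the pixel values at a few landmark coordinates. The landmark positions are assumed to come from a separate feature-point detector.

```python
import cv2

def first_candidate_feature(person_image, landmarks):
    """Downscale the person image, then read RGB pixel values at each feature point."""
    h, w = person_image.shape[:2]
    small = cv2.resize(person_image, (w // 2, h // 2))  # e.g. 512x128 -> 256x64
    feature = []
    for (x, y) in landmarks:          # feature-point coordinates in the downscaled image
        b, g, r = small[y, x]         # OpenCV stores color pixels in BGR order
        feature.extend([int(r), int(g), int(b)])
    return feature
```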
Step 207, for each person image in the person image set, calculating the cosine values of the angles formed by every two feature points to obtain a second candidate feature.
Wherein, the feature points include but are not limited to: eyes, nose, ears, mouth, eyebrows, facial contours.
Algorithms for locating these facial landmarks are quite mature, and the embodiment of the invention does not limit which algorithm is adopted.
Specifically, for any two feature pixel points P_i and P_j, the corresponding second candidate feature SV_{i,j} is calculated as follows:
SV_{i,j} = cos(θ_{i,j})    (1)
where θ_{i,j} is the angle between the lines connecting the feature pixel points P_i and P_j, respectively, to a reference point.
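A sketch of formula (1) follows, interpreting θ_{i,j} as the angle at a reference point (for example the nose tip) between the lines from that point to P_i and to P_j; the choice of reference point is an assumption, since the embodiment does not name one.

```python
import numpy as np

def second_candidate_feature(p_i, p_j, reference):
    """Cosine of the angle between the lines reference->p_i and reference->p_j (formula (1))."""
    v_i = np.asarray(p_i, dtype=float) - np.asarray(reference, dtype=float)
    v_j = np.asarray(p_j, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.dot(v_i, v_j) / (np.linalg.norm(v_i) * np.linalg.norm(v_j)))

# Example: two eye corners relative to the nose tip
# sv = second_candidate_feature((30, 40), (70, 42), reference=(50, 70))
```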
Step 208, for each person image in the person image set, extracting local binarization information of the person image to obtain a third candidate feature.
Binarization divides the brightness value of each pixel of the image into white or black according to a threshold, that is, sets the brightness value to 0 or 255. For example, brightness values from 0 to 126 are set to 0, and values from 127 to 255 are set to 255.
It is understood that the algorithm for extracting the local binarization information may be a pixel value method, a mean value method, a histogram method, or the like. The pixel value method sets pixels with values from 0 to 126 to 0 (black) and pixels with values from 127 to 255 to 255 (white). The mean value method first calculates the mean pixel value over all pixels of the image, then sets pixels whose value is less than or equal to the mean to 0 and pixels whose value is greater than the mean to 255. The histogram method first finds the two most frequent pixel values in the image, takes the least frequent pixel value between them as the threshold, and then sets pixels with values less than or equal to the threshold to 0 and pixels with values greater than the threshold to 255.
It can be understood that each pixel of the person image thus corresponds to a binarized value.
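The following NumPy sketch implements the mean value variant; the pixel value and histogram variants differ only in how the threshold is chosen.

```python
import numpy as np

def binarize_mean(gray):
    """Local binarization by the mean value method (third candidate feature helper)."""
    threshold = gray.mean()                        # average brightness over the image
    binary = np.where(gray <= threshold, 0, 255)   # <= mean -> black, > mean -> white
    return binary.astype(np.uint8)
```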
Step 209, for each person image in the person image set, extracting histogram information of directional gradients of the person image to obtain a fourth candidate feature.
Among them, Histogram of Oriented Gradients (HOG) is used to describe local texture features of an image.
Specifically, the image is first divided into small cells of equal size, for example 20 × 20 pixel cells; a histogram of gradient directions is then computed for each cell; a fixed number of cells are next grouped into a larger block, for example 2 × 2 cells per block; finally, the block-level histograms are concatenated to form the directional gradient histogram feature vector of the whole image. This feature vector characterizes the image.
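The sketch below uses scikit-image's hog function purely as an illustration; the cell and block sizes are the example values from the paragraph above, not values fixed by the embodiment.

```python
from skimage.feature import hog

def fourth_candidate_feature(gray):
    """Histogram-of-oriented-gradients feature vector for a grayscale person image."""
    return hog(
        gray,
        orientations=9,            # number of gradient-direction bins
        pixels_per_cell=(20, 20),  # small cells of equal size
        cells_per_block=(2, 2),    # a block comprises 2x2 cells
        feature_vector=True,       # concatenate block histograms into one vector
    )
```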
Step 210, for each person image in the person image set, inputting the first candidate feature, the second candidate feature, the third candidate feature and the fourth candidate feature into an age identification model obtained through pre-training, so as to obtain an age classification.
Age classifications include, but are not limited to: adult versus minor, and elderly versus non-elderly.
The age identification model determines from the candidate features whether the person is a minor or an elderly person. It is understood that the input is the first candidate feature, the second candidate feature, the third candidate feature and the fourth candidate feature, and the output is the age classification.
In the embodiment of the invention, the age identification model can be obtained by training in advance on image samples labeled with age classifications.
Optionally, in another embodiment of the present invention, the age identification model is obtained by training through the following steps:
sub-step a1, obtaining a person image sample set, each person image sample in the person image sample set being labeled with an age classification.
The person image samples may be images captured by monitoring devices, or pictures collected from the network, and the like; this is not limited in the embodiment of the present invention.
Specifically, the samples are labeled as positive and negative samples, for example: minor and non-minor.
Sub-step A2, for each person image sample in the person image sample set, determining the feature points of the person image sample, and extracting the pixel values of the feature points to obtain a first candidate feature sample.
This step can refer to the detailed description of step 206, which is not repeated herein.
Sub-step A3, for each person image sample in the person image sample set, calculating the cosine values of the angles formed by every two feature points to obtain a second candidate feature sample.
This step can refer to the detailed description of step 207, which is not described herein.
Sub-step A4, for each person image sample in the person image sample set, extracting local binarization information of the person image sample to obtain a third candidate feature sample.
This step can refer to the detailed description of step 208, which is not repeated herein.
Sub-step A5, for each person image sample in the person image sample set, extracting directional gradient histogram information of the person image sample to obtain a fourth candidate feature sample.
This step can refer to the detailed description of step 209, which is not repeated herein.
Sub-step A6, training a preset training model to obtain the age identification model according to the first candidate feature sample, the second candidate feature sample, the third candidate feature sample, the fourth candidate feature sample and the age classification of each person image sample in the person image sample set.
The training model may be an RNN (Recurrent Neural Network), a CNN (Convolutional Neural Network), or the like.
Specifically, for each person image sample, the first, second, third and fourth candidate feature samples are input into the training model to obtain an estimated age classification; the estimated age classification of each person image sample is then compared with its labeled age classification. If the estimated and labeled classifications agree for a proportion of the samples greater than or equal to a preset threshold, training is finished, and the model at that point is the age identification model; otherwise, the parameters of the training model are adjusted and training continues until the proportion of agreeing samples reaches the threshold.
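The sketch below illustrates these sub-steps with scikit-learn's MLPClassifier standing in for the RNN/CNN mentioned above, and a simple accuracy threshold as the stopping criterion; the function names, the 0.95 proportion and the parameter-adjustment rule are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_age_model(samples, labels, required_accuracy=0.95, max_rounds=10):
    """Train an age classifier on concatenated candidate features (sub-steps A1-A6).

    samples: shape (n, d), each row = first..fourth candidate feature samples concatenated
    labels:  shape (n,), e.g. 1 = minor, 0 = non-minor
    """
    X, y = np.asarray(samples), np.asarray(labels)
    hidden_units = 64
    model = None
    for _ in range(max_rounds):
        model = MLPClassifier(hidden_layer_sizes=(hidden_units,), max_iter=500)
        model.fit(X, y)
        if model.score(X, y) >= required_accuracy:  # proportion of consistent estimates
            break                                   # training finished
        hidden_units *= 2                           # "adjust parameters" and continue
    return model
```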
Step 211, adding the person image with the age classified as the target age classification into the candidate person image subset.
The target age classification depends on the age classification scheme adopted, for example minor versus non-minor, or elderly versus non-elderly.
It is understood that the person images in the candidate person image subset have no particular order.
In practical applications, the person images classified as non-minor/non-elderly may alternatively be removed from the person image set, so that the remaining person images constitute the candidate person image subset.
Step 212, determining whether the candidate person image subset contains the target person image according to the target feature.
This step can refer to the detailed description of step 104, and will not be described herein.
Step 213, if the candidate person image subset does not contain the target person image, sending alarm prompt information to the mobile terminal or to a target terminal set by the mobile terminal.
Specifically, the alarm information can be sent to a preset target terminal, which may be a mobile phone, a smart wristband or another device; the alarm information can also be sent to the mobile phone corresponding to a preset phone number.
Step 214, a second real-time monitoring image corresponding to the second target monitoring device is obtained.
The embodiment of the invention can determine, from the images captured by other target monitoring devices, whether the target person appears in their monitoring areas.
Step 215, if the second real-time monitoring image contains the target person image, sending a second alarm prompt message.
Specifically, the second alarm prompt information may include one or more of: information about the second target monitoring device, information about the corresponding area, and the time at which the target person appeared, so as to inform the guardian of the area where the target person currently is and help the guardian locate the target person quickly.
In summary, an embodiment of the present invention provides a loss prevention alarm method, the method comprising: acquiring a target feature corresponding to a target person image; acquiring a real-time monitoring image corresponding to a target monitoring device, and identifying person images in the real-time monitoring image to obtain a person image set; determining a candidate person image subset from the person image set according to a preset person candidate feature; determining whether the candidate person image subset contains the target person image according to the target feature; and if the candidate person image subset does not contain the target person image, sending alarm prompt information. Identity recognition can be carried out through age, clothing and face recognition to judge whether the child or elderly person is within a safe area, and thus whether he or she is safe, which effectively improves the accuracy of identity recognition.
EXAMPLE III
Referring to fig. 3, a structural diagram of a loss prevention alarm device according to a third embodiment of the present invention is shown, as follows.
The information obtaining module 301 is configured to obtain a target feature corresponding to the target person image.
The person image identification module 302 is configured to obtain a real-time monitoring image corresponding to a target monitoring device, and identify a person image in the real-time monitoring image to obtain a person image set.
A candidate person image determining module 303, configured to determine a candidate person image subset from the person image set according to a preset person candidate feature.
A target person image determination module 304, configured to determine whether the candidate person image subset includes the target person image according to the target feature.
A first alarm prompting module 305, configured to send an alarm prompting message if the candidate person image subset does not include the target person image.
In summary, an embodiment of the present invention provides a loss prevention alarm device, the device comprising: an information acquisition module for acquiring a target feature corresponding to a target person image; a person image identification module for acquiring a real-time monitoring image corresponding to the target monitoring device, and identifying person images in the real-time monitoring image to obtain a person image set; a candidate person image determining module for determining a candidate person image subset from the person image set according to a preset person candidate feature; a target person image determination module for determining whether the candidate person image subset contains the target person image according to the target feature; and a first alarm prompting module for sending alarm prompt information if the candidate person image subset does not contain the target person image. Identity recognition can be carried out through age and face recognition to judge whether the person under guardianship is within a safe area, and thus whether the person under guardianship is safe, which effectively improves the accuracy of identity recognition.
Example four
Referring to fig. 4, a structural diagram of a loss prevention alarm device according to a fourth embodiment of the present invention is shown, as follows.
The information obtaining module 401 is configured to obtain a target feature corresponding to the target person image. Optionally, in an embodiment of the present invention, the information obtaining module 401 includes:
the person image set generation submodule 4011 is configured to obtain a monitoring image corresponding to a target monitoring device, and classify persons in the monitoring image to obtain an initial person image set.
The first target feature extraction sub-module 4012 is configured to receive a target person image selected from the initial person image set, and extract a target feature from the target person image.
And a confirm-target-person request submodule 4013, configured to obtain a confirm-target-person request sent by the mobile terminal.
The second target feature extraction sub-module 4014 is configured to obtain the target feature of the target person image carried by the confirm-target-person request.
Optionally, in an embodiment of the present invention, the target feature includes not only a human face feature, but also a height feature and/or a clothes color feature.
The person image identification module 402 is configured to obtain a real-time monitoring image corresponding to a target monitoring device, and identify a person image in the real-time monitoring image to obtain a person image set.
A candidate person image determining module 403, configured to determine a candidate person image subset from the person image set according to a preset person candidate feature. Optionally, in an embodiment of the present invention, the candidate person image determining module 403 includes:
the first candidate feature extraction sub-module 4031 is configured to determine, for each human image in the human image set, a feature point of the human image, and extract a pixel value of each feature point to obtain a first candidate feature.
And the second candidate feature calculation submodule 4032 is configured to calculate, for each person image in the person image set, the cosine values of the angles formed by every two feature points, so as to obtain a second candidate feature.
And a third candidate feature extraction sub-module 4033, configured to extract, for each person image in the person image set, local binarization information of the person image, so as to obtain a third candidate feature.
A fourth candidate feature extraction sub-module 4034, configured to extract, for each person image in the person image set, directional gradient histogram information of the person image, so as to obtain a fourth candidate feature.
And an age classification sub-module 4035, configured to, for each person image in the person image set, input the first candidate feature, the second candidate feature, the third candidate feature, and the fourth candidate feature into an age identification model obtained through pre-training, so as to obtain an age classification.
A candidate person image adding sub-module 4036, configured to add the person images whose age classification is the target age classification into the candidate person image subset.
A target person image determination module 404, configured to determine whether the candidate person image subset includes the target person image according to the target feature.
A first alarm prompting module 405, configured to send alarm prompting information if the candidate person image subset does not include the target person image. Optionally, the first alarm prompting module 405 includes:
and the first alarm prompting sub-module 4051 is configured to send alarm prompting information to the mobile terminal or a target terminal set by the mobile terminal.
And a second real-time monitoring image obtaining module 406, configured to obtain a second real-time monitoring image corresponding to a second target monitoring device.
And the second alarm prompting module 407 is configured to send second alarm prompting information if the second real-time monitoring image includes the target person image.
Optionally, in another embodiment of the present invention, the person image set generating sub-module 4011 includes:
and the classification unit is used for classifying the people in the monitored image by using an unsupervised clustering method.
In summary, an embodiment of the present invention provides a loss prevention alarm device, the device comprising: an information acquisition module for acquiring a target feature corresponding to a target person image; a person image identification module for acquiring a real-time monitoring image corresponding to the target monitoring device, and identifying person images in the real-time monitoring image to obtain a person image set; a candidate person image determining module for determining a candidate person image subset from the person image set according to a preset person candidate feature; a target person image determination module for determining whether the candidate person image subset contains the target person image according to the target feature; and a first alarm prompting module for sending alarm prompt information if the candidate person image subset does not contain the target person image. Identity recognition can be carried out through age, clothing and face recognition, thereby determining whether the child or elderly person is safe, which effectively improves the accuracy of identity recognition.
An embodiment of the present invention further provides an electronic device, including: a processor, a memory and a computer program stored on the memory and executable on the processor, the processor implementing the aforementioned loss prevention alarm method when executing the program.
Embodiments of the present invention further provide a readable storage medium, where instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to execute the aforementioned loss prevention alarm method.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of a loss prevention alert device according to embodiments of the present invention. The present invention may also be embodied as an apparatus or device program for carrying out a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etcetera does not indicate any ordering; these words may be interpreted as names.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A loss prevention alarm method, the method comprising:
acquiring a target feature corresponding to a target person image;
acquiring a real-time monitoring image corresponding to a target monitoring device, and identifying person images in the real-time monitoring image to obtain a person image set;
determining a candidate person image subset from the person image set according to a preset person candidate feature;
determining whether the candidate person image subset contains the target person image according to the target feature;
and if the candidate person image subset does not contain the target person image, sending alarm prompt information.
2. The method of claim 1, wherein the step of obtaining the target feature corresponding to the target person image comprises:
acquiring a monitoring image corresponding to a target monitoring device, and classifying the people in the monitoring image to obtain an initial person image set;
receiving a target person image selected from the initial person image set, and extracting the target feature from the target person image.
3. The method of claim 2, wherein the step of classifying the people in the monitoring image comprises:
classifying the people in the monitoring image by using an unsupervised clustering method.
4. The method of claim 1, wherein the person candidate features comprise a first candidate feature, a second candidate feature, a third candidate feature and a fourth candidate feature, and the step of determining a candidate person image subset from the person image set according to the preset person candidate features comprises:
for each person image in the person image set, determining feature points of the person image, and extracting pixel values of the feature points to obtain the first candidate feature;
for each person image in the person image set, calculating cosine values of angles formed by every two feature points to obtain the second candidate feature;
for each person image in the person image set, extracting local binarization information of the person image to obtain the third candidate feature;
for each person image in the person image set, extracting directional gradient histogram information of the person image to obtain the fourth candidate feature;
for each person image in the person image set, inputting the first candidate feature, the second candidate feature, the third candidate feature and the fourth candidate feature into an age identification model obtained through pre-training to obtain an age classification;
and adding the person images whose age classification is the target age classification into the candidate person image subset.
5. The method of claim 1, wherein the target features include not only human face features but also height features and/or clothing color features.
6. The method of claim 1, wherein the step of obtaining the target feature corresponding to the target person image comprises: acquiring a confirm-target-person request sent by a mobile terminal;
acquiring the target feature of the target person image corresponding to the confirm-target-person request; the step of sending the alarm prompt information then comprises:
sending the alarm prompt information to the mobile terminal or to a target terminal set by the mobile terminal.
7. The method of claim 6, further comprising:
acquiring a second real-time monitoring image corresponding to second target monitoring equipment;
and if the second real-time monitoring image contains the target person image, sending second alarm prompt information.
8. A loss prevention alarm device, the device comprising:
the information acquisition module is used for acquiring target characteristics corresponding to the target person image;
the person image identification module is used for acquiring a real-time monitoring image corresponding to the target monitoring device, and identifying person images in the real-time monitoring image to obtain a person image set;
the candidate person image determining module is used for determining a candidate person image subset from the person image set according to a preset person candidate feature;
the target person image determination module is used for determining whether the candidate person image subset contains the target person image according to the target feature;
and the first alarm prompting module is used for sending alarm prompt information if the candidate person image subset does not contain the target person image.
9. The apparatus of claim 8, wherein the information obtaining module comprises:
the person image set generation submodule is used for acquiring a monitoring image corresponding to target monitoring equipment and classifying persons in the monitoring image to obtain an initial person image set;
and the target characteristic extraction sub-module is used for receiving the target person image selected from the initial person image set and extracting the target characteristic from the target person image.
10. The apparatus of claim 8, wherein the person candidate features comprise a first candidate feature, a second candidate feature, a third candidate feature and a fourth candidate feature, and the candidate person image determining module comprises:
the first candidate feature extraction submodule is used for determining, for each person image in the person image set, feature points of the person image and extracting pixel values of the feature points to obtain the first candidate feature;
the second candidate feature calculation submodule is used for calculating, for each person image in the person image set, cosine values of angles formed by every two feature points to obtain the second candidate feature;
the third candidate feature extraction submodule is used for extracting, for each person image in the person image set, local binarization information of the person image to obtain the third candidate feature;
the fourth candidate feature extraction submodule is used for extracting, for each person image in the person image set, directional gradient histogram information of the person image to obtain the fourth candidate feature;
the age classification submodule is used for inputting, for each person image in the person image set, the first candidate feature, the second candidate feature, the third candidate feature and the fourth candidate feature into an age identification model obtained through pre-training to obtain an age classification;
and the candidate person image adding submodule is used for adding the person images whose age classification is the target age classification into the candidate person image subset.
11. An electronic device, comprising:
processor, memory and computer program stored on the memory and executable on the processor, characterized in that the processor when executing the program implements a loss prevention alarm method according to one or more of claims 1-7.
12. A readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform a loss prevention alert method as claimed in one or more of the method claims 1-7.
CN201810753310.9A 2018-07-10 2018-07-10 Loss prevention alarm method and device Active CN109035686B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810753310.9A CN109035686B (en) 2018-07-10 2018-07-10 Loss prevention alarm method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810753310.9A CN109035686B (en) 2018-07-10 2018-07-10 Loss prevention alarm method and device

Publications (2)

Publication Number Publication Date
CN109035686A CN109035686A (en) 2018-12-18
CN109035686B true CN109035686B (en) 2020-11-03

Family

ID=64640821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810753310.9A Active CN109035686B (en) 2018-07-10 2018-07-10 Loss prevention alarm method and device

Country Status (1)

Country Link
CN (1) CN109035686B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886101A (en) * 2018-12-29 2019-06-14 江苏云天励飞技术有限公司 Posture identification method and relevant apparatus
CN109887234B (en) * 2019-03-07 2023-04-18 百度在线网络技术(北京)有限公司 Method and device for preventing children from getting lost, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101617342B1 (en) * 2015-06-26 2016-05-02 배상윤 Application for caring children and method for operating the same
CN105046876A (en) * 2015-07-23 2015-11-11 中山大学深圳研究院 Child safety monitoring system based on image identification
CN107423674A (en) * 2017-05-15 2017-12-01 广东数相智能科技有限公司 A kind of looking-for-person method based on recognition of face, electronic equipment and storage medium
CN107239744B (en) * 2017-05-15 2020-12-18 深圳奥比中光科技有限公司 Method and system for monitoring human body incidence relation and storage device
CN107845234A (en) * 2017-11-27 2018-03-27 浙江卓锐科技股份有限公司 A kind of anti-anti- method of wandering away of system and scenic spot of wandering away in scenic spot

Also Published As

Publication number Publication date
CN109035686A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
CN109670441A (en) A kind of realization safety cap wearing knows method for distinguishing, system, terminal and computer readable storage medium
EP3163543B1 (en) Alarming method and device
US20190188456A1 (en) Image processing device, image processing method, and computer program product
US20070116364A1 (en) Apparatus and method for feature recognition
US8290277B2 (en) Method and apparatus for setting a lip region for lip reading
JP2017062778A (en) Method and device for classifying object of image, and corresponding computer program product and computer-readable medium
CN112464690A (en) Living body identification method, living body identification device, electronic equipment and readable storage medium
CN109035686B (en) Loss prevention alarm method and device
US20110280442A1 (en) Object monitoring system and method
WO2019202587A1 (en) A method and apparatus for swimmer tracking
CN114140745A (en) Method, system, device and medium for detecting personnel attributes of construction site
CN115223204A (en) Method, device, equipment and storage medium for detecting illegal wearing of personnel
CN115049675A (en) Generation area determination and light spot generation method, apparatus, medium, and program product
CN114821647A (en) Sleeping post identification method, device, equipment and medium
US20240104769A1 (en) Information processing apparatus, control method, and non-transitory storage medium
CN111126102A (en) Personnel searching method and device and image processing equipment
CN112800923A (en) Human body image quality detection method and device, electronic equipment and storage medium
CN110660187B (en) Forest fire alarm monitoring system based on edge calculation
KR101711328B1 (en) Method for classifying children and adult by using head and body height in images obtained from camera such as CCTV
JP2023129657A (en) Information processing apparatus, control method, and program
CN114511877A (en) Behavior recognition method and device, storage medium and terminal
CN110866508A (en) Method, device, terminal and storage medium for recognizing form of target object
CN115273243B (en) Fall detection method, device, electronic equipment and computer readable storage medium
US20230230344A1 (en) Signal color determination device and signal color determination method
CN116779181A (en) Trip close-contact monitoring method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant