CN114783087A - Method and system for creating intelligent door lock

Info

Publication number
CN114783087A
Authority
CN
China
Prior art keywords
image
optical flow
door lock
intelligent door
information
Prior art date
Legal status
Pending
Application number
CN202210407775.5A
Other languages
Chinese (zh)
Inventor
胡佳
滕以金
李锐
张晖
林俊豪
Current Assignee
Shandong Inspur Science Research Institute Co Ltd
Original Assignee
Shandong Inspur Science Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Shandong Inspur Science Research Institute Co Ltd
Priority to CN202210407775.5A
Publication of CN114783087A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/20 Analysis of motion
    • G06T 7/215 Motion-based segmentation
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 Individual registration on entry or exit
    • G07C 9/00174 Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C 9/00563 Electronically operated locks using personal physical data of the operator, e.g. finger prints, retinal images, voice patterns
    • G07C 9/00896 Electronically operated locks specially adapted for particular uses

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of microcontrollers and edge computing, and provides a method for creating an intelligent door lock comprising a training stage and a use stage. In the training stage, image information is collected by an image sensor, training is performed locally in a microcontroller, and the training result is stored in a local memory. In the use stage, the data are read and preprocessed, dynamic target detection is performed on the images with the pyramid Lucas-Kanade optical flow method, foreground person information is extracted after detection, the extracted information is sent to a neural network for recognition, the person's identity is confirmed after recognition, and it is determined whether to unlock. Compared with the prior art, the intelligent door lock requires neither a cloud service nor a network connection, which avoids the information leakage that easily occurs in a networked state, makes the lock hard to crack or tamper with, and ensures its security.

Description

Method and system for creating intelligent door lock
Technical Field
The invention relates to the technical field of microcontrollers and edge computing, and particularly provides a method and a system for creating an intelligent door lock.
Background
With the development of society, household burglary prevention has become a widespread concern, yet current door locks suffer from various problems. Mechanical locks are still the most common type on the market, but they are easily pried or forced open with hard objects and their keys can be copied; if the homeowner is away from home for a long time or keeps a regular, predictable routine, this gives criminals an opening.
Secondly, some smart locks on the market, such as fingerprint locks and combination locks, are easy to crack. Other smart door locks transmit commands from a smartphone to a control chip over WIFI or similar transmission methods, and the control chip then opens the lock; the command can be interfered with on its way to the lock, or the key can even be intercepted, which greatly weakens the lock's security and gives intruders an opportunity.
Disclosure of Invention
Aiming at the shortcomings of the prior art, the invention provides a practical method for creating an intelligent door lock.
A further aim of the invention is to provide an intelligent door lock system that is reasonably designed, safe and applicable.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a method for creating an intelligent door lock is characterized by comprising a training stage and a use stage, wherein in the training stage image information is collected by an image sensor, training is performed locally in a microcontroller, and the training result is stored in a local memory;
in the use stage, the data are read and preprocessed, dynamic target detection is performed on the images with the pyramid Lucas-Kanade optical flow method, foreground person information is extracted after detection, the extracted foreground person information is sent to a neural network for recognition, the person's identity is confirmed after recognition, and it is determined whether to unlock.
Furthermore, when the data are read and preprocessed, a person who suddenly enters the field of view must be recognized in time, so the video image is first converted to grayscale;
in the graying step, the three primary colors R, G, B of the color image are assigned different weights according to their importance, and the weighted average of their brightness values is taken as the gray value of the image.
Further, the graying processing formula is as follows:
f(i,j)=0.29R(i,j)+0.58G(i,j)+0.13B(i,j)
wherein f(i, j) represents the pixel value of the grayed image at position (i, j), and R(i, j), G(i, j) and B(i, j) represent the red, green and blue components at position (i, j), respectively.
The Lucas-Kanade optical flow method relies on three basic assumptions: first, the gray value of a given point in space stays constant as it moves between frames over time;
secondly, the position of the moving object does not change greatly between adjacent frames;
finally, points that are adjacent in space project to adjacent points in the image, and the optical flow vectors of all these points are consistent. Suppose a point in space lies at position (x, y) in one frame, with gray value I(x, y, t); after time dt, the point moves to position (x + d_x, y + d_y), where its gray value is correspondingly I(x + d_x, y + d_y, t + dt).
Further, building the pyramid image involves two steps, namely Gaussian filtering to smooth the image, and downsampling;
downsampling means that each sampling step halves the resolution of each image layer.
Further, the optical flow is computed layer by layer from the top of the image pyramid with the Lucas-Kanade optical flow method, the optical flow estimate obtained for the previous layer is used as the initial value of the optical flow estimate of the next layer, and this is repeated until the optical flow information of the bottom layer of the pyramid has been computed;
suppose the maximum pixel displacement the original Lucas-Kanade optical flow method can handle is d_max; the maximum pixel displacement the pyramid Lucas-Kanade optical flow method can handle then becomes d_max_final = (2^(L+1) - 1) d_max.
Further, multiple iterations are performed when computing the optical flow of each layer of the image pyramid. For two adjacent frames I and J at a given pyramid layer, the number of iterations is set to K (K ≥ 1), and the optical flow obtained after iteration k-1 (2 ≤ k ≤ K) is
d^(k-1) = [d_x^(k-1), d_y^(k-1)]^T.
At the k-th iteration, d^(k-1) is used as the motion vector of image J to obtain the warped image
J_k(x, y) = J(x + d_x^(k-1), y + d_y^(k-1)).
The objective of the k-th iteration is to compute the optimization vector η^k = [η_x^k, η_y^k]^T that minimizes the objective function
ε(η^k) = Σ_{x = p_x - w_x}^{p_x + w_x} Σ_{y = p_y - w_y}^{p_y + w_y} ( I(x, y) - J_k(x + η_x^k, y + η_y^k) )^2,
where w_x and w_y denote the size of the neighborhood window around the point (p_x, p_y). Solving gives η^k = G^(-1) b_k, where G is the spatial gradient matrix of image I over the window and b_k the image mismatch vector, and the optical flow after k iterations is d^k = d^(k-1) + η^k. The iteration ends when the number of iterations reaches K or the optimization variable η^k is smaller than the set threshold, and the computation of the optical flow of the next pyramid layer begins.
Further, when extracting the foreground person information, first, after the optical flow information of the moving scene has been obtained, threshold segmentation is applied once to the horizontal optical flow and once to the vertical optical flow before the optical flow components are combined;
secondly, when segmenting the horizontal and vertical directions, the segmentation threshold is selected dynamically according to the optical flow of the particular motion scene;
when the motion state of the moving object changes, the segmentation threshold is adjusted according to the actual optical flow; finally, optical flow whose absolute value is smaller than the segmentation threshold is regarded as generated by the background and removed, and the remaining optical flow is kept.
Furthermore, during recognition the neural network takes each captured image frame as input, compares it with the face data in the local memory, and controls the electronic lock to switch its open or closed state after the comparison succeeds.
A system for creating an intelligent door lock is characterized by comprising a training stage and a use stage, wherein in the training stage image information is collected by an image sensor, training is performed locally in a microcontroller, and the training result is stored in a local memory;
in the use stage, the data are read and preprocessed, dynamic target detection is performed on the images with the pyramid Lucas-Kanade optical flow method, foreground person information is extracted after detection, the extracted foreground person information is sent to a neural network for recognition, the person's identity is confirmed after recognition, and it is determined whether to unlock.
Compared with the prior art, the method and system for creating an intelligent door lock have the following outstanding beneficial effects:
all functions of the intelligent lock are performed locally, without any cloud or network involvement, which avoids the information leakage that easily occurs in a networked state, makes the lock hard to crack or tamper with, and ensures its security.
Drawings
In order to illustrate the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings needed for the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the invention, and that a person of ordinary skill in the art could obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow diagram of a training phase of a method of creating an intelligent door lock;
fig. 2 is a flow diagram of a use phase in a method of creating an intelligent door lock.
Detailed Description
The present invention will be described in further detail with reference to specific embodiments in order to better understand the technical solutions of the present invention. It should be apparent that the described embodiments are only some embodiments of the present invention, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without making any creative effort belong to the protection scope of the present invention.
A preferred embodiment is given below:
as shown in fig. 1 and 2, the method for creating an intelligent door lock in this embodiment includes a training phase and a use phase, where image information data is collected by an image sensor in the training phase, local training is performed in a microcontroller, and a training result is stored in a local memory.
In the use phase, the data are read and preprocessed, dynamic target detection is performed on the images with the pyramid Lucas-Kanade optical flow method, the moving target (foreground person information) is extracted after detection, the extracted foreground person information is sent to a neural network for recognition, the person's identity is confirmed after recognition, and it is determined whether to unlock.
In the use stage, the specific steps are as follows:
when data are read and preprocessed, and images of the frequently standing positions of unlocking personnel are collected, the images are in a static state under normal conditions and can be recognized conventionally.
In order to facilitate timely identification when a person suddenly enters a visual field, image graying pretreatment is firstly carried out on a video image before target detection is carried out so as to enhance image details and weaken noise influence. In graying, different weights are respectively assigned according to different importance of the three primary colors of the color image R, G, B, and then the brightness values of the three primary colors are weighted and averaged to be used as the image gray value. The graying processing formula of the text is as follows:
f(i,j)=0.29R(i,j)+0.58G(i,j)+0.13B(i,j)
where f(i, j) represents the pixel value of the grayed image at position (i, j), and R(i, j), G(i, j) and B(i, j) represent the red, green and blue components at position (i, j), respectively.
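As a minimal sketch (assuming OpenCV and NumPy, which the patent does not name), the weighted graying step can be written as:

```python
import cv2
import numpy as np

def to_gray(frame_bgr: np.ndarray) -> np.ndarray:
    """Weighted graying f(i,j) = 0.29*R + 0.58*G + 0.13*B, weights as in the text."""
    b, g, r = cv2.split(frame_bgr.astype(np.float32))  # OpenCV frames are BGR
    gray = 0.29 * r + 0.58 * g + 0.13 * b
    return np.clip(gray, 0, 255).astype(np.uint8)
```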
The Lucas-Kanade optical flow method relies on three basic assumptions: first, the gray value of a given point in space stays constant as it moves between frames over time; second, the position of the moving object does not change greatly between adjacent frames; finally, points that are adjacent in space project to adjacent points in the image, and the optical flow vectors of all these points are consistent.
Suppose a point in space lies at position (x, y) in one frame, with gray value I(x, y, t); after time dt, the point moves to position (x + d_x, y + d_y), where its gray value is correspondingly I(x + d_x, y + d_y, t + dt).
In practical use, the sensor data are read in real time to build the pyramid image. Building the pyramid involves two steps, namely Gaussian filtering to smooth the image, and downsampling; downsampling means that each sampling step halves the resolution of each image layer.
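A minimal sketch of the pyramid construction, assuming OpenCV; the number of levels is an arbitrary choice here. cv2.pyrDown performs exactly the pair of operations described above: it convolves with a Gaussian kernel and then discards every other row and column.

```python
import cv2
import numpy as np

def build_pyramid(gray: np.ndarray, levels: int = 3) -> list:
    """Return [full resolution, 1/2, 1/4, ...]; each level is Gaussian-smoothed
    and downsampled to half the resolution of the previous one."""
    pyramid = [gray]
    for _ in range(levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))  # blur + halve resolution
    return pyramid
```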
The optical flow is computed layer by layer from the top of the image pyramid with the Lucas-Kanade optical flow method; the optical flow estimate obtained for the previous layer is used as the initial value of the next layer's estimate, and this is repeated until the optical flow information of the bottom layer of the pyramid has been computed. Suppose the maximum pixel displacement the original Lucas-Kanade optical flow method can handle is d_max; the maximum pixel displacement the pyramid Lucas-Kanade optical flow method can handle then becomes d_max_final = (2^(L+1) - 1) d_max.
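In practice this coarse-to-fine propagation is what OpenCV's pyramidal Lucas-Kanade routine performs internally. The sketch below, offered only as an illustration, tracks a regular grid of points between two grayscale frames; the window size, pyramid depth and grid spacing are assumptions, not values from the patent.

```python
import cv2
import numpy as np

def track_grid(prev_gray: np.ndarray, next_gray: np.ndarray, step: int = 16):
    """Pyramidal Lucas-Kanade optical flow for a regular grid of points."""
    h, w = prev_gray.shape
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
    pts = pts.astype(np.float32).reshape(-1, 1, 2)

    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None,
        winSize=(21, 21), maxLevel=3,  # 3 pyramid levels above the base image
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

    flow = (next_pts - pts).reshape(-1, 2)  # per-point (d_x, d_y)
    return pts.reshape(-1, 2), flow, status.ravel()
```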
Furthermore, multiple iterations are performed when computing the optical flow of each layer of the image pyramid. For two adjacent frames I and J at a given pyramid layer, the number of iterations is set to K (K ≥ 1), and the optical flow obtained after iteration k-1 (2 ≤ k ≤ K) is
d^(k-1) = [d_x^(k-1), d_y^(k-1)]^T.
At the k-th iteration, d^(k-1) is used as the motion vector of image J to obtain the warped image
J_k(x, y) = J(x + d_x^(k-1), y + d_y^(k-1)).
The objective of the k-th iteration is to compute the optimization vector η^k = [η_x^k, η_y^k]^T that minimizes the objective function
ε(η^k) = Σ_{x = p_x - w_x}^{p_x + w_x} Σ_{y = p_y - w_y}^{p_y + w_y} ( I(x, y) - J_k(x + η_x^k, y + η_y^k) )^2,
where w_x and w_y denote the size of the neighborhood window around the point (p_x, p_y). Solving gives η^k = G^(-1) b_k, where G is the spatial gradient matrix of image I over the window and b_k the image mismatch vector, and the optical flow after k iterations is d^k = d^(k-1) + η^k. The iteration ends when the number of iterations reaches K or the optimization variable η^k is smaller than the set threshold, and the computation of the optical flow of the next pyramid layer begins.
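To make the iteration concrete, the sketch below implements the update η^k = G^(-1) b_k, d^k = d^(k-1) + η^k for a single window at one pyramid level. It uses central-difference gradients and nearest-neighbour warping for brevity; these simplifications, the window size and the stopping threshold are assumptions, and the point p must lie far enough from the image border for the window to fit.

```python
import numpy as np

def lk_iterate(I, J, p, win=7, d0=None, K=20, eps=0.03):
    """Iterative Lucas-Kanade at one pyramid level around point p = (px, py).
    Returns the refined flow d after at most K iterations (eta = G^-1 * b).
    p must lie at least win + 1 pixels away from every image border."""
    px, py = p
    I = I.astype(np.float32)
    J = J.astype(np.float32)
    d = np.zeros(2, np.float32) if d0 is None else np.asarray(d0, np.float32).copy()

    # spatial gradients of I inside the window (central differences)
    ys, xs = np.mgrid[py - win:py + win + 1, px - win:px + win + 1]
    Ix = (I[ys, xs + 1] - I[ys, xs - 1]) / 2.0
    Iy = (I[ys + 1, xs] - I[ys - 1, xs]) / 2.0
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])

    for _ in range(K):
        # warp J by the current flow estimate (nearest neighbour for brevity)
        jx = np.clip(np.rint(xs + d[0]).astype(int), 0, J.shape[1] - 1)
        jy = np.clip(np.rint(ys + d[1]).astype(int), 0, J.shape[0] - 1)
        dI = I[ys, xs] - J[jy, jx]
        b = np.array([np.sum(dI * Ix), np.sum(dI * Iy)])
        eta = np.linalg.solve(G, b)          # eta^k = G^-1 * b_k
        d = d + eta.astype(np.float32)       # d^k = d^(k-1) + eta^k
        if np.hypot(eta[0], eta[1]) < eps:   # stop when the update is tiny
            break
    return d
```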
These operations eliminate useless background optical flow and extract the moving target region more accurately. First, after the optical flow information of the moving scene has been obtained, threshold segmentation is applied to the horizontal and vertical optical flow separately before the optical flow components are combined, which avoids losing valid optical flow by segmenting the flow vectors directly.
Secondly, when the horizontal and vertical directions are segmented, the segmentation threshold is selected dynamically according to the optical flow of the particular motion scene. When the motion state of the moving object changes, the threshold is adjusted according to the actual optical flow; finally, optical flow whose absolute value is smaller than the segmentation threshold is regarded as generated by the background and removed, and the remaining optical flow is kept.
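A minimal sketch of the two-direction thresholding, assuming a dense H x W x 2 flow field; using a multiple of the median magnitude as the "dynamic" threshold is just one plausible reading of the text, not the patent's prescribed rule.

```python
import numpy as np

def foreground_mask(flow, k=2.0):
    """flow: H x W x 2 array of (horizontal, vertical) optical flow.
    Threshold each component separately, then combine into a foreground mask."""
    u, v = flow[..., 0], flow[..., 1]
    # dynamic thresholds derived from the flow statistics of the current scene
    tu = k * np.median(np.abs(u)) + 1e-3
    tv = k * np.median(np.abs(v)) + 1e-3
    # flow whose absolute value falls below the threshold is treated as background
    moving = (np.abs(u) >= tu) | (np.abs(v) >= tv)
    return moving.astype(np.uint8)  # 1 = moving foreground, 0 = background
```

Combining the two per-direction masks with a logical OR keeps a pixel whenever either component indicates motion, matching the requirement to threshold the components before merging them.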
When the neural network performs recognition, it takes each captured image frame as input, compares it with the face data in the local memory, and controls the electronic lock to switch its open or closed state after the comparison succeeds.
The neural network model is ported to the embedded device using a TinyML library, so the entire system runs on the local device.
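A hedged sketch of the recognition step on the use side, assuming the TensorFlow Lite Python interpreter and a classifier trained as in the earlier training sketch; the model path, score threshold and unlock callback are placeholders, and on a real microcontroller this loop would be written against the TensorFlow Lite Micro C++ API instead.

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="face_model.tflite")  # local model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def recognize_and_unlock(face_gray, unlock, threshold=0.9):
    """face_gray: grayscale foreground crop resized to the model's input size.
    unlock: callback that toggles the electronic lock (placeholder)."""
    x = face_gray.astype(np.float32)[np.newaxis, ..., np.newaxis]  # raw 0-255
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    person = int(np.argmax(scores))
    if scores[person] >= threshold:   # comparison with stored face data succeeded
        unlock(person)
    return person, float(scores[person])
```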
Based on this method, the system for creating an intelligent door lock in this embodiment comprises a training stage and a use stage, wherein in the training stage image information is collected by an image sensor, training is performed locally in a microcontroller, and the training result is stored in a local memory;
in the use stage, the data are read and preprocessed, dynamic target detection is performed on the images with the pyramid Lucas-Kanade optical flow method, foreground person information is extracted after detection, the extracted foreground person information is sent to a neural network for recognition, the person's identity is confirmed after recognition, and it is determined whether to unlock.
The above embodiments are only specific examples of the present invention; the protection scope of the invention includes but is not limited to these embodiments, and any suitable change or substitution that is consistent with the claims of the method and system for creating an intelligent door lock and could be made by a person of ordinary skill in the art falls within the protection scope of the present invention.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (9)

1. A method for creating an intelligent door lock is characterized by comprising a training stage and a using stage, wherein image information data are collected through an image sensor in the training stage, local training is carried out in a microcontroller, and a training result is stored in a local memory;
in the using stage, data are read and preprocessed, then dynamic target detection is carried out on the image through pyramid Lucas-Kanade, foreground personnel information is extracted after detection is finished, then the extracted foreground personnel information is sent to a neural network for identification, personnel identity information is confirmed after identification is finished, and whether unlocking is carried out or not is judged.
2. The method for creating an intelligent door lock according to claim 1, wherein when data is read and preprocessed, the video image is firstly preprocessed in an image graying way for recognizing that a person suddenly enters the visual field;
in graying, different weights are respectively assigned according to different importance of three primary colors of the color image R, G, B, and then the brightness values of the three primary colors are weighted and averaged to serve as an image gray value.
3. The method for creating an intelligent door lock according to claim 2, wherein the graying processing formula is as follows:
f(i,j)=0.29R(i,j)+0.58G(i,j)+0.13B(i,j)
wherein f(i, j) represents the pixel value of the grayed image at position (i, j), and R(i, j), G(i, j) and B(i, j) represent the red, green and blue components at position (i, j), respectively;
the Lucas-Kanade optical flow method needs to simultaneously satisfy three basic assumptions, firstly, the gray scale is constant when the same point in a set space moves among different frames along with time;
secondly, the position of the moving object between the adjacent frames does not change greatly;
finally, points that are adjacent in space project to adjacent points in the image, and the optical flow vectors of all these points are consistent; considering that a point in space lies at position (x, y) in one frame with gray value I(x, y, t), after time dt the point moves to position (x + d_x, y + d_y), where its gray value is correspondingly I(x + d_x, y + d_y, t + dt).
4. The method of claim 3, wherein the pyramid image is created by two steps, one is Gaussian filtered smoothed image and the other is down-sampled;
the down-sampling is that each sampling reduces the resolution of each layer of image to half of the original resolution.
5. The method for creating an intelligent door lock according to claim 4, wherein the optical flow is calculated layer by layer from the top layer of the image pyramid by a Lucas-Kanade optical flow method, the optical flow estimation calculated from the previous layer of image is used as the initial value of the optical flow estimation of the next layer of image, and the steps are repeated until the optical flow information of the bottom layer of the image pyramid is calculated;
suppose that the maximum displacement of pixel motion that the original Lucas-Kanade optical flow method can handle is dmaxThen the maximum displacement of pixel motion that can be handled by the pyramid Lucas-Kanade optical flow method becomes dmaxfinal=(2L+1-1)dmax
6. The method as claimed in claim 5, wherein the optical flow of each layer of the image pyramid is computed with multiple iterations; for two adjacent frames I and J at a given pyramid layer, the number of iterations is set to K (K ≥ 1), and the optical flow obtained after iteration k-1 (2 ≤ k ≤ K) is
d^(k-1) = [d_x^(k-1), d_y^(k-1)]^T;
at the k-th iteration, d^(k-1) is used as the motion vector of image J to obtain the warped image
J_k(x, y) = J(x + d_x^(k-1), y + d_y^(k-1));
the objective of the k-th iteration is to compute the optimization vector η^k that minimizes the objective function
ε(η^k) = Σ_{x = p_x - w_x}^{p_x + w_x} Σ_{y = p_y - w_y}^{p_y + w_y} ( I(x, y) - J_k(x + η_x^k, y + η_y^k) )^2,
wherein w_x and w_y represent the size of the neighborhood window; solving gives η^k = G^(-1) b_k, and the optical flow after k iterations is d^k = d^(k-1) + η^k; the iteration ends when the number of iterations reaches K or the optimization variable η^k is smaller than the set threshold, and the computation of the optical flow of the next pyramid layer begins.
7. The method for creating an intelligent door lock according to claim 6, wherein when foreground person information is extracted, after optical flow information of a moving scene is obtained first, threshold segmentation is performed on the optical flow in the horizontal direction and the vertical direction respectively before optical flow components are combined;
secondly, when the horizontal direction and the vertical direction are respectively segmented, dynamically selecting a segmentation threshold value according to the optical flows of different motion scenes;
when the motion state of the moving object changes, the segmentation threshold value is adjusted according to the actual optical flow, finally, the optical flow of which the absolute value is smaller than the segmentation threshold value is regarded as that generated by the background and is removed, and the residual optical flow is reserved.
8. The method as claimed in claim 7, wherein when the neural network performs recognition, the neural network takes each frame of collected image data as input, compares the input with the face data in the local memory, and controls the electronic lock to switch on and off states after the comparison is successful.
9. A system for creating an intelligent door lock is characterized by comprising a training stage and a using stage, wherein image information data are collected through an image sensor in the training stage, local training is carried out in a microcontroller, and a training result is stored in a local memory;
in the using stage, data are read and preprocessed, then dynamic target detection is carried out on the image through pyramid Lucas-Kanade, foreground personnel information is extracted after detection is finished, then the extracted foreground personnel information is sent to a neural network for identification, personnel identity information is confirmed after identification is finished, and whether unlocking is carried out or not is judged.
Application CN202210407775.5A, filed 2022-04-19 (priority date 2022-04-19): Method and system for creating intelligent door lock. Status: Pending. Published as CN114783087A (en).

Priority Applications (1)

Application number: CN202210407775.5A (publication CN114783087A (en)) · Priority date: 2022-04-19 · Filing date: 2022-04-19 · Title: Method and system for creating intelligent door lock

Applications Claiming Priority (1)

Application number: CN202210407775.5A (publication CN114783087A (en)) · Priority date: 2022-04-19 · Filing date: 2022-04-19 · Title: Method and system for creating intelligent door lock

Publications (1)

Publication number: CN114783087A · Publication date: 2022-07-22

Family

ID=82431035

Family Applications (1)

Application number: CN202210407775.5A (publication CN114783087A (en)) · Priority date: 2022-04-19 · Filing date: 2022-04-19 · Title: Method and system for creating intelligent door lock

Country Status (1)

Country Link
CN (1) CN114783087A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101908140A (en) * 2010-07-29 2010-12-08 中山大学 Biopsy method for use in human face identification
CN103605971A (en) * 2013-12-04 2014-02-26 深圳市捷顺科技实业股份有限公司 Method and device for capturing face images
CN105989357A (en) * 2016-01-18 2016-10-05 合肥工业大学 Human face video processing-based heart rate detection method
CN109255298A (en) * 2018-08-07 2019-01-22 南京工业大学 Safety helmet detection method and system in dynamic background
CN113391695A (en) * 2021-06-11 2021-09-14 山东浪潮科学研究院有限公司 Low-power-consumption face recognition method based on TinyML


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20220722)