CN115130082A - Intelligent sensing and safety control method for ruggedized computer - Google Patents
- Publication number
- CN115130082A CN115130082A CN202211029826.1A CN202211029826A CN115130082A CN 115130082 A CN115130082 A CN 115130082A CN 202211029826 A CN202211029826 A CN 202211029826A CN 115130082 A CN115130082 A CN 115130082A
- Authority
- CN
- China
- Prior art keywords
- face
- image
- control method
- intelligent sensing
- computer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
Aiming at the requirement of a special-purpose computer to avoid potential safety hazards such as peeking and the user leaving, the method uses an ultra-wide-angle camera installed on the special-purpose computer to photograph the environment near the computer, identifies the identity of the user, scans the user's surroundings, senses and reacts to abnormal environmental states, and applies safety-protection control to the special-purpose computer.
Description
Technical Field
The invention belongs to the cross application field of machine vision and machine learning technology, and particularly relates to an intelligent sensing and safety control system for a special computer.
Background
A special-purpose computer within a ruggedized computer generally refers to a computer system built for a particular use or application, such as a deployment where security is required. Modern computer equipment is generally fitted with image-based identity-authentication hardware and methods such as face recognition, which confirm the identity of the person logging in by recognizing facial biometric features and so safeguard the computer's use. For some special-purpose computers, however, existing login verification based on image face recognition is not enough to meet the security requirements: it cannot respond well to risks such as peeking or the user leaving the machine unattended.
In particular, some special-purpose computers do not sit in a stable environment such as a machine room but must be used in the field, on ships, and in other demanding settings. In these settings the background is cluttered, the illumination changes violently, and the whole device shakes. Faced with such complex environments, conventional image-detection methods can neither log the user in quickly and accurately nor comprehensively and accurately monitor conditions after login.
The image-detection algorithm therefore needs to be optimized specifically for complex environments to improve the safety of the special-purpose computer in use. At the same time, how to ensure the computer's safety throughout the whole session is an urgent problem to be solved.
Disclosure of Invention
To solve the above problems, the following invention is proposed.
Aiming at the professional requirements of a special computer for avoiding potential safety hazards such as peeking and user leaving, the method utilizes an ultra-wide-angle camera installed on the special computer to shoot the environment near the computer, identifies the identity of the user, scans the surrounding environment of the user, senses and reacts to abnormal environmental states, and implements safety protection control on the special computer.
Intelligent sensing and safety control method for ruggedized computer
Step 1: acquire visible-light and infrared composite-band images at the same moment, recorded as V(x, y, z), where x and y are position coordinates and z marks the imaging band (z = 0 for the visible band, z = 1 for the infrared band), and establish the hybrid observation model
O(x, y) = w · V(x, y, 0) + (1 − w) · V(x, y, 1) (1)
calculate the probability that any coordinate in the hybrid observation O(x, y) is a face, obtain the binary face-marked image B(x, y) from that probability by thresholding, and thereby mark the range of the face in the hybrid observation; apply pixel-by-pixel median filtering to B(x, y) to obtain the filtered image B'(x, y); take the set of face pixels in B'(x, y), denoted F;
wherein R denotes a rectangular region of the image, W and H respectively denote the width and height of the rectangular region, and |·| denotes the number of pixels in a pixel set;
if the rectangular region R* solved according to formula (4) satisfies formula (5), the region R* is taken to be a face region; otherwise no face can be found in the current image; here T1 and T2 are control thresholds; after the face region is determined, it is divided into subgraphs, convolution operations are applied to the subgraphs to extract features, and whether the user is a legitimate logger is judged from the features;
Step 2: after login is finished, scan the surrounding area with windows of different sizes and feed them into a neural network model to judge whether the acquired image contains the face of a non-logged-in person; if so, issue a warning; after warnings have been issued over several consecutive frames, control the special-purpose computer to enter the non-logged-in state;
wherein the hidden layer of the neural network model consists of three convolutional layers and one fully connected layer; the outputs of the three convolutional layers are computed with convolution kernels K1, K2, K3 whose sizes increase layer by layer; p and q denote relative position coordinates within a convolution kernel; b1, b2, b3 are the linear biases of each layer; the excitation function f(·) is a nonlinear piecewise function.
The collection adopts an ultra-wide-angle camera.
In step 2, the ultra-wide-angle camera regularly acquires an environment image to monitor the login environment around the special-purpose computer.
The width and height of the face rectangle obtained by the method in step 1 are taken as the upper threshold of the scanning-window size.
After the computer enters the non-logged-in state in step 2, the method of step 1 must be used again for verification before the computer returns to the logged-in state.
An output layer is connected after the fully connected layer of the neural network model.
The invention has the advantages that:
1. The infrared and visible-light images are superimposed into a single image by weighting, and the algorithm is optimized so that the face range can be delimited quickly and accurately; together with the feature-extraction algorithm this ensures accurate extraction of facial features. Recognition is therefore more efficient than working directly on the captured picture or on a lightly processed version of it. Fast, accurate extraction of facial features is then achieved by optimizing the extraction template and algorithm. Compared with traditional face recognition that relies on a neural network alone or on image-segmentation recognition, this method is faster and more resistant to interference, securing both login safety and speed in harsh environments.
2. Environment scanning does not demand the same real-time responsiveness as the login step, so to improve scanning efficiency and accuracy only the visible-light image is used; the neural network model is optimized with variable convolution kernels and a special excitation function, and is trained with a better cost function, ensuring fast and accurate recognition of non-logged-in faces in harsh environments.
3. By continuously scanning the environment after face-scan login, raising an alarm, and returning to the non-logged-in state when danger is found, the invention couples face login authentication with scanning for other, unauthorized users, achieving whole-process safety precaution from login through use. Aimed at the special-purpose computer's requirement to avoid hazards such as peeking and the user leaving, the method photographs the environment near the computer, identifies the user, scans the surroundings, and senses and reacts to abnormal environmental states.
Detailed Description
Step 1: user identification and login control based on ultra-wide-angle images. An ultra-wide-angle camera photographs a visible-light and infrared composite-band image of the user's face; a computer-vision recognition method compares whether the face appearing in the image is consistent with the user face image registered in advance, and the user is allowed to log in to the system once judged consistent.
The ultra-wide-angle camera refers to a camera whose lens covers a visual range larger than 170 degrees. Multimode image collection means that the camera's photosensitive element can collect images across a composite band spanning the infrared and visible-light bands.
Owing to the optical-lens manufacturing process, an ultra-wide-angle lens produces larger image distortion than a conventional lens, which complicates the computer-vision recognition task. To improve the accuracy of facial-feature recognition, the invention jointly uses visible-light and infrared composite-band images for identity recognition.
A camera capable of acquiring visible-light and infrared composite-band images is installed at a suitable position on the upper part of the special-purpose computer so that it can easily capture the user's face. Visible-light and infrared composite-band images that may contain a face are collected at the same moment and recorded as V(x, y, z), where (x, y) are the position coordinates of a pixel in the image and z marks the imaging band: when z = 0, V(x, y, z) is a pixel of the visible band; when z = 1, V(x, y, z) is a pixel of the infrared band.
The hybrid observation O(x, y) is expressed as the weighted sum of the pixels at corresponding positions in the visible and infrared bands:
O(x, y) = w · V(x, y, 0) + (1 − w) · V(x, y, 1) (1)
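The weighted blend described above can be sketched as follows (NumPy; the weight w and the band layout are illustrative assumptions, not values from the filing, which learns the weighting from data):

```python
import numpy as np

def hybrid_observation(vis, ir, w=0.6):
    """Blend a visible-band and an infrared-band image into one hybrid
    observation O(x, y) = w * V(x, y, 0) + (1 - w) * V(x, y, 1).
    The value of w here is illustrative only."""
    vis = vis.astype(np.float64)
    ir = ir.astype(np.float64)
    return w * vis + (1.0 - w) * ir

# tiny 2x2 example
vis = np.array([[100.0, 200.0], [0.0, 50.0]])
ir = np.array([[0.0, 100.0], [200.0, 50.0]])
O = hybrid_observation(vis, ir, w=0.5)  # pixel-wise average for w=0.5
```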
Let P(O(x, y)) denote the probability that the hybrid-observation pixel value at (x, y) represents a face, governed by the distribution parameters of the face-pixel hybrid observation. Modeling with a Gaussian model:
P(O(x, y)) = (1/(√(2π)·σ)) · exp(−(O(x, y) − μ)²/(2σ²)) (2)
where π is the circumference ratio, exp denotes the natural exponential function, μ is the mean parameter of the Gaussian model, and σ² is its variance parameter.
Sample data of composite-band face images are prepared, and the optimal parameters (the blending weight w, the mean μ, and the variance σ²) can be learned by maximum likelihood from equations (1) and (2). Using this optimal solution, the probability that any coordinate of the hybrid observation O(x, y) is a face can be calculated; the binary face-marked image B(x, y) is then obtained by thresholding, marking the range of the face in the hybrid observation.
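A minimal sketch of the maximum-likelihood fit and per-pixel scoring, assuming the model is exactly the scalar Gaussian of equation (2) (the sample values and threshold choice are illustrative):

```python
import numpy as np

def fit_gaussian(samples):
    """Maximum-likelihood estimate of the Gaussian face model in (2):
    mu is the sample mean and sigma^2 the (biased) sample variance of
    hybrid-observation values collected over labelled face pixels."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean(), samples.var()

def face_probability(o, mu, sigma2):
    """Gaussian density P(O(x, y)) used to score each pixel before
    thresholding into the binary face map B(x, y)."""
    return np.exp(-(o - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

mu, s2 = fit_gaussian([4.0, 6.0, 5.0, 5.0])   # toy face-pixel samples
p_face = face_probability(5.0, mu, s2)        # value near the mean
p_far = face_probability(9.0, mu, s2)         # value far from the mean
```

Thresholding `face_probability` over the whole image yields the binary face map B(x, y).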
B(x, y) is filtered with a pixel-by-pixel median filter to obtain the filtered image B'(x, y). From the experimental data, a preferred filter window is 9 × 9 at an original image resolution of 640 × 480.
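The median-filtering step can be sketched as follows (pure NumPy; a 3 × 3 window is used only to keep the toy example small, whereas the text prefers 9 × 9 at 640 × 480):

```python
import numpy as np

def median_filter_binary(B, k=3):
    """Pixel-by-pixel median filter of a binary mask B with a k x k
    window; edge pixels use the partial window that fits inside the
    image. Removes isolated noise pixels from the face map."""
    h, w = B.shape
    r = k // 2
    out = np.zeros_like(B)
    for i in range(h):
        for j in range(w):
            win = B[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            out[i, j] = np.median(win)
    return out

# an isolated noise pixel is removed
B = np.zeros((5, 5), dtype=int)
B[2, 2] = 1
clean = median_filter_binary(B, k=3)
```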
The above method marks the face part of the hybrid observation as a set of pixels; the set of face pixels in B'(x, y) is denoted F, and |·| denotes the number of pixels in a pixel set. A rectangular subset R of the image is defined by the conditions below, where W and H respectively denote the width and height of the rectangular region, and ∩ denotes set intersection, i.e. the set of pixels belonging to both sets.
The following is then calculated: a rectangular region R* is solved according to formula (4), and if R* also satisfies formula (5) it is taken to be a face region; otherwise no face can be found in the current image. In formulas (4) and (5), T1 and T2 are control thresholds whose preferred values were determined experimentally.
The rectangular region R* satisfying both (4) and (5), mapped back to the original image, gives the position of the face in the original image. This method extracts the region containing the face from the visible-light and infrared composite-band image; it is more precise than traditional face detection based on a visible-light Gaussian model, and more efficient than face detection based on high-dimensional parametric models such as the neural networks popular in recent years. It meets the engineering requirements of the invention, spares the user a long wait during login, and improves the user experience while ensuring safety.
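Formulas (4) and (5) are not reproduced in this text, so the following is only one plausible reading of the rectangle criterion, stated as an assumption: take the bounding rectangle of the face-pixel set F and accept it if its fill ratio and area clear the two control thresholds (t1 and t2 stand in for T1 and T2):

```python
import numpy as np

def face_rectangle(mask, t1=0.5, t2=4):
    """Assumed reading of the filing's rectangle test: take the
    bounding rectangle R of the face-pixel set F, then accept it only
    if the fill ratio |R ∩ F| / (W*H) >= t1 and the area W*H >= t2.
    Returns (x0, y0, W, H) or None if no face region is found."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    W, H = x1 - x0 + 1, y1 - y0 + 1
    fill = mask[y0:y1 + 1, x0:x1 + 1].sum() / (W * H)
    if fill >= t1 and W * H >= t2:
        return (x0, y0, W, H)
    return None

mask = np.zeros((8, 8), dtype=int)
mask[2:6, 3:6] = 1          # a solid 4-row by 3-column face blob
rect = face_rectangle(mask)
```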
Further, let V_F denote the region of the visible-light and infrared composite-band image V(x, y, z) where the face is located, with (x, y) the position coordinates and z the band mark of each pixel. V_F is divided along the x and y directions into 8 × 8 equal-size subsets. If the original image size is not exactly divisible by 8, it may be padded with zero pixels.
For each subset a statistic is calculated according to formula (6), in which |F| is, as defined above, the number of pixels of the face region, p and q range from 1 to 8, and the normalization coefficient is as defined above.
A total of 8 × 8 × 4 = 256 results are obtained from equation (6), forming a 256-dimensional vector u.
Binary classification is performed on the vector u to identify whether the face in the image belongs to the specific user. Several visible-light and infrared composite-band face images of the specific user are photographed and the corresponding vectors u computed as above, forming a training sample set. A binary classification algorithm such as a support vector machine then yields a recognition model for the specific user's face image, deciding whether an input is a positive sample (the specific user's face) or a negative sample (not the specific user's face). Through this template, the features can be extracted quickly and accurately and detection performs better.
After the special-purpose computer is started, the system is in the non-logged-in state. A new group of visible-light and infrared composite-band images is fed through the camera to the recognition model, which judges whether the input is a positive or negative sample; if it is judged a positive sample, the user is allowed to log in to the system. After login, the special-purpose computer system is in the logged-in state.
Step 2: login-environment monitoring based on ultra-wide-angle images. The environment around the user is scanned and monitored in the ultra-wide-angle image, and if anyone other than the user is found, a monitoring alarm is started.
After a specific user of the special-purpose computer has logged in using the method described in step 1, the system enters the logged-in state. In the logged-in state, the surrounding environment of the special-purpose computer is monitored at regular intervals by capturing images, as follows.
The timed monitoring method and process are as follows.
A visible-light and infrared composite-band image is collected at intervals, and a non-specific face detection algorithm is applied to the visible-band part to detect whether a non-specific face is present. A non-specific face is a face image considered without regard to identity. During non-specific face detection only the visible-band image is used, not the infrared-band image, which improves computational efficiency.
The width W1 and height H1 of the face rectangle obtained by the method of step 1 are taken as the upper threshold of the window size, and a lower threshold (W0, H0) is additionally set; [W0, W1] × [H0, H1] is the candidate size range for non-specific face detection. Let s be the step by which the detection-area size grows in the width and height directions, and t the step by which the detection-area start point moves in the width and height directions. The visible-band image collected in step 2 is detected as in S21-S24.
S21: set the initial detection-area start-point coordinates to (0, 0) and the initial detection-area width and height to (W0, H0); the rectangle so formed is the initial detection area. Non-specific face detection is performed on the sub-image of the visible-band image inside the detection area.
S21.1: if the non-specific face detection method returns that the sub-image is a face, the method of step 1 is applied, together with the visible-light and infrared composite-band image of the corresponding area, to check whether the sub-image belongs to a positive sample of the specific user. If it is a positive sample of the specific user, continue with step S22; otherwise go to step S24.
S21.2: otherwise, if the non-specific face detection method returns that the sub-image is not a face, continue with step S22.
S22: move the detection-area start point by the step t to (n·t, m·t), where n and m are the integer numbers of moves in each direction; after each move, detect as in S21 until the whole original image has been scanned at the current detection-area size. Go to step S23.
S23: grow the detection-area size by the step s and repeat steps S21 and S22 until the upper size threshold (W1, H1) is reached.
S24: issue a warning indicating that the face of someone other than the specific user has been detected in the current image. As an optimized automatic safety control, after warnings have been issued over several consecutive frames, the special-purpose computer is controlled to enter the non-logged-in state, protecting the safety of its data and information. After the computer enters the non-logged-in state, it must be verified again by the method of step 1 before returning to the logged-in state.
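The window enumeration in S21-S23 can be sketched as a generator (all step values, and the convention that position and size share one step each, are illustrative assumptions):

```python
def scan_windows(img_w, img_h, w_min, h_min, w_max, h_max,
                 pos_step, size_step):
    """Enumerate detection windows as in S21-S23: start at the lower
    size threshold, slide the start point across the image in steps of
    pos_step, then grow the window by size_step up to the upper
    threshold taken from the step-1 face rectangle.
    Yields (x, y, w, h) tuples."""
    w, h = w_min, h_min
    while w <= w_max and h <= h_max:
        for y in range(0, img_h - h + 1, pos_step):
            for x in range(0, img_w - w + 1, pos_step):
                yield (x, y, w, h)
        w += size_step
        h += size_step

# 8x8 image, window sizes 4 and 6, step 2 in both position and size
windows = list(scan_windows(8, 8, 4, 4, 6, 6, 2, 2))
```

Each yielded window would be cropped from the visible-band image and passed to the non-specific face detector of S21.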
In the non-specific face detection of step S21, a fast convolutional neural network model performs face detection on the sub-image. The neural network model consists of an input layer, an output layer, and hidden layers.
The input of the neural network model is a sub-image of the visible-band image, with pixel coordinates (x, y) taken within the sub-image.
The hidden layer of the neural network model consists of three convolutional layers and one fully connected layer, each layer being a function of the previous one. The first convolutional layer uses a rectangular convolution kernel K1 of size 3 × 3; p and q denote relative position coordinates within the kernel, measured as offsets from the reference position, so p and q take the 3 integer values in [−1, 1]; b1 is a linear bias. The convolution kernel and the linear bias are determined by learning. The excitation function f is a nonlinear piecewise function in which log is the natural logarithm and e the natural exponential. This nonlinear piecewise function establishes a nonlinear input-output mapping and, compared with a traditional nonlinear natural-logarithm model, further reduces overfitting in the model's classification, reducing the model's recognition deviation between learning and test samples.
The second convolutional layer uses a rectangular convolution kernel K2 of size 9 × 9; p and q denote relative position coordinates within the kernel, so p and q take the 9 integer values in [−4, 4]; b2 is a linear bias. The convolution kernel and the linear bias are determined by learning; f is defined as in (8).
The third convolutional layer uses a rectangular convolution kernel K3 of size 17 × 17; p and q take the 17 integer values in [−8, 8]; b3 is a linear bias, likewise learned; f is defined as in (8).
The three convolutional layers jointly extract face-related local image features; kernels of different sizes accommodate local features at different scales, and because convolution has fast parallel implementations on modern computers, the method is efficient to realize.
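One such convolutional layer can be sketched as follows. The filing's exact piecewise log/exp excitation is not reproduced in the text, so softplus f(u) = log(1 + e^u) is used purely as a stand-in, and the toy kernels are 3 × 3 and 5 × 5 rather than the filing's 3/9/17 so the example stays small:

```python
import numpy as np

def softplus(u):
    """Stand-in excitation (an assumption): a smooth log/exp
    nonlinearity in place of the filing's unreproduced piecewise
    function."""
    return np.log1p(np.exp(u))

def conv2d(img, kernel, bias):
    """Plain 'valid' 2-D convolution followed by the excitation,
    mirroring one hidden layer C(x, y) = f(sum_{p,q} K[p, q] *
    I[x + p, y + q] + b)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel) + bias
    return softplus(out)

img = np.random.default_rng(0).random((12, 12))
h1 = conv2d(img, np.ones((3, 3)) / 9.0, 0.0)    # first, small kernel
h2 = conv2d(h1, np.ones((5, 5)) / 25.0, 0.0)    # second, larger kernel
```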
The fully connected layer after the three convolutional layers connects every node of the third convolutional layer to every node of the fully connected layer, which contains 256 nodes; each connection carries a learned weight, the layer has a linear bias, and f is defined as in (8).
Every node of the fully connected layer is likewise connected to the output node Face, with learned connection weights, a linear bias, and f defined as in (8).
The Face value is 0 or 1: Face = 0 indicates that the input image is not a human face, and Face = 1 indicates that it is. The face here is specifically a non-specific face.
The neural network model (7)-(12) is learned with the following cost function, and after learning the model is used for non-specific face detection:
where y is the true value of a learning sample for a given image sample and ŷ is the output computed by substituting the learning-sample input into the model; a control parameter c > 0 helps improve the model's robustness to noise, with a preferred value determined experimentally. The cost function is solved iteratively to obtain the optimized values of the parameters in model (7)-(12).
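The cost function itself is not reproduced in the text; as an assumption, one common robust cost of the described shape (a control parameter that down-weights large, noisy residuals) is sketched below:

```python
import numpy as np

def robust_cost(y_true, y_pred, c=1.0):
    """Illustrative robust cost, E = sum log(1 + (y - yhat)^2 / c^2):
    the squared error is passed through a log so outlier residuals
    contribute less, which is one way a control parameter c can add
    robustness to noise. Both the form and c are assumptions."""
    r = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sum(np.log1p(r ** 2 / c ** 2)))

e_small = robust_cost([1.0, 0.0], [1.0, 0.0])   # perfect predictions
e_big = robust_cost([1.0, 0.0], [0.0, 1.0])     # both wrong
```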
After model learning is completed, timed monitoring can be implemented: a visible-light and infrared composite-band image is collected at intervals, the neural network model detects whether a non-specific face is present in the visible-band part, and measures are taken according to S21-S24.
The invention provides an intelligent sensing and safety control method for a special-purpose computer, realizing intelligent sensing and safety control through automatic image recognition. Table 1 shows the application test indexes of the method, and Table 2 shows those of the prior art. The experimental results show that in benign environments the invention has no obvious advantage over the prior art, but its performance in the field and on pitching ships, where the environment is complex, is far higher than the prior art's. The method can therefore effectively identify abnormal environments, react to them quickly, and realize intelligent sensing and safety control of the special-purpose computer.
TABLE 1
TABLE 2
The above embodiments are only a limited set of examples and cannot be exhaustive within this space; the scope of the claims is therefore not limited to them, and all technical solutions similar to the above products and methods fall within the scope of the present application.
Claims (10)
1. An intelligent sensing and safety control method for a ruggedized computer is characterized by comprising the following steps:
step 1: acquire visible-light and infrared composite-band images at the same moment, recorded as V(x, y, z), where x and y are position coordinates and z marks the imaging band (z = 0 for the visible band, z = 1 for the infrared band), and establish the hybrid observation model O(x, y) as the weighted sum of the two bands at each position;
calculate the probability that any coordinate in the hybrid observation O(x, y) is a face, obtain the binary face-marked image B(x, y) from that probability by thresholding, and thereby mark the range of the face in the hybrid observation; apply pixel-by-pixel median filtering to B(x, y) to obtain the filtered image B'(x, y); take the set of face pixels in B'(x, y), denoted F;
wherein R denotes a rectangular region of the image, W and H respectively denote the width and height of the rectangular region, and |·| denotes the number of pixels in a pixel set;
if the rectangular region R* solved according to formula (4) satisfies formula (5), the region R* is taken to be a face region; otherwise no face can be found in the current image; here T1 and T2 are control thresholds; after the face region is determined, it is divided into subgraphs, convolution operations are applied to the subgraphs to extract features, and whether the user is a legitimate logger is judged from the features;
step 2: after login is finished, scan the surrounding area with windows of different sizes and feed them into a neural network model to judge whether the acquired image contains the face of a non-logged-in person; if so, issue a warning; after warnings have been issued over several consecutive frames, control the special-purpose computer to enter the non-logged-in state;
wherein the hidden layer of the neural network model consists of three convolutional layers and one fully connected layer; the outputs of the three convolutional layers are computed with convolution kernels K1, K2, K3 whose sizes increase layer by layer; p and q denote relative position coordinates within a convolution kernel; b1, b2, b3 are the linear biases of each layer; the excitation function f(·) is a nonlinear piecewise function.
2. the intelligent sensing and security control method for hardened computers according to claim 1, characterized in that: the collection adopts an ultra wide angle camera.
3. The intelligent sensing and security control method for hardened computers according to claim 1, characterized in that: and step 2, regularly acquiring an environment image by using the ultra-wide angle camera, and monitoring the login environment around the special computer.
6. The intelligent sensing and security control method for ruggedized computers according to claim 1, characterized in that: after the computer enters the non-login state in step 2, it must be verified again by the method of step 1 before it re-enters the login state.
10. The intelligent sensing and security control method for ruggedized computers according to claim 1, characterized in that: an output layer is connected after the fully connected layer of the neural network model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211029826.1A CN115130082B (en) | 2022-08-26 | 2022-08-26 | Intelligent sensing and safety control method for ruggedized computer |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115130082A true CN115130082A (en) | 2022-09-30 |
CN115130082B CN115130082B (en) | 2022-11-04 |
Family
ID=83387918
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211029826.1A Active CN115130082B (en) | 2022-08-26 | 2022-08-26 | Intelligent sensing and safety control method for ruggedized computer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115130082B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106803301A (en) * | 2017-03-28 | 2017-06-06 | 广东工业大学 | A kind of recognition of face guard method and system based on deep learning |
US20180068198A1 (en) * | 2016-09-06 | 2018-03-08 | Carnegie Mellon University | Methods and Software for Detecting Objects in an Image Using Contextual Multiscale Fast Region-Based Convolutional Neural Network |
CN110414305A (en) * | 2019-04-23 | 2019-11-05 | 苏州闪驰数控***集成有限公司 | Artificial intelligence convolutional neural networks face identification system |
WO2020258121A1 (en) * | 2019-06-27 | 2020-12-30 | 深圳市汇顶科技股份有限公司 | Face recognition method and apparatus, and electronic device |
Non-Patent Citations (1)
Title |
---|
鲍睿栋 (Bao Ruidong) et al.: "A Survey of Face Recognition Research Based on Convolutional Neural Networks" (基于卷积神经网络的人脸识别研究综述), 《软件导刊》 (Software Guide) * |
Also Published As
Publication number | Publication date |
---|---|
CN115130082B (en) | 2022-11-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Jourabloo et al. | Face de-spoofing: Anti-spoofing via noise modeling | |
Priesnitz et al. | An overview of touchless 2D fingerprint recognition | |
Cozzolino et al. | Noiseprint: A CNN-based camera model fingerprint | |
Zhang et al. | Deep-IRTarget: An automatic target detector in infrared imagery using dual-domain feature extraction and allocation | |
Cai et al. | DRL-FAS: A novel framework based on deep reinforcement learning for face anti-spoofing | |
Tian et al. | Detection and separation of smoke from single image frames | |
CN104951940B (en) | A kind of mobile payment verification method based on personal recognition | |
Raghavendra et al. | Scaling-robust fingerprint verification with smartphone camera in real-life scenarios | |
Deb et al. | Look locally infer globally: A generalizable face anti-spoofing approach | |
Chen et al. | An adaptive CNNs technology for robust iris segmentation | |
CN111709313B (en) | Pedestrian re-identification method based on local and channel combination characteristics | |
Zhang et al. | License plate localization in unconstrained scenes using a two-stage CNN-RNN | |
CN108446699A (en) | Identity card pictorial information identifying system under a kind of complex scene | |
CN106778742B (en) | Car logo detection method based on Gabor filter background texture suppression | |
CN111767879A (en) | Living body detection method | |
CN113128481A (en) | Face living body detection method, device, equipment and storage medium | |
Yeh et al. | Face liveness detection based on perceptual image quality assessment features with multi-scale analysis | |
CN111222380A (en) | Living body detection method and device and recognition model training method thereof | |
CN110852292B (en) | Sketch face recognition method based on cross-modal multi-task depth measurement learning | |
Lorch et al. | Reliable camera model identification using sparse gaussian processes | |
Wang et al. | Domain generalization for face anti-spoofing via negative data augmentation | |
Huang et al. | Multi-Teacher Single-Student Visual Transformer with Multi-Level Attention for Face Spoofing Detection. | |
CN115130082B (en) | Intelligent sensing and safety control method for ruggedized computer | |
Li et al. | A dual-modal face anti-spoofing method via light-weight networks | |
CN116229528A (en) | Living body palm vein detection method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||