CN110222608A - Intelligent processing method for vision detection of self-service physical examination machine - Google Patents

Intelligent processing method for vision detection of self-service physical examination machine

Info

Publication number
CN110222608A
CN110222608A (application number CN201910442135.6A)
Authority
CN
China
Prior art keywords
physical examination
module
eye
determining
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910442135.6A
Other languages
Chinese (zh)
Inventor
韩东明
解凡
寇瑜琨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Hablile Technology Ltd By Share Ltd Information System
Original Assignee
Shandong Hablile Technology Ltd By Share Ltd Information System
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Hablile Technology Ltd By Share Ltd Information System filed Critical Shandong Hablile Technology Ltd By Share Ltd Information System
Priority to CN201910442135.6A priority Critical patent/CN110222608A/en
Publication of CN110222608A publication Critical patent/CN110222608A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07F COIN-FREED OR LIKE APPARATUS
    • G07F17/00 Coin-freed apparatus for hiring articles; Coin-freed facilities or services
    • G07F17/04 Coin-freed apparatus for hiring articles; Coin-freed facilities or services for anthropometrical measurements, such as weight, height, strength

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The embodiment of the invention discloses an intelligent processing method for vision detection of a self-service physical examination machine, characterized by comprising: acquiring monitoring video information in a physical examination cabin; identifying the number of physical examination people in the cabin; determining the glasses wearing condition of the examinee; determining the eye occlusion condition of the examinee; and carrying out vision detection on the examinee. The embodiment provides a method that uses artificial intelligence to automatically complete a series of operations: counting the people in the cabin, identifying whether glasses are worn, and checking whether the eye-covering manner is correct. This improves the degree of automation of vision detection, saves manpower, and improves detection accuracy.

Description

Intelligent processing method for vision detection of self-service physical examination machine
Technical Field
The embodiment of the invention relates to the technical field of self-service physical examination, in particular to an intelligent processing method for vision detection of a self-service physical examination machine.
Background
When applying for a driver's license, people of eligible age must undergo a physical examination as required by the Regulations on the Application and Use of Motor Vehicle Driving Licenses (2016). Traditionally this examination is performed manually; at present, self-service physical examination equipment is being trialed in a small number of areas to enable unattended examination. However, the self-service equipment on the market is not intelligent enough to meet real-world requirements: it cannot judge whether several people are present in the same cabin, whether the examinee wears glasses, or whether the examination is performed in the prescribed manner, and it cannot stop the examination in time or prevent cheating when the examinee is swapped midway or another person intervenes. Staff therefore have to monitor the process in real time, or review the videos and snapshot photos afterwards. This not only affects efficiency; the result is also influenced by the monitoring angle, the timing, and the reviewers' subjective factors, which greatly reduces the validity of the examination process and the accuracy of its results.
Disclosure of Invention
Therefore, the embodiment of the invention provides an intelligent processing method for vision detection of a self-service physical examination machine, which aims to solve the prior-art problems of a non-standard detection environment and low detection accuracy caused by subjective factors in manual judgment during vision detection.
In order to achieve the above object, an embodiment of the present invention provides a method that uses artificial intelligence to automatically complete a series of operations, such as counting the people in the cabin, identifying whether glasses are worn, and checking whether the eye-covering manner is correct, so as to improve the degree of automation of vision detection and save manpower. The specific technical scheme is as follows:
the embodiment of the invention provides an intelligent processing method for vision detection of a self-service physical examination machine, which is characterized by comprising the following steps:
acquiring monitoring video information in a physical examination cabin;
identifying the number of physical examination people in the physical examination cabin;
determining the glasses wearing condition of the physical examination person;
determining the eye shielding condition of the physical examination person;
and carrying out vision detection on the physical examination person.
Further, the number of the physical examination people in the physical examination cabin is identified by the following steps:
according to a preset video frame rate, carrying out human body key part recognition on the video information based on a human body posture recognition model to obtain human body posture data; the human body posture data comprise face key part posture data and body key part posture data; the key part posture data comprises coordinate information and score information of the key part;
analyzing the human body posture data, judging the number of the physical examination people in the physical examination cabin, and finishing the identification of the number of the physical examination people.
Further, determining the glasses wearing condition of the physical examination person comprises:
acquiring face image information of a physical examination person in the monitoring video information in the physical examination cabin;
and inputting the face image information into a glasses wearing classification model trained in advance, predicting whether the physical examination person wears glasses or not, and determining the glasses wearing condition of the physical examination person.
Further, if it is identified that the number of physical examination people in the physical examination cabin is a single person, determining the eye shielding condition of the physical examination people comprises:
determining the position of the occluded eye;
inputting the face image information in the monitoring video information to a pre-trained masking plate position model, and predicting the position of the masking plate;
calculating a ratio between the position of the blocked eyes and the position of the masking plate;
if the ratio is larger than a preset threshold value, the eye shielding of the physical examination person is correct.
Further, the masking plate position model is trained by adopting Faster R-CNN.
Further, determining the occluded eye location comprises:
fitting the posture data of the key parts of the face by adopting a least square method to obtain a head centerline function;
and mapping the shielded eye key part according to the head midline function by taking the unblocked eye key part as a reference position, so as to determine the shielded eye position.
The invention provides an intelligent processing device for vision detection of a self-service physical examination machine, which is characterized by comprising an acquisition module, an identification module, a glasses wearing determination module, an eye shielding determination module and a vision detection module;
the acquisition module is used for acquiring monitoring video information in the physical examination cabin;
the identification module is used for identifying the number of physical examination people in the physical examination cabin;
the glasses wearing determining module is used for determining the glasses wearing condition of the physical examination person;
the eye occlusion determining module is used for determining the eye occlusion condition of the physical examination person;
the vision detection module is used for carrying out vision detection on the physical examination person.
Further, the recognition module comprises a human body posture data calculation module and an analysis module; wherein,
the human body posture data calculation module is used for carrying out human body key part recognition on the video information based on a human body posture recognition model according to a preset video frame rate to obtain human body posture data;
the analysis module is used for analyzing the human body posture data, judging the number of the physical examination people in the physical examination cabin and finishing the identification of the number of the physical examination people.
Further, the glasses wearing determining module comprises a face image information obtaining module and a model calculating module; wherein,
the face image information acquisition module is used for acquiring face image information of a physical examination person in the monitoring video information in the physical examination cabin;
the model calculation module is used for inputting the face image information into a glasses wearing classification model trained in advance, predicting whether the physical examination person wears glasses or not, and determining the glasses wearing condition of the physical examination person.
Furthermore, the module for determining eye shielding comprises a shielding position determining module, a model predicting module, a ratio calculating module and a judging module; wherein,
the occlusion position determining module is used for determining the position of the occluded eye;
the model prediction module is used for inputting the face image information in the monitoring video information into a pre-trained masking plate position model and predicting the position of the eye masking plate;
the ratio calculation module is used for calculating the ratio between the position of the shielded eye and the position of the masking plate;
the judging module is used for judging that the eye shielding of the physical examination person is correct when the ratio is larger than a preset threshold value;
the shielding position determining module comprises a midline function calculating module and a mapping determining module;
the central line function calculation module is used for fitting the pose data of the key parts of the face by adopting a least square method to obtain a head central line function;
the mapping determination module is used for mapping the shielded eye key parts according to the head midline function by taking the screened eye key parts as reference positions so as to determine the shielded eye positions.
The embodiment of the invention has the following advantages:
the invention adopts an artificial intelligent processing method to finish the automatic identification of the number of the physical examination people in the physical examination cabin, automatically determine the wearing condition of glasses of the physical examination people and the eye shielding condition of the physical examination people, and carry out vision detection on the physical examination people under the condition of meeting all the preset conditions. The whole process does not need manual participation, and all processes are automatically processed by the self-service physical examination machine. The manpower is saved, and the detection accuracy is improved.
Furthermore, the human body posture recognition model is adopted to recognize key parts of the human body to obtain human body posture data; the data are screened multiple times, and the number of physical examination people in the cabin is then identified. Determining the number of examinees from human posture data allows the number of people in the cabin to be counted more accurately. Screening before identification filters out examinees with body deformities or those outside the fixed region, which reduces the difficulty of later recognition-algorithm development and guarantees recognition accuracy.
Furthermore, after the number of people is identified, the key parts of the examinee's face are calculated and analyzed to judge whether the examinee is a living person, retaining only real and credible human body posture data and preventing person images on clothing from affecting the subsequent people-count judgment.
Furthermore, after face recognition is completed, if the number of physical examination people in the cabin is identified as a single person, face comparison is performed on the examinee at predetermined time intervals. After the examinee has been confirmed as described above, to prevent a midway substitution of the examinee or prompting by another person at the door, a feature-value comparison of the examinee's face is carried out once per second, ensuring the examinee remains the same throughout the examination and thereby guaranteeing the authenticity and accuracy of the test results.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It should be apparent that the drawings in the following description are merely exemplary, and that other embodiments can be derived from the drawings provided by those of ordinary skill in the art without inventive effort.
The structures, proportions, sizes, and the like shown in this specification are only used to match the content disclosed herein, so that those skilled in the art can understand and read it; they are not intended to limit the conditions under which the invention can be implemented and therefore have no independent technical significance. Any structural modification, change of proportion, or adjustment of size that does not affect the effects achievable by the invention shall still fall within the scope covered by the technical content disclosed herein.
Fig. 1 is a flow chart of an intelligent processing method for vision test of a self-service physical examination machine according to embodiment 1 of the present invention;
fig. 2 is a flowchart of a preferred implementation of the intelligent processing method for vision testing of a self-service physical examination machine according to embodiment 2 of the present invention;
FIG. 3 is a picture of key parts of physical examination of human eyes;
FIG. 4 is a graph of eye aspect ratio variation;
fig. 5 is a simulation diagram of human body posture data.
Detailed Description
The present invention is described below in terms of particular embodiments; other advantages and effects of the invention will be readily apparent to those skilled in the art from this disclosure. The described embodiments are merely a part of the embodiments of the invention and are not intended to limit it to the particular forms disclosed. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1, a flow chart of an intelligent processing method for vision testing of a self-service physical examination machine provided in embodiment 1 of the present invention includes:
acquiring monitoring video information in a physical examination cabin;
identifying the number of physical examination people in the physical examination cabin;
determining the glasses wearing condition of the physical examination person;
determining the eye shielding condition of the physical examination person;
and carrying out vision detection on the physical examination person.
The invention performs a series of artificial-intelligence algorithm calculations on the monitoring video information in the physical examination cabin, including identifying the number of physical examination people in the cabin, determining the glasses wearing condition, and determining the eye shielding condition of the examinee, and detects the examinee's vision once all preset conditions are met, thereby realizing intelligent processing of vision detection.
The identification of the number of the physical examination people in the physical examination cabin comprises the following steps:
according to a preset video frame rate, carrying out human body key part recognition on the video information based on a human body posture recognition model to obtain human body posture data;
analyzing the human body posture data, judging the number of the physical examination people in the physical examination cabin, and finishing the identification of the number of the physical examination people.
For fast and efficient data processing, the video in the cabin is analyzed in real time at 25 frames per second. The human body posture recognition model may be DensePose, published by Facebook Research, or AlphaPose. Preferably, the human body posture data are obtained by identifying key parts of the human body in the video information based on the open-source human posture recognition framework OpenPose. OpenPose estimates the key parts of a human body and assigns each a confidence score in the range 0 to 1; the closer the score is to 1, the more reliable the estimated position.
Based on OpenPose, the in-cabin video is analyzed in real time at 25 frames per second, and human key-point recognition is performed on every human image appearing in each frame. Because human-shaped images can come from many sources, such as figures printed on the examinee's clothing, or non-examinees visible outside the door when the cabin door is opened, this interference data needs to be filtered out.
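The per-frame filtering step described above can be sketched in Python. This is an illustrative sketch only: the keypoint format (a list of (x, y, score) triples per detected person, as produced by OpenPose-style frameworks) and both threshold values are assumptions, not values from the patent.

```python
# Hypothetical sketch: filter OpenPose-style detections per frame.
# Each detection is a list of (x, y, score) keypoints; score lies in [0, 1].

SCORE_THRESHOLD = 0.5    # assumed confidence cutoff, not from the patent
MIN_VALID_KEYPOINTS = 8  # assumed: require enough reliable keypoints per person

def credible_people(detections):
    """Keep only detections with enough high-confidence keypoints,
    discarding noise such as printed figures on clothing."""
    kept = []
    for person in detections:
        valid = [kp for kp in person if kp[2] >= SCORE_THRESHOLD]
        if len(valid) >= MIN_VALID_KEYPOINTS:
            kept.append(person)
    return kept

frame = [
    [(100, 50, 0.9)] * 12,          # a real person: many confident keypoints
    [(300, 200, 0.2)] * 12,         # a printed figure: low scores throughout
]
print(len(credible_people(frame)))  # 1 credible person remains
```

Counting the credible detections per frame gives the in-cabin people count that the later screening stages refine.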
The human body posture data comprise coordinate information and score information of the key human body parts. The key parts include key face parts and key body parts, i.e., the human body posture data comprise key-face-part posture data and key-body-part posture data, and each key part's posture data consist of its coordinate information and score information.
after the human body posture data are obtained through calculation, the human body posture data are screened for a plurality of times, and preferably, the human body posture data are screened for two times. Firstly, the proportion between the human body posture data and the physical examination fixed area is analyzed, and the physical examination people which do not accord with the preset proportion range are preliminarily screened.
Specifically, the human body posture data are first screened by proportion. For a normal person, the shoulder width and head length fall within a certain proportion range relative to the fixed area selected in the physical examination cabin (the range depends on the specific size of the selected area), and the human body posture data are preliminarily screened against this range.
Secondly, the position and proportion relations between the posture data of the key face parts and the key body parts are analyzed for a second screening. Specifically, for a normal examinee the key body parts conform to physiological structure: the head is above the shoulders, and the ratio of head width to shoulder width falls within a certain range. The examinees are screened a second time against these physiological proportion ranges.
After the two screening passes, the number of remaining human body posture data items is counted, which gives the number of physical examination people in the cabin. The two screening passes filter out special cases that do not meet the detection requirements or have body deformities, ensuring that the people participating are normal examinees and guaranteeing the detection accuracy of self-service physical examination.
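The second, physiology-based screening pass can be illustrated as follows. The head-to-shoulder ratio bounds and the helper name `passes_screening` are invented for illustration, since the patent states only that such proportion ranges exist, not their concrete values.

```python
# Hypothetical sketch of the physiological-proportion screening; the ratio
# bounds below are illustrative assumptions, not values from the patent.

HEAD_TO_SHOULDER_MIN, HEAD_TO_SHOULDER_MAX = 0.4, 0.9  # assumed range

def passes_screening(head_width, shoulder_width, head_y, shoulder_y):
    """Check that the head is above the shoulders and that the
    head/shoulder width ratio is physiologically plausible.
    (Image y grows downward, so 'above' means a smaller y value.)"""
    if shoulder_width <= 0:
        return False
    if head_y >= shoulder_y:          # head must be above the shoulders
        return False
    ratio = head_width / shoulder_width
    return HEAD_TO_SHOULDER_MIN <= ratio <= HEAD_TO_SHOULDER_MAX

print(passes_screening(head_width=18, shoulder_width=40,
                       head_y=60, shoulder_y=120))   # True: plausible person
print(passes_screening(head_width=18, shoulder_width=40,
                       head_y=150, shoulder_y=120))  # False: head below shoulders
```

Posture data failing either check would be discarded before the people count is taken.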
The human body posture recognition model is adopted to recognize key parts of the human body and obtain human body posture data; the data are screened multiple times, and the number of people in the physical examination cabin is then identified. Determining the number of examinees from human posture data allows the number of people in the cabin to be counted more accurately. Screening before identification filters out examinees with physical deformities or those outside the fixed area, which reduces the difficulty of algorithm development and ensures recognition accuracy.
When the human posture data indicate that only one person is in the physical examination cabin, the examinee's glasses wearing condition is detected. The method comprises the following steps:
acquiring the face image information of the physical examination person in the monitoring video information in the physical examination cabin;
and inputting the face image information into a classification model trained in advance, predicting whether the physical examination person wears glasses or not, and determining the glasses wearing condition of the physical examination person.
First, a glasses-wearing classification model is trained based on Google's Inception V3 classification model. Specifically, with the back plate of a real physical examination cabin as the background, positive and negative data sets are collected: ten thousand positive sample pictures (wearing glasses), ten thousand negative sample pictures (not wearing glasses), and two thousand verification images. The data are then preprocessed: the eye region is extracted as the input of the Inception V3 model, the feature values computed by Inception V3 are fed into a fully connected layer that classifies glasses versus no glasses, and when the classification precision reaches the preset requirement, the Inception V3 parameters are saved, yielding the glasses-wearing classification model.
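The eye-region extraction used as model input might look like the following sketch. The padding factors and the helper name `eye_region_box` are assumptions for illustration; the patent does not specify how the crop is derived.

```python
# Hypothetical preprocessing sketch: derive the eye-region crop box that is
# fed to the glasses-wearing classification model. Padding factors are
# assumptions, chosen so that glasses frames stay inside the crop.

def eye_region_box(left_eye, right_eye, pad_x=0.5, pad_y=0.8):
    """Return a bounding box (x0, y0, x1, y1) around both eye keypoints,
    padded horizontally and vertically relative to the inter-eye width."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    width = abs(rx - lx)
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0
    half_w = width * (0.5 + pad_x)
    half_h = width * pad_y / 2.0
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

print(eye_region_box((100, 80), (160, 80)))  # (70.0, 56.0, 190.0, 104.0)
```

The resulting crop would then be resized to the classifier's input resolution before inference.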
It should be noted that the recognition accuracy keeps improving as the model is used: every picture detected in the actual environment is added to the training library, and the glasses-wearing model is continuously retrained and optimized against the actual results, so the recognition accuracy increases steadily.
The face image information of the examinee is then extracted from the monitoring video information and input into the glasses-wearing classification model, so that whether the examinee wears glasses can be accurately judged. If the examinee's glasses wearing condition is consistent with the condition pre-selected by the examinee (when entering the cabin for vision examination, the examinee pre-selects on the machine whether he or she wears glasses), the process moves to the next detection link, namely judging the examinee's eye shielding condition.
Determining the eye shielding condition of the physical examination person comprises:
determining the position of the occluded eye;
inputting the face image information in the monitoring video information to a pre-trained masking plate position model based on Faster R-CNN, and predicting the position of the masking plate;
calculating a ratio between the position of the blocked eyes and the position of the masking plate;
if the ratio is larger than a preset threshold value, the eye shielding of the physical examination person is correct.
When the examinee's head posture is correct, the key parts of the face are calculated based on OpenPose, and the posture data of the key face parts are fitted by the least squares method to obtain a head centerline function; the blocked eye key part is then mapped according to the head midline function, taking the unblocked eye key part as the reference position, so as to determine the blocked eye position. The least squares method minimizes the residual sum of squares, calculated as:
Q = Σᵢ (yᵢ − β₀ − β₁xᵢ)²
The head midline function is determined through Q: setting the partial derivatives ∂Q/∂β₀ and ∂Q/∂β₁ to zero and solving yields
β₁ = Σᵢ (xᵢ − x̄)(yᵢ − ȳ) / Σᵢ (xᵢ − x̄)²,  β₀ = ȳ − β₁x̄
With the two parameters β₀ and β₁ calculated, the mathematical expression of the head centerline function y = β₀ + β₁x follows directly, which is not described in further detail here.
An eye-shielding-plate position model is trained based on Faster R-CNN to predict the position of the eye shielding plate. Specifically, because the appearance characteristics of the eye shielding plate are stable, an object detection model can be used. A certain number of pictures of examinees using the eye shielding plate are first collected, with the face as the background of the plate, so that the images approach the real detection scene and the false detection rate of the plate is reduced. The eye shielding plate is then boundary-marked on the photo set; since the main area of the plate is the shielding area, only the upper half of the plate is marked. The photo set is then fed into Faster R-CNN to train the eye-shielding-plate detection model.
After the eye-shielding-plate detection model is obtained, the face image information in the monitoring video information is input into the trained model to predict the position of the eye shielding plate. To improve detection accuracy, recognition is performed three times on the face image of each frame, outputting up to three predicted eye-shielding-plate positions.
Finally, a ratio is calculated between each of the (up to three) predicted eye-shielding-plate positions and the blocked-eye position. If the ratio for any one of the predicted plate positions relative to the blocked-eye position is greater than a preset threshold, preferably 0.85, the occlusion is considered correct.
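The decision step can be sketched as below. The patent states only that a ratio between the predicted plate position and the blocked-eye position is compared against a 0.85 threshold; interpreting that ratio as the fraction of the eye box covered by the plate box is an assumption made for this sketch.

```python
# Hedged sketch of the final occlusion decision: compare each of up to
# three predicted eye-shield boxes with the estimated blocked-eye box.

THRESHOLD = 0.85  # preferred threshold value stated in the description

def overlap_ratio(eye_box, plate_box):
    """Fraction of the eye box covered by the predicted plate box
    (an assumed interpretation of the patent's 'ratio')."""
    ex0, ey0, ex1, ey1 = eye_box
    px0, py0, px1, py1 = plate_box
    iw = max(0.0, min(ex1, px1) - max(ex0, px0))
    ih = max(0.0, min(ey1, py1) - max(ey0, py0))
    eye_area = (ex1 - ex0) * (ey1 - ey0)
    return (iw * ih) / eye_area if eye_area > 0 else 0.0

def occlusion_correct(eye_box, plate_boxes):
    """Occlusion passes if any predicted plate box covers enough of the eye."""
    return any(overlap_ratio(eye_box, p) >= THRESHOLD for p in plate_boxes)

eye = (100, 50, 140, 70)
plates = [(95, 45, 150, 80), (300, 300, 320, 320)]
print(occlusion_correct(eye, plates))  # True: first plate fully covers the eye
```

Running the check against all three predicted plate positions makes the decision robust to an occasional bad detection.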
It should be noted that, to ensure the eye shielding position is found accurately, before the shielded eye position is determined, the examinee's head posture data and body posture data are first comprehensively judged based on OpenPose to determine whether the head is tilted. If it is, the user is prompted that the head posture is incorrect, or is returned to the initial state of the self-service examination, and the processes of identifying the number of examinees, determining the glasses wearing condition, determining the eye occlusion condition, and so on are completed again.
Referring to fig. 2, which is a flowchart of a preferred implementation of the intelligent processing method for eyesight testing of the self-service physical examination machine provided in embodiment 2 of the present invention, the preferred embodiment comprises the steps of:
acquiring monitoring video information in a physical examination cabin;
according to a preset video frame rate, carrying out human body key part recognition on the video information based on a human body posture recognition model to obtain human body posture data; the human body posture data comprise face key part posture data and body key part posture data; the key part posture data comprises coordinate information and score information of the key part;
analyzing the human body posture data, judging the number of the physical examination people in the physical examination cabin, and finishing the identification of the number of the physical examination people;
calculating and analyzing key parts of the face of the physical examination person, and judging whether the physical examination person is a living person or not; wherein the key parts of the face comprise eyes, nose and ears of a physical examination person;
determining the glasses wearing condition of the physical examiner;
determining the eye shielding condition of the physical examination person;
comparing the faces of the physical examination people at intervals of a preset time period;
and carrying out vision detection on the physical examination person.
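The step sequence above can be sketched as a simple ordered pipeline. The `run_pipeline` helper and all step names below are illustrative, not from the patent; real implementations would wrap the pose-estimation, glasses-classification and occlusion models described in this document.

```python
def run_pipeline(steps):
    """Run the Fig. 2 steps in order; each step callable returns True to
    continue or False to stop (e.g. more than one person detected)."""
    for name, step in steps:
        if not step():
            return f"stopped at: {name}"
    return "vision test completed"


# Minimal usage with stubbed checks standing in for the real models:
steps = [
    ("acquire cabin video", lambda: True),
    ("count examinees == 1", lambda: True),
    ("liveness check (blink / head turn)", lambda: True),
    ("glasses-wearing classification", lambda: True),
    ("eye-occlusion check", lambda: True),
    ("periodic face comparison", lambda: True),
    ("vision test", lambda: True),
]
print(run_pipeline(steps))
```

Any failed step halts the flow, matching the document's behavior of returning to the initial state and repeating the identification steps when an abnormality is detected.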
To optimize the invention, on the basis of embodiment 1 it is further recognized whether the physical examination person in the cabin is a living body. After the human body posture data has been screened and analyzed twice and the number of people in the cabin has been determined to be 1, liveness verification is performed on the video frames of the continuous monitoring video, i.e., the recognized person is verified to be a real physical examination person. The person is prompted to move, turn the head, and blink; if the person shows no obvious displacement while performing the head turn and the blink, the person is regarded as a living body. The judgment process is as follows:
Referring to fig. 3, which shows the eye region of a physical examination person; the eye region generally has 6 key points, labeled P1, P2, P3, P4, P5 and P6 in fig. 3. 68 facial keypoints are computed based on OpenPose, 6 of them per eye, and the aspect ratio of the eye changes regularly when a blink occurs (see fig. 4); the larger the number of image frames, the smoother the change curve. At 25 frames per second, one blink produces the Dif trend shown, whose minimum point corresponds to the closed eye.
The aspect ratio Dif of the eye can be obtained by the following calculation:
Dif = (||p2 - p6|| + ||p3 - p5||) / (2 · ||p1 - p4||)
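The Dif formula above (the standard eye aspect ratio) can be computed directly from the six keypoints. The sketch below assumes the points are supplied in the P1..P6 order of fig. 3 as (x, y) coordinates.

```python
import math


def eye_aspect_ratio(pts):
    """Dif = (||p2 - p6|| + ||p3 - p5||) / (2 * ||p1 - p4||)

    pts: the six eye keypoints p1..p6 as (x, y) tuples, ordered as in
    fig. 3 (p1/p4 are the horizontal eye corners).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    p1, p2, p3, p4, p5, p6 = pts
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))
```

An open eye yields a roughly constant Dif; during a blink the vertical distances collapse, so Dif dips toward its minimum at the closed-eye frame, which is the dip visible in the fig. 4 trend.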
Referring to fig. 5, a simulation diagram of the human body posture data, from which it can be determined whether the physical examination person turns the head. In general, when a person holds the head upright, there is a certain angle between the ear and the shoulders. When the person turns the head to the right, the angle a formed by the left ear, right shoulder and left shoulder continuously increases; likewise, when the head turns to the left, the angle b formed by the right ear, left shoulder and right shoulder continuously increases. Whether the person turns the head is determined by detecting the change in angles a and b, which in turn supports the liveness judgment.
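The angle test above can be sketched from the 2-D keypoint coordinates. The patent only names the three points forming each angle, so placing the vertex at the shoulder on the same side as the ear (e.g. angle a = angle at the left shoulder between the left ear and the right shoulder) is an assumption, as is the 15-degree increase threshold.

```python
import math


def angle_at(vertex, a, b):
    """Angle in degrees at `vertex` between the rays vertex->a and vertex->b."""
    va = (a[0] - vertex[0], a[1] - vertex[1])
    vb = (b[0] - vertex[0], b[1] - vertex[1])
    cos = (va[0] * vb[0] + va[1] * vb[1]) / (math.hypot(*va) * math.hypot(*vb))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))


def head_turned(angle_series, min_increase=15.0):
    """Register a head turn once the monitored angle has grown by at least
    `min_increase` degrees over the frame series (threshold assumed)."""
    return max(angle_series) - angle_series[0] >= min_increase
```

Under this reading, angle a would be `angle_at(left_shoulder, left_ear, right_shoulder)` per frame, and `head_turned` applied to the per-frame series of angle a (or b) flags the right (or left) head turn.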
According to the self-service physical examination procedure, the examination is completed by the physical examination person inside the cabin. To prevent a mid-examination substitution or prompting by a bystander at the door, once the number of physical examination people is identified and determined to be one, the human posture is analyzed over the full camera view rather than a fixed region. If the cabin door is not closed and human posture data is detected at the door position, the examination is considered abnormal; the process returns to the initial state, and the identification of the number of people, determination of the glasses wearing condition, determination of the eye occlusion condition and so on are completed again. When the cabin door is closed and exactly one person is detected, the posture is compared once per second to ensure that the physical examination person remains the same throughout the examination.
The process of face comparison is as follows:
Firstly, the cropped face image of the physical examination person is fed into a pre-trained FaceNet model to extract a 128-dimensional face feature vector, which is stored and recorded. The FaceNet model is trained on 10,000 collected pictures.
Secondly, the face picture detected in real time is fed into the trained FaceNet model to obtain its 128-dimensional feature vector, which is likewise stored;
Finally, the Euclidean distance between the two vectors is calculated. The rule is as follows: for the same person the distance is smaller than about 1.05, and two identical face pictures give a distance of 0. The threshold 1.05 was obtained by combining repeated experimental verification with the Euclidean distance principle.
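The distance rule above reduces to a Euclidean distance over the two 128-dimensional embeddings, compared against the 1.05 threshold stated in the text. The sketch below assumes the embeddings are plain sequences of floats; the FaceNet model itself is not reproduced.

```python
import math


def euclidean_distance(vec_a, vec_b):
    """Euclidean distance between two 128-dimensional face embeddings."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(vec_a, vec_b)))


def same_person(vec_a, vec_b, threshold=1.05):
    # Identical images yield distance 0; the 1.05 threshold follows the text.
    return euclidean_distance(vec_a, vec_b) < threshold
```

Run once per second against the stored reference embedding, this check flags a substitution as soon as the live face's distance exceeds the threshold.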
In embodiment 2 of the invention, after the number of people is identified, the key facial parts of the physical examination person are calculated and analyzed to judge whether the person is a living person, and only real, credible human body posture data is retained, preventing a figure printed on clothing from affecting the subsequent people-count judgment.
Further, after the face recognition is finished, if the number of physical examination people in the cabin is identified as a single person, face comparison is performed on that person at every preset time period. Having confirmed the physical examination person as above, to prevent a mid-examination substitution or prompting by others at the door, a feature-vector comparison of the person's face is performed once per second, ensuring the consistency of the physical examination person throughout the examination and thereby the authenticity and accuracy of the test result.
The invention provides an intelligent processing device for vision detection of a self-service physical examination machine, which is characterized by comprising an acquisition module, an identification module, a glasses wearing determination module, an eye shielding determination module and a vision detection module;
the acquisition module is used for acquiring monitoring video information in the physical examination cabin;
the identification module is used for identifying the number of physical examination people in the physical examination cabin;
the glasses wearing determining module is used for determining the glasses wearing condition of the physical examination person;
the eye occlusion determining module is used for determining the eye occlusion condition of the physical examination person;
the vision detection module is used for carrying out vision detection on the physical examination person.
Further, the recognition module comprises a human body posture data calculation module and an analysis module; wherein,
the human body posture data calculation module is used for carrying out human body key part recognition on the video information based on a human body posture recognition model according to a preset video frame rate to obtain human body posture data;
the analysis module is used for analyzing the human body posture data, judging the number of the physical examination people in the physical examination cabin and finishing the identification of the number of the physical examination people.
Further, the glasses wearing determining module comprises a face image information obtaining module and a model calculating module; wherein,
the face image information acquisition module is used for acquiring face image information of a physical examination person in the monitoring video information in the physical examination cabin;
the model calculation module is used for inputting the face image information into a glasses wearing classification model trained in advance, predicting whether the physical examination person wears glasses or not, and determining the glasses wearing condition of the physical examination person.
Furthermore, the module for determining eye shielding comprises a shielding position determining module, a model predicting module, a ratio calculating module and a judging module; wherein,
the occlusion position determining module is used for determining the position of the occluded eye;
the model prediction module is used for inputting the face image information in the monitoring video information into a pre-trained masking plate position model and predicting the position of the eye masking plate;
the ratio calculation module is used for calculating the ratio between the position of the shielded eye and the position of the masking plate;
the judging module is used for judging that the eye shielding of the physical examination person is correct when the ratio is larger than a preset threshold value;
the shielding position determining module comprises a midline function calculating module and a mapping determining module;
the central line function calculation module is used for fitting the pose data of the key parts of the face by adopting a least square method to obtain a head central line function;
the mapping determination module is used for mapping the shielded eye key parts according to the head midline function by taking the screened eye key parts as reference positions so as to determine the shielded eye positions.
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to one skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.

Claims (10)

1. The intelligent processing method for the vision detection of the self-service physical examination machine is characterized by comprising the following steps:
acquiring monitoring video information in a physical examination cabin;
identifying the number of physical examination people in the physical examination cabin;
determining the glasses wearing condition of the physical examiner;
determining the eye shielding condition of the physical examination person;
and carrying out vision detection on the physical examination person.
2. The method of claim 1, wherein identifying the number of physical exams in the physical examination cabin comprises:
according to a preset video frame rate, carrying out human body key part recognition on the video information based on a human body posture recognition model to obtain human body posture data; the human body posture data comprise face key part posture data and body key part posture data; the key part posture data comprises coordinate information and score information of the key part;
analyzing the human body posture data, judging the number of the physical examination people in the physical examination cabin, and finishing the identification of the number of the physical examination people.
3. The method of claim 1, wherein determining the lens wear of the physical examiner comprises:
acquiring face image information of a physical examination person in the monitoring video information in the physical examination cabin;
and inputting the face image information into a glasses wearing classification model trained in advance, predicting whether the physical examination person wears glasses or not, and determining the glasses wearing condition of the physical examination person.
4. The method of claim 2, wherein if the number of physical examination people in the physical examination cabin is identified as a single person, determining the eye occlusion condition of the physical examination people comprises:
determining the position of the occluded eye;
inputting the face image information in the monitoring video information to a pre-trained masking plate position model, and predicting the position of the masking plate;
calculating a ratio between the position of the blocked eyes and the position of the masking plate;
if the ratio is larger than a preset threshold value, the eye shielding of the physical examination person is correct.
5. The method according to claim 3, wherein the eyewear classification model is trained using Faster R-CNN.
6. The method of claim 4, wherein determining the occluded eye location comprises:
fitting the posture data of the key parts of the face by adopting a least square method to obtain a head centerline function;
and mapping the shielded eye key part according to the head midline function by taking the screened eye key part as a reference position so as to determine the shielded eye position.
7. An intelligent processing device for vision detection of a self-service physical examination machine is characterized by comprising an acquisition module, an identification module, a module for determining glasses wearing, a module for determining eye shielding and a vision detection module;
the acquisition module is used for acquiring monitoring video information in the physical examination cabin;
the identification module is used for identifying the number of physical examination people in the physical examination cabin;
the glasses wearing determining module is used for determining the glasses wearing condition of the physical examination person;
the eye occlusion determining module is used for determining the eye occlusion condition of the physical examination person;
the vision detection module is used for carrying out vision detection on the physical examination person.
8. The apparatus of claim 7, wherein the recognition module comprises a human pose data calculation module and an analysis module; wherein,
the human body posture data calculation module is used for carrying out human body key part recognition on the video information based on a human body posture recognition model according to a preset video frame rate to obtain human body posture data;
the analysis module is used for analyzing the human body posture data, judging the number of the physical examination people in the physical examination cabin and finishing the identification of the number of the physical examination people.
9. The apparatus of claim 7, wherein the glasses-wearing determining module comprises a face image information obtaining module, a model calculating module; wherein,
the face image information acquisition module is used for acquiring face image information of a physical examination person in the monitoring video information in the physical examination cabin;
the model calculation module is used for inputting the face image information into a glasses wearing classification model trained in advance, predicting whether the physical examination person wears glasses or not, and determining the glasses wearing condition of the physical examination person.
10. The apparatus of claim 8, wherein the means for determining eye occlusion comprises means for determining occlusion location, means for model prediction, means for ratio calculation, and means for determining; wherein,
the occlusion position determining module is used for determining the position of the occluded eye;
the model prediction module is used for inputting the face image information in the monitoring video information into a pre-trained masking plate position model and predicting the position of the eye masking plate;
the ratio calculation module is used for calculating the ratio between the position of the shielded eye and the position of the masking plate;
the judging module is used for judging that the eye shielding of the physical examination person is correct when the ratio is larger than a preset threshold value;
the shielding position determining module comprises a midline function calculating module and a mapping determining module;
the midline function calculation module is used for fitting the pose data of the key parts of the face by adopting a least square method to obtain a head midline function;
the mapping determination module is used for mapping the shielded eye key parts according to the head midline function by taking the screened eye key parts as reference positions so as to determine the shielded eye positions.
CN201910442135.6A 2019-05-24 2019-05-24 A kind of self-service examination machine eyesight detection intelligent processing method Pending CN110222608A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910442135.6A CN110222608A (en) 2019-05-24 2019-05-24 A kind of self-service examination machine eyesight detection intelligent processing method

Publications (1)

Publication Number Publication Date
CN110222608A true CN110222608A (en) 2019-09-10

Family

ID=67818342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910442135.6A Pending CN110222608A (en) 2019-05-24 2019-05-24 A kind of self-service examination machine eyesight detection intelligent processing method

Country Status (1)

Country Link
CN (1) CN110222608A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111803022A * 2020-06-24 2020-10-23 深圳数联天下智能科技有限公司 Vision detection method, detection device, terminal equipment and readable storage medium
CN112957002A * 2021-02-01 2021-06-15 江苏盖睿健康科技有限公司 Self-help eyesight detection method and device and computer readable storage medium
CN116913007A * 2023-09-14 2023-10-20 贵州大学 Multi-terminal interaction method and device based on self-help physical examination machine
CN116913007B * 2023-09-14 2023-12-12 贵州大学 Multi-terminal interaction method and device based on self-help physical examination machine

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103871129A (en) * 2012-12-09 2014-06-18 山东中科安矿科技有限公司 Mine wellhead unattended security control system
CN204480251U (en) * 2014-08-19 2015-07-15 青岛通产软件科技有限公司 The self-service detection system of a kind of driver's physical qualification
CN106175658A (en) * 2016-07-05 2016-12-07 苏州宣嘉光电科技有限公司 A kind of vision dynamic and intelligent monitoring system
CN106407911A (en) * 2016-08-31 2017-02-15 乐视控股(北京)有限公司 Image-based eyeglass recognition method and device
CN107080524A (en) * 2017-05-22 2017-08-22 福州米鱼信息科技有限公司 Intelligent physical examination all-in-one
CN108076312A (en) * 2016-11-14 2018-05-25 北京航天长峰科技工业集团有限公司 Demographics monitoring identifying system based on depth camera
CN109165552A (en) * 2018-07-14 2019-01-08 深圳神目信息技术有限公司 A kind of gesture recognition method based on human body key point, system and memory
CN109410466A (en) * 2018-12-25 2019-03-01 云车行网络科技(北京)有限公司 Driver's self-service examination equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190910