CN110874585A - Peeping cheating behavior identification method based on attention area - Google Patents

Peeping cheating behavior identification method based on attention area

Info

Publication number
CN110874585A
CN110874585A (application CN201911189014.1A)
Authority
CN
China
Prior art keywords
angle
attention area
attention
cheating
nose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911189014.1A
Other languages
Chinese (zh)
Other versions
CN110874585B (en)
Inventor
伍懿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Yunhai Information Technology Co Ltd
Original Assignee
Xi'an Yunhai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Yunhai Information Technology Co Ltd filed Critical Xi'an Yunhai Information Technology Co Ltd
Priority to CN201911189014.1A priority Critical patent/CN110874585B/en
Publication of CN110874585A publication Critical patent/CN110874585A/en
Application granted granted Critical
Publication of CN110874585B publication Critical patent/CN110874585B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30232 - Surveillance
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a peeping cheating behavior identification method based on an attention area, comprising the following steps: (1) before the examination, measuring the standard angle interval of the examinee's head angle relative to the camera when the examinee watches the screen; (2) during the examination, acquiring a facial image of the examinee with the camera every 100 ms and judging whether the attention area is within the standard angle interval; (3) taking 1 s as a statistical period, the attention area is considered focused if at least 1 of the corresponding 10 judgments falls within the standard angle interval, otherwise attention is considered not focused; taking 3 min as a statistical period, if the inattentive time exceeds 60 s the examinee is considered suspected of cheating and a warning message is displayed on that examinee's screen; if no 1 s period within 10 s of the warning being displayed is attention-focused, the behavior is judged to be cheating. The invention identifies the positions of the facial organs in the video image and from them calculates the region where the examinee's attention is concentrated.

Description

Peeping cheating behavior identification method based on attention area
Technical Field
The invention relates to the field of identification of cheating behaviors in a paperless examination system, in particular to a peeping type cheating behavior identification method based on an attention area.
Background
At present, scholars at home and abroad have proposed various solutions for examination cheating. They can be broadly divided into two categories: traditional information verification and electronically assisted verification. Traditional information verification aims to improve existing methods of identifying examination cheating, for example by planning the invigilator's patrol route more effectively or by using information that is harder for a test taker to forge for identity verification. Electronically assisted verification uses electronic automation technology, in addition to traditional examination information verification, to monitor and identify examinees' cheating behavior; at present, electronically assisted verification based on biometric information is the mainstream. McGinity states that biometric-based authentication is superior to authentication based on traditional information such as identification numbers. Another study underscores the importance of the detection mechanism being effective throughout the course of the examination. Yang and Verbauwhede likewise consider that biometric systems provide better security than traditional password-based systems. Biometric authentication automatically identifies the authentication object with reference to the physiological characteristics of a living person, such as voice, hand geometry, fingerprints and facial images. In general, biometric identification compares pre-stored data with captured data to yield a similarity result. There are two main categories of biometric identification methods: identification based on keystroke dynamics characteristics and identification based on video image characteristics.
(I) Identification based on keystroke dynamics
Flior and Kowalski proposed a way to provide continuous biometric user authentication for online examinations through keystroke dynamics. The characteristics of the words a user types and the keystroke rhythm are recorded and compared; after an examinee cheats, the questions become easier to answer, and the cheating behavior is identified from the obvious change in the relevant keystroke characteristics during answering. However, the keystroke dynamics approach has the disadvantage that the recognition result is influenced by changes in keystroke characteristics caused by typing fatigue as the examination progresses, and by the different thinking times required for difficult and simple questions.
(II) Identification based on video analysis
Identification based on video analysis can be classified differently according to the object under video surveillance. Li proposed a cheating behavior identification approach based on video images of the examination room, in which the object of surveillance is the whole examination room. Because the monitored object is too complex, its shortcomings are obvious: it can only identify extreme cheating behaviors involving large body movements, such as leaving the seat and walking around the examination room, and its identification of ordinary cheating behaviors with small movements is not ideal. Compared with methods that recognize cheating at the level of the whole examination room, more video-based cheating recognition is carried out per examinee: one examinee corresponds to one camera, and cheating is recognized by analyzing video of the examinee's upper body or face. This kind of method works well for identity cheating, that is, recognizing an impostor taking the test, and also has a good recognition rate for other cheating behaviors with small movements. However, it is still insufficient for cheating actions with very small body movements, in which the face stays within the camera's shooting area: when an examinee peeps at carried materials or at an adjacent examinee's screen, the head only changes its angle somewhat compared with normal answering.
Disclosure of Invention
Aiming at the problems of the identification methods based on video analysis, the invention provides an attention area calculation method based on face alignment technology.
The technical scheme of the invention is as follows:
a peeping cheating behavior identification method based on attention areas comprises the following steps:
(1) before the examination, measuring a standard angle interval of the head angle of the examinee relative to the camera when the examinee watches the screen;
(2) during the examination, a facial image of the examinee is acquired by the camera every 100 ms, an attention area is formed from the head rotation angle α, horizontal tilt angle β and vertical tilt angle γ of the facial image relative to the camera, and whether the attention area is within the standard angle interval is judged;
(3) taking 1 s as a statistical period, the attention area is considered focused if at least 1 of the corresponding 10 judgments falls within the standard angle interval, otherwise attention is considered not focused; taking 3 min as a statistical period, if the inattentive time exceeds 60 s the examinee is considered suspected of cheating and a warning message is displayed on that examinee's screen; if no 1 s period within 10 s of the warning being displayed is attention-focused, the behavior is judged to be cheating (see the illustrative sketch below);
wherein the standard angle interval is: horizontal angle interval [-30°, 30°], vertical angle interval [-40°, 20°], and rotation angle interval [-60°, 60°].
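The following is a minimal, non-limiting sketch of the statistical decision logic described in steps (2)-(3) above. The patent does not provide source code; all function and constant names are illustrative assumptions, and the per-sample angle estimates are assumed to be supplied by the attention-area computation described later.

```python
# Illustrative sketch only; names and structure are assumptions, not the patent's implementation.

# Standard angle intervals measured before the examination (step (1))
ROT_RANGE = (-60.0, 60.0)   # rotation angle alpha
HOR_RANGE = (-30.0, 30.0)   # horizontal tilt beta
VER_RANGE = (-40.0, 20.0)   # vertical tilt gamma

INATTENTION_LIMIT_S = 60    # more than 60 s of inattention in 3 min -> suspicion
WARNING_WINDOW_S = 10       # 10 s observation window after the warning is shown

def in_standard_interval(alpha, beta, gamma):
    """One 100 ms judgment: is the attention area inside the standard intervals?"""
    return (ROT_RANGE[0] <= alpha <= ROT_RANGE[1]
            and HOR_RANGE[0] <= beta <= HOR_RANGE[1]
            and VER_RANGE[0] <= gamma <= VER_RANGE[1])

def second_is_focused(judgments):
    """judgments: the 10 boolean results of one 1 s statistical period.
    The second counts as attention-focused if at least one judgment is True."""
    return any(judgments)

def period_raises_warning(focused_seconds):
    """focused_seconds: 180 booleans, one per second of a 3 min statistical period.
    A warning is raised if the inattentive time exceeds 60 s."""
    inattentive = sum(1 for focused in focused_seconds if not focused)
    return inattentive > INATTENTION_LIMIT_S

def confirm_cheating(focused_seconds_after_warning):
    """If no second in the 10 s after the warning is attention-focused,
    the behavior is judged to be cheating."""
    return not any(focused_seconds_after_warning[:WARNING_WINDOW_S])
```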
The attention area is acquired as follows (an illustrative sketch is given after these steps):
(1) normalizing and graying the face image acquired by the camera and recording as grey;
(2) carrying out face region identification on the normalized and grayed face image to obtain a maximum face region which is marked as biggestFace;
(3) identifying the facial organ areas of eyes, nose and mouth of the maximum facial area, and respectively recording as leftEye, rightEye, nose and mouth;
(4) the head rotation angle α, the horizontal tilt angle β, and the vertical tilt angle γ are calculated, respectively.
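As one possible realization of these acquisition steps, the sketch below uses OpenCV Haar cascades. The patent does not name a specific detector; the cascade choice, the nose and mouth cascade file paths (which are not shipped with core OpenCV) and all variable names are assumptions.

```python
# Sketch of acquisition steps (1)-(4) with OpenCV; detector choice and file paths are assumptions.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")
# Nose and mouth cascades are not bundled with core OpenCV; placeholder paths.
nose_cascade = cv2.CascadeClassifier("haarcascade_mcs_nose.xml")
mouth_cascade = cv2.CascadeClassifier("haarcascade_mcs_mouth.xml")

def acquire_regions(frame, size=(640, 480)):
    # (1) normalize the frame size and convert to grayscale -> grey
    grey = cv2.cvtColor(cv2.resize(frame, size), cv2.COLOR_BGR2GRAY)

    # (2) detect faces and keep the largest region -> biggestFace
    faces = face_cascade.detectMultiScale(grey, 1.1, 5)
    if len(faces) == 0:
        return None
    fx, fy, fw, fh = max(faces, key=lambda r: r[2] * r[3])
    biggest_face = grey[fy:fy + fh, fx:fx + fw]

    # (3) detect the eye, nose and mouth regions inside the largest face
    eyes = eye_cascade.detectMultiScale(biggest_face, 1.1, 5)
    noses = nose_cascade.detectMultiScale(biggest_face, 1.1, 5)
    mouths = mouth_cascade.detectMultiScale(biggest_face, 1.1, 5)
    if len(eyes) < 2 or len(noses) == 0 or len(mouths) == 0:
        return None
    eyes = sorted(eyes, key=lambda r: r[0])  # leftmost eye region first

    # (4) the rotation, horizontal tilt and vertical tilt angles are then
    # computed from these regions as described in the following subsections.
    return {"leftEye": tuple(eyes[0]), "rightEye": tuple(eyes[-1]),
            "nose": tuple(noses[0]), "mouth": tuple(mouths[0])}
```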
Preferably, the head rotation angle α is calculated as follows:
α is defined to be 0° when the head is upright relative to the horizontal axis of the camera; α increases when the head rotates to the right and decreases when it rotates to the left. Because the line connecting the left eye and the right eye always follows the horizontal direction of the head, the angle θ between this line and the horizontal direction is used to mark the head rotation angle α;
(1) the head rotation angle α1 is marked by the angle θ between the line connecting the left and right eyes and the horizontal direction:
left eye region start coordinate (x1, y1), length len1, width hei1; left eye coordinate (xLeftEye, yLeftEye) = (x1 + len1/2, y1 + hei1/2);
right eye region start coordinate (x2, y2), length len2, width hei2; right eye coordinate (xRightEye, yRightEye) = (x2 + len2/2, y2 + hei2/2);
α1 = θ = arctan((yRightEye - yLeftEye)/(xRightEye - xLeftEye));
(2) the head rotation angle α2 is marked by the angle θ' between the line connecting the mouth and the nose and the vertical direction:
nose region start coordinate (x3, y3), length len3, width hei3; nose coordinate (xNose, yNose) = (x3 + len3/2, y3 + hei3/2);
mouth region start coordinate (x4, y4), length len4, width hei4; mouth coordinate (xMouth, yMouth) = (x4 + len4/2, y4 + hei4/2);
α2 = θ' = arctan((xNose - xMouth)/(yMouth - yNose));
(3) α = (α1 + α2)/2;
if α falls within the rotation angle interval [-60°, 60°], then sα = 1, otherwise sα = 0 (see the illustrative sketch below).
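A minimal sketch of the rotation-angle computation just described, assuming each organ region is given as an (x, y, len, hei) bounding box; the sign convention for α2 and the use of the mean to combine α1 and α2 are assumptions.

```python
import math

def region_center(region):
    """Center coordinate of an organ region given as (x, y, len, hei)."""
    x, y, length, hei = region
    return (x + length / 2.0, y + hei / 2.0)

def rotation_angle(left_eye, right_eye, nose, mouth):
    lx, ly = region_center(left_eye)
    rx, ry = region_center(right_eye)
    nx, ny = region_center(nose)
    mx, my = region_center(mouth)

    # alpha1: angle between the eye line and the horizontal direction
    alpha1 = math.degrees(math.atan2(ry - ly, rx - lx))
    # alpha2: angle between the nose-mouth line and the vertical direction
    # (sign chosen so that both estimates agree for a tilted head; an assumption)
    alpha2 = math.degrees(math.atan2(nx - mx, my - ny))
    # combined estimate; taking the mean of the two estimates is an assumption
    alpha = (alpha1 + alpha2) / 2.0

    s_alpha = 1 if -60.0 <= alpha <= 60.0 else 0
    return alpha, s_alpha
```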
Alternatively, the horizontal tilt angle β is preferably calculated as follows:
(1) using eyes and nose as face positioning markers;
when the head tilts horizontally, the larger the tilt angle, the closer the nose region moves in the horizontal direction toward the eye region on the corresponding side;
the horizontal tilt angle β is defined to be 0° when both eyes look straight at the camera; β increases when the head turns to the right and decreases when the head turns to the left;
define l2 as the distance between the left and right eyes, p1 as the intersection point of the perpendicular dropped from the nose onto the line connecting the left and right eyes, and l1 as the distance from p1 to the left eye; then calculate
R1 = l1/l2;
The mapping relationship between the horizontal tilt angle and R1 is determined by statistical fitting. First, clear sample pictures are selected and their horizontal tilt angles are manually marked, with the selected angles covering roughly [-30°, 30°]; the R1 value of each sample picture is calculated and the corresponding pairs of horizontal tilt angle and R1 are recorded. As shown in Fig. 3, the x axis is the manually marked horizontal tilt angle and the y axis is the R1 ratio value; the points are the coordinate pairs of the corresponding sample pictures and the straight line is the fitted linear mapping function.
Fitting a linear function yields the mapping function between the horizontal tilt angle and R1:
β1 = 72.1R1 - 35.4;
(2) using eyes and mouth as face positioning markers;
define p2 as the intersection point of the perpendicular dropped from the mouth onto the line between the left and right eyes, and l3 as the distance from p2 to the left eye;
R2 = l3/l2;
β2 = 72.1R2 - 35.4;
The coordinates of the intersection point p1 are solved as follows:
the straight line through the two eye coordinate points is written as a1·y + b1·x = c1, and extracting the corresponding coefficients gives:
a1 = 1;
b1 = -(yRightEye - yLeftEye)/(xRightEye - xLeftEye);
c1 = yLeftEye + b1·xLeftEye;
the distance l1 from the left-eye coordinate point to p1 is obtained as
l1 = √((xp1 - xLeftEye)² + (yp1 - yLeftEye)²),
and the distance l2 between the two eyes as
l2 = √((xRightEye - xLeftEye)² + (yRightEye - yLeftEye)²);
the straight line through the nose coordinate point perpendicular to the eye line is written as a2·y + b2·x = c2, and the corresponding coefficients can likewise be extracted:
a2 = 1;
b2 = (xRightEye - xLeftEye)/(yRightEye - yLeftEye);
c2 = yNose + b2·xNose;
the coordinates of p1 (and, with the mouth coordinate point in place of the nose, of p2) can then be obtained using determinants from linear algebra:
xp = (c2 - c1)/(b2 - b1);
yp = (c1·b2 - c2·b1)/(b2 - b1);
the distance l3 from the intersection point p2 to the left eye is obtained as
l3 = √((xp2 - xLeftEye)² + (yp2 - yLeftEye)²).
(3) selecting the ratio between the area sizes of the left eye and the right eye as a correlation factor;
through observation it can be found that, because of perspective transformation, when the horizontal tilt angle is 0° the left-eye and right-eye regions are the same size; when the angle is positive the left-eye region is larger and the right-eye region smaller, and when the angle is negative the right-eye region is larger and the left-eye region smaller;
the areas are SleftEye = len1·hei1 and SrightEye = len2·hei2, and the correlation factor is
R3 = SleftEye/SrightEye;
The relationship between the horizontal tilt angle and R3 is shown in Fig. 4, with the horizontal tilt angle β on the Y axis and R3 on the X axis; each point represents a sample and the straight line represents the fitting function. Fitting yields:
β3 = 49.1R3 - 48.9;
β = (β1 + β2 + β3)/3;
if β falls within the horizontal angle interval [-30°, 30°], then sβ = 1, otherwise sβ = 0 (see the illustrative sketch below).
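A sketch of the horizontal-tilt estimate built from the quantities above. The feet of the perpendiculars p1 and p2 are computed by vector projection, which is equivalent to the line-coefficient/determinant derivation given earlier; defining R3 as the left/right eye-area ratio and averaging β1, β2, β3 are assumptions consistent with the text.

```python
import math

def foot_of_perpendicular(p, a, b):
    """Foot of the perpendicular dropped from point p onto the line through a and b."""
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    t = ((p[0] - ax) * dx + (p[1] - ay) * dy) / (dx * dx + dy * dy)
    return (ax + t * dx, ay + t * dy)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def horizontal_tilt(left_eye, right_eye, nose, mouth):
    """Organ regions are (x, y, len, hei) boxes; centers as in the rotation sketch."""
    def center(r):
        return (r[0] + r[2] / 2.0, r[1] + r[3] / 2.0)

    le, re, no, mo = center(left_eye), center(right_eye), center(nose), center(mouth)
    p1 = foot_of_perpendicular(no, le, re)   # nose projected onto the eye line
    p2 = foot_of_perpendicular(mo, le, re)   # mouth projected onto the eye line
    l1, l2, l3 = dist(p1, le), dist(re, le), dist(p2, le)

    beta1 = 72.1 * (l1 / l2) - 35.4                      # eyes + nose marker
    beta2 = 72.1 * (l3 / l2) - 35.4                      # eyes + mouth marker
    r3 = (left_eye[2] * left_eye[3]) / float(right_eye[2] * right_eye[3])
    beta3 = 49.1 * r3 - 48.9                             # eye-area ratio marker
    beta = (beta1 + beta2 + beta3) / 3.0                 # assumed combination

    s_beta = 1 if -30.0 <= beta <= 30.0 else 0
    return beta, s_beta
```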
Or, preferably, the vertical tilt angle γ is calculated as follows:
(1) the length of the perpendicular dropped from the nose onto the line between the left and right eyes is defined as l4; the vertical tilt angle of the face is 0° when the head looks straight at the camera, positive when the head leans upward and negative when it leans downward;
[The formula defining R4 from l4 is given as an image in the original and is not reproduced here.]
The relationship between R4 and the vertical tilt angle γ1 is shown in Fig. 5, where the X axis and Y axis represent R4 and γ1 respectively; points represent samples and the straight line represents the fitting function. The corresponding mapping function is:
γ1 = 800R4² + 395R4 - 80;
(2) the length of the perpendicular dropped from the mouth onto the line between the left and right eyes is defined as l5;
[The formula defining R5 from l5 is given as an image in the original and is not reproduced here.]
The relationship between R5 and the vertical tilt angle γ2 is shown in Fig. 6, where the X axis represents R5 and the Y axis represents γ2; points represent samples and the straight line represents the fitted curve;
γ2 = 800R5² - 141R5 - 122.5;
γ = (γ1 + γ2)/2;
if γ falls within the vertical angle interval [-40°, 20°], then sγ = 1, otherwise sγ = 0 (see the illustrative sketch below).
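A sketch of the vertical-tilt estimate. l4 and l5 are the perpendicular distances from the nose and the mouth to the eye line; the normalization that turns them into R4 and R5 is only given as formula images in the original, so it is passed in as a caller-supplied function here, and averaging γ1 and γ2 is likewise an assumption.

```python
import math

def vertical_tilt(left_eye_c, right_eye_c, nose_c, mouth_c, normalize):
    """Arguments are organ center coordinates (x, y); normalize(length) -> R value
    (its exact definition is not reproduced in this text, see the note above)."""
    dx = right_eye_c[0] - left_eye_c[0]
    dy = right_eye_c[1] - left_eye_c[1]
    eye_len = math.hypot(dx, dy)

    def perp_dist(p):
        # distance from point p to the line through the two eye centers
        return abs(dy * (p[0] - left_eye_c[0]) - dx * (p[1] - left_eye_c[1])) / eye_len

    l4, l5 = perp_dist(nose_c), perp_dist(mouth_c)
    r4, r5 = normalize(l4), normalize(l5)

    gamma1 = 800 * r4 ** 2 + 395 * r4 - 80          # fitted mapping for the nose
    gamma2 = 800 * r5 ** 2 - 141 * r5 - 122.5       # fitted mapping for the mouth
    gamma = (gamma1 + gamma2) / 2.0                 # assumed combination

    s_gamma = 1 if -40.0 <= gamma <= 20.0 else 0
    return gamma, s_gamma
```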
The invention has the technical effects that:
the method provided by the invention mainly identifies the position of the facial organ in the video image, and further calculates the position of the region where the attention of the examinee is concentrated. Furthermore, the identification of peeping cheating behaviors which are difficult to identify such as checking adjacent examinees PC or checking illegal carried data and the like, which are not available in the similar cheating behavior identification system, is realized.
Drawings
Fig. 1 is a schematic diagram of the head inclination angle θ based on the left and right eye coordinates.
Fig. 2 is a schematic diagram of the head inclination angle θ' based on the nose-mouth coordinates.
Fig. 3 is a graph of the relationship between the R1 and R2 ratio values of the sample pictures and the manually identified horizontal tilt angles.
Fig. 4 is a graph of the relationship between the R3 ratio values of the sample pictures and the manually identified horizontal tilt angles.
Fig. 5 is a graph of the relationship between the R4 ratio values of the sample pictures and the manually identified vertical tilt angles.
Fig. 6 is a graph of the relationship between the R5 ratio values of the sample pictures and the manually identified vertical tilt angles.
Fig. 7 is a schematic diagram of a peeping cheating behavior identification method based on attention area.
FIG. 8 is a bar graph of the head rotation angle difference in the exemplary embodiment.
FIG. 9 is a bar graph of the difference in horizontal tilt angle of the head in the example embodiment.
FIG. 10 is a bar graph of the head vertical tilt angle difference in the example embodiment.
Detailed Description
A peeping cheating behavior identification method based on attention areas comprises the following steps:
(1) before the examination, measuring a standard angle interval of the head angle of the examinee relative to the camera when the examinee watches the screen;
(2) during the examination, a facial image of the examinee is acquired by the camera every 100 ms, an attention area is formed from the head rotation angle α, horizontal tilt angle β and vertical tilt angle γ of the facial image relative to the camera, and whether the attention area is within the standard angle interval is judged;
(3) taking 1 s as a statistical period, the attention area is considered focused if at least 1 of the corresponding 10 judgments falls within the standard angle interval, otherwise attention is considered not focused; taking 3 min as a statistical period, if the inattentive time exceeds 60 s the examinee is considered suspected of cheating and a warning message is displayed on that examinee's screen; if no 1 s period within 10 s of the warning being displayed is attention-focused, the behavior is judged to be cheating;
wherein the standard angle interval is: horizontal angle interval [-30°, 30°], vertical angle interval [-40°, 20°], and rotation angle interval [-60°, 60°].
The method for acquiring the attention area comprises the following steps:
(1) normalizing and graying the face image acquired by the camera and recording as grey;
(2) carrying out face region identification on the normalized and grayed face image to obtain a maximum face region which is marked as biggestFace;
(3) identifying the facial organ areas of eyes, nose and mouth of the maximum facial area, and respectively recording as leftEye, rightEye, nose and mouth;
(4) the head rotation angle α, the horizontal tilt angle β, and the vertical tilt angle γ are calculated, respectively.
Firstly, the calculation process of the head rotation angle α is as follows:
α is defined to be 0° when the head is upright relative to the horizontal axis of the camera; α increases when the head rotates to the right and decreases when it rotates to the left. Because the line connecting the left eye and the right eye always follows the horizontal direction of the head, the angle θ between this line and the horizontal direction is used to mark the head rotation angle α;
(1) the head rotation angle α1 is marked by the angle θ between the line connecting the left and right eyes and the horizontal direction:
left eye region start coordinate (x1, y1), length len1, width hei1; left eye coordinate (xLeftEye, yLeftEye) = (x1 + len1/2, y1 + hei1/2);
right eye region start coordinate (x2, y2), length len2, width hei2; right eye coordinate (xRightEye, yRightEye) = (x2 + len2/2, y2 + hei2/2);
α1 = θ = arctan((yRightEye - yLeftEye)/(xRightEye - xLeftEye));
(2) the head rotation angle α2 is marked by the angle θ' between the line connecting the mouth and the nose and the vertical direction:
nose region start coordinate (x3, y3), length len3, width hei3; nose coordinate (xNose, yNose) = (x3 + len3/2, y3 + hei3/2);
mouth region start coordinate (x4, y4), length len4, width hei4; mouth coordinate (xMouth, yMouth) = (x4 + len4/2, y4 + hei4/2);
α2 = θ' = arctan((xNose - xMouth)/(yMouth - yNose));
(3) α = (α1 + α2)/2;
if α falls within the rotation angle interval [-60°, 60°], then sα = 1, otherwise sα = 0.
Secondly, the calculation process of the horizontal inclination angle β is as follows:
(1) using eyes and nose as face positioning markers;
when the head tilts horizontally, the larger the tilt angle, the closer the nose region moves in the horizontal direction toward the eye region on the corresponding side;
the horizontal tilt angle β is defined to be 0° when both eyes look straight at the camera; β increases when the head turns to the right and decreases when the head turns to the left;
define l2 as the distance between the left and right eyes, p1 as the intersection point of the perpendicular dropped from the nose onto the line connecting the left and right eyes, and l1 as the distance from p1 to the left eye; then calculate
R1 = l1/l2;
The mapping relationship between the horizontal tilt angle and R1 is determined by statistical fitting. First, clear sample pictures are selected and their horizontal tilt angles are manually marked, with the selected angles covering roughly [-30°, 30°]; the R1 value of each sample picture is calculated and the corresponding pairs of horizontal tilt angle and R1 are recorded. As shown in Fig. 3, the x axis is the manually marked horizontal tilt angle and the y axis is the R1 ratio value; the points are the coordinate pairs of the corresponding sample pictures and the straight line is the fitted linear mapping function.
Fitting a linear function yields the mapping function between the horizontal tilt angle and R1:
β1 = 72.1R1 - 35.4;
(2) using eyes and mouth as face positioning markers;
define p2 as the intersection point of the perpendicular dropped from the mouth onto the line between the left and right eyes, and l3 as the distance from p2 to the left eye;
R2 = l3/l2;
β2 = 72.1R2 - 35.4;
The coordinates of the intersection point p1 are solved as follows:
the straight line through the two eye coordinate points is written as a1·y + b1·x = c1, and extracting the corresponding coefficients gives:
a1 = 1;
b1 = -(yRightEye - yLeftEye)/(xRightEye - xLeftEye);
c1 = yLeftEye + b1·xLeftEye;
the distance l1 from the left-eye coordinate point to p1 is obtained as
l1 = √((xp1 - xLeftEye)² + (yp1 - yLeftEye)²),
and the distance l2 between the two eyes as
l2 = √((xRightEye - xLeftEye)² + (yRightEye - yLeftEye)²);
the straight line through the nose coordinate point perpendicular to the eye line is written as a2·y + b2·x = c2, and the corresponding coefficients can likewise be extracted:
a2 = 1;
b2 = (xRightEye - xLeftEye)/(yRightEye - yLeftEye);
c2 = yNose + b2·xNose;
the coordinates of p1 (and, with the mouth coordinate point in place of the nose, of p2) can then be obtained using determinants from linear algebra:
xp = (c2 - c1)/(b2 - b1);
yp = (c1·b2 - c2·b1)/(b2 - b1);
the distance l3 from the intersection point p2 to the left eye is obtained as
l3 = √((xp2 - xLeftEye)² + (yp2 - yLeftEye)²).
(3) selecting the ratio between the area sizes of the left eye and the right eye as a correlation factor;
through observation it can be found that, because of perspective transformation, when the horizontal tilt angle is 0° the left-eye and right-eye regions are the same size; when the angle is positive the left-eye region is larger and the right-eye region smaller, and when the angle is negative the right-eye region is larger and the left-eye region smaller;
the areas are SleftEye = len1·hei1 and SrightEye = len2·hei2, and the correlation factor is
R3 = SleftEye/SrightEye;
The relationship between the horizontal tilt angle and R3 is shown in Fig. 4, with the horizontal tilt angle β on the Y axis and R3 on the X axis; each point represents a sample and the straight line represents the fitting function. Fitting yields:
β3 = 49.1R3 - 48.9;
β = (β1 + β2 + β3)/3;
if β falls within the horizontal angle interval [-30°, 30°], then sβ = 1, otherwise sβ = 0.
Thirdly, the vertical tilt angle γ is calculated as follows:
(1) the length of the perpendicular dropped from the nose onto the line between the left and right eyes is defined as l4; the vertical tilt angle of the face is approximately 0° when the head looks straight at the camera, positive when the head leans upward and negative when it leans downward;
[The formula defining R4 from l4 is given as an image in the original and is not reproduced here.]
The relationship between R4 and the vertical tilt angle γ1 is shown in Fig. 5, where the X axis and Y axis represent R4 and γ1 respectively; points represent samples and the straight line represents the fitting function. The corresponding mapping function is:
γ1 = 800R4² + 395R4 - 80;
(2) the length of the perpendicular dropped from the mouth onto the line between the left and right eyes is defined as l5;
[The formula defining R5 from l5 is given as an image in the original and is not reproduced here.]
The relationship between R5 and the vertical tilt angle γ2 is shown in Fig. 6, where the X axis represents R5 and the Y axis represents γ2; points represent samples and the straight line represents the fitted curve;
γ2 = 800R5² - 141R5 - 122.5;
γ = (γ1 + γ2)/2;
if γ falls within the vertical angle interval [-40°, 20°], then sγ = 1, otherwise sγ = 0.
Specific experimental example
Left eye region start coordinates (248, 213), length 38, width 12; left eye coordinates (267, 219);
right eye region start coordinates (364, 234), length 48, width 16; right eye coordinate (388, 242)
Nose region start coordinate (272,224), length 52, width 88; nose coordinates (298, 268);
mouth start coordinate (248,340), length 88, width 42; mouth coordinates (292,361).
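As a quick check, each organ center listed above is the region's start coordinate plus half its length and width; the short sketch below reproduces the listed center coordinates.

```python
# Organ regions from the example above, as (x, y, len, hei) bounding boxes.
regions = {
    "leftEye": (248, 213, 38, 12),
    "rightEye": (364, 234, 48, 16),
    "nose": (272, 224, 52, 88),
    "mouth": (248, 340, 88, 42),
}

for name, (x, y, length, hei) in regions.items():
    print(name, (x + length // 2, y + hei // 2))
# leftEye (267, 219), rightEye (388, 242), nose (298, 268), mouth (292, 361)
```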
The head rotation angle α is calculated as follows:
[The numeric computations of α1, α2 and the combined α for this example are shown as formula images in the original.]
the calculation process of the horizontal inclination angle β is as follows:
P1(248,213);P2(312,227);
l1=39.62,l2=123.17,l3=45.71,l4=42.76,l5=135.48;
obtaining:
R1 = l1/l2 = 39.62/123.17 ≈ 0.32;
β1 = 72.1R1 - 35.4 = -12.21°;
R2 = l3/l2 = 45.71/123.17 ≈ 0.37;
β2 = 72.1R2 - 35.4 = -8.64°;
[The numeric computations of R3, β3 and the combined β and sβ for this example are shown as formula images in the original.]
(III) The calculation process of the vertical tilt angle γ is as follows:
γ1 = 800R4² + 395R4 - 80 = 0.198°;
γ2 = 800R5² - 141R5 - 122.5 = 0.152°;
[The numeric values of R4, R5 and the combined γ and sγ for this example are shown as formula images in the original.]
50 clear facial images were selected, and the corresponding head rotation angle, horizontal tilt angle and vertical tilt angle were obtained with the method provided in this patent. The obtained angles were compared with the manually identified angles, the difference of each corresponding angle was recorded for every sample, and the mean of the differences and the variance of their distribution were calculated.
The mean of the head rotation angle differences was found to be 3.14° and the variance 2.9204, as shown in Fig. 8.
The mean of the head horizontal tilt angle differences was 4.76° and the variance 3.0824, as shown in Fig. 9.
The mean of the head vertical tilt angle differences was 5.34° and the variance 12.2644, as shown in Fig. 10.
From the results, the average error of the head rotation angle is the smallest, only 3.14°, and its error distribution is also the most concentrated; the average error of the vertical tilt angle is the largest, reaching 5.34°, and its error distribution is also the widest.
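For completeness, the sketch below shows how the evaluation statistics above (mean and variance of the angle differences, Figs. 8-10) and fitted mapping functions such as β1 = 72.1R1 - 35.4 (Figs. 3-6) could be reproduced with least-squares fitting; the sample arrays are hypothetical placeholders, not the patent's data.

```python
import numpy as np

# Error statistics of the estimated angles against manual annotation (Figs. 8-10)
estimated = np.array([10.2, -4.5, 3.3, 7.8])    # placeholder estimates (degrees)
manual = np.array([8.0, -1.0, 6.0, 5.5])        # placeholder manual annotations
diff = np.abs(estimated - manual)
print("mean difference:", diff.mean(), "variance:", diff.var())

# Least-squares fit of a linear mapping between R1 and the horizontal tilt angle
r1_samples = np.array([0.10, 0.25, 0.32, 0.45, 0.60, 0.80])    # placeholder R1 values
angles_deg = np.array([-28.0, -17.0, -12.0, -3.0, 8.0, 22.0])  # placeholder marked angles
a, b = np.polyfit(r1_samples, angles_deg, 1)
print(f"beta1 = {a:.1f} * R1 + {b:+.1f}")   # the patent reports 72.1 and -35.4
```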

Claims (7)

1. A peeping cheating behavior identification method based on attention areas is characterized in that: the method comprises the following steps:
(1) before the examination, measuring a standard angle interval of the head angle of the examinee relative to the camera when the examinee watches the screen;
(2) during the examination, a facial image of the examinee is acquired by the camera every 100 ms, an attention area is formed from the head rotation angle α, horizontal tilt angle β and vertical tilt angle γ of the facial image relative to the camera, and whether the attention area is within the standard angle interval is judged;
(3) taking 1 s as a statistical period, the attention area is considered focused if at least 1 of the corresponding 10 judgments falls within the standard angle interval, otherwise attention is considered not focused; taking 3 min as a statistical period, if the inattentive time exceeds 60 s the examinee is considered suspected of cheating and a warning message is displayed on that examinee's screen; if no 1 s period within 10 s of the warning being displayed is attention-focused, the behavior is judged to be cheating;
wherein the standard angle interval is: horizontal angle interval [-30°, 30°], vertical angle interval [-40°, 20°], and rotation angle interval [-60°, 60°].
2. The peeping cheating behavior identification method based on attention areas of claim 1, characterized in that the attention area is acquired as follows:
(1) normalizing and graying the face image acquired by the camera and recording as grey;
(2) carrying out face region identification on the normalized and grayed face image to obtain a maximum face region which is marked as biggestFace;
(3) identifying the facial organ areas of eyes, nose and mouth of the maximum facial area, and respectively recording as leftEye, rightEye, nose and mouth;
(4) the head rotation angle α, the horizontal tilt angle β, and the vertical tilt angle γ are calculated, respectively.
3. The method for recognizing peeping-type cheating behavior based on attention area as claimed in claim 2, wherein the head rotation angle α is calculated as follows:
(1) the head rotation angle α1 is marked by the angle θ between the line connecting the left and right eyes and the horizontal direction:
left eye region start coordinate (x1, y1), length len1, width hei1; left eye coordinate (xLeftEye, yLeftEye) = (x1 + len1/2, y1 + hei1/2);
right eye region start coordinate (x2, y2), length len2, width hei2; right eye coordinate (xRightEye, yRightEye) = (x2 + len2/2, y2 + hei2/2);
α1 = θ = arctan((yRightEye - yLeftEye)/(xRightEye - xLeftEye));
(2) the head rotation angle α2 is marked by the angle θ' between the line connecting the mouth and the nose and the vertical direction:
nose region start coordinate (x3, y3), length len3, width hei3; nose coordinate (xNose, yNose) = (x3 + len3/2, y3 + hei3/2);
mouth region start coordinate (x4, y4), length len4, width hei4; mouth coordinate (xMouth, yMouth) = (x4 + len4/2, y4 + hei4/2);
α2 = θ' = arctan((xNose - xMouth)/(yMouth - yNose));
(3) α = (α1 + α2)/2;
if α falls within the rotation angle interval [-60°, 60°], then sα = 1, otherwise sα = 0.
4. The method for recognizing peeping-type cheating action based on attention area according to claim 3, wherein the horizontal inclination angle β is calculated as follows:
(1) define l2 as the distance between the left and right eyes, p1 as the intersection point of the perpendicular dropped from the nose onto the line connecting the left and right eyes, and l1 as the distance from p1 to the left eye; according to the Pythagorean theorem, calculate
R1 = l1/l2;
β1 = 72.1R1 - 35.4;
(2) define p2 as the intersection point of the perpendicular dropped from the mouth onto the line between the left and right eyes, and l3 as the distance from p2 to the left eye;
R2 = l3/l2;
β2 = 72.1R2 - 35.4;
(3) Selecting the ratio between the area sizes of the left eye and the right eye as a correlation factor:
R3 = SleftEye/SrightEye = (len1·hei1)/(len2·hei2);
β3 = 49.1R3 - 48.9;
β = (β1 + β2 + β3)/3;
if β falls within the horizontal angle interval [-30°, 30°], then sβ = 1, otherwise sβ = 0.
5. The peeping cheating behavior identification method based on attention areas of claim 4, characterized in that the vertical tilt angle γ is calculated as follows:
(1) the length of the perpendicular dropped from the nose onto the line between the left and right eyes is defined as l4;
[The formula defining R4 from l4 is given as an image in the original and is not reproduced here.]
γ1 = 800R4² + 395R4 - 80;
(2) the length of the perpendicular dropped from the mouth onto the line between the left and right eyes is defined as l5;
[The formula defining R5 from l5 is given as an image in the original and is not reproduced here.]
γ2 = 800R5² - 141R5 - 122.5;
γ = (γ1 + γ2)/2;
if γ falls within the vertical angle interval [-40°, 20°], then sγ = 1, otherwise sγ = 0.
6. The peeping cheating behavior identification method based on attention areas of claim 5, characterized in that l1, l2, l3, l4 and l5 are calculated as follows:
l1 = √((xp1 - xLeftEye)² + (yp1 - yLeftEye)²);
l2 = √((xRightEye - xLeftEye)² + (yRightEye - yLeftEye)²);
l3 = √((xp2 - xLeftEye)² + (yp2 - yLeftEye)²);
l4 = √((xp1 - xNose)² + (yp1 - yNose)²);
l5 = √((xp2 - xMouth)² + (yp2 - yMouth)²).
7. The peeping cheating behavior identification method based on attention areas of claim 6, characterized in that the coordinates of the intersection points p1 and p2 are
xp = (c2 - c1)/(b2 - b1);
yp = (c1·b2 - c2·b1)/(b2 - b1);
wherein
a1 = 1;
b1 = -(yRightEye - yLeftEye)/(xRightEye - xLeftEye);
c1 = yLeftEye + b1·xLeftEye;
a2 = 1;
b2 = (xRightEye - xLeftEye)/(yRightEye - yLeftEye);
c2 = yNose + b2·xNose for p1, and c2 = yMouth + b2·xMouth for p2.
CN201911189014.1A 2019-11-28 2019-11-28 Peeping cheating behavior identification method based on attention area Active CN110874585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911189014.1A CN110874585B (en) 2019-11-28 2019-11-28 Peeping cheating behavior identification method based on attention area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911189014.1A CN110874585B (en) 2019-11-28 2019-11-28 Peeping cheating behavior identification method based on attention area

Publications (2)

Publication Number Publication Date
CN110874585A true CN110874585A (en) 2020-03-10
CN110874585B CN110874585B (en) 2023-04-18

Family

ID=69717737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911189014.1A Active CN110874585B (en) 2019-11-28 2019-11-28 Peeping cheating behavior identification method based on attention area

Country Status (1)

Country Link
CN (1) CN110874585B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5933527A (en) * 1995-06-22 1999-08-03 Seiko Epson Corporation Facial image processing method and apparatus
US20140114148A1 (en) * 2011-11-04 2014-04-24 Questionmark Computing Limited System and method for data anomaly detection process in assessments
CN102799893A (en) * 2012-06-15 2012-11-28 北京理工大学 Method for processing monitoring video in examination room
CN103208212A (en) * 2013-03-26 2013-07-17 陈秀成 Anti-cheating remote online examination method and system
US20170278417A1 (en) * 2014-08-27 2017-09-28 Eyessessment Technologies Ltd. Evaluating test taking
CN205835343U (en) * 2016-04-27 2016-12-28 深圳前海勇艺达机器人有限公司 A kind of robot with invigilator's function
CN106778676A (en) * 2016-12-31 2017-05-31 中南大学 A kind of notice appraisal procedure based on recognition of face and image procossing
WO2019080295A1 (en) * 2017-10-23 2019-05-02 上海玮舟微电子科技有限公司 Naked-eye 3d display method and control system based on eye tracking

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
熊碧辉; 周后盘; 黄经州; 阮益权; 周里程: "An attention detection method fusing gaze detection" *
程文冬; 付锐; 袁伟; 刘卓凡; 张名芳; 刘通: "Image-based detection and graded early warning of driver attention distraction" *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516074A (en) * 2021-07-08 2021-10-19 西安邮电大学 Online examination system anti-cheating method based on pupil tracking

Also Published As

Publication number Publication date
CN110874585B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN108108684B (en) Attention detection method integrating sight detection
CN110837784B (en) Examination room peeping and cheating detection system based on human head characteristics
TWI383325B (en) Face expressions identification
TWI250469B (en) Individual recognizing apparatus and individual recognizing method
EP3680794B1 (en) Device and method for user authentication on basis of iris recognition
US8698914B2 (en) Method and apparatus for recognizing a protrusion on a face
Batista A drowsiness and point of attention monitoring system for driver vigilance
CN101593352A (en) Driving safety monitoring system based on face orientation and visual focus
US8150118B2 (en) Image recording apparatus, image recording method and image recording program stored on a computer readable medium
JP6906717B2 (en) Status determination device, status determination method, and status determination program
JP2013513155A (en) Cost-effective and robust system and method for eye tracking and driver awareness
CN109711239B (en) Visual attention detection method based on improved mixed increment dynamic Bayesian network
TW201140511A (en) Drowsiness detection method
Giannakakis et al. Evaluation of head pose features for stress detection and classification
CN109101949A (en) A kind of human face in-vivo detection method based on colour-video signal frequency-domain analysis
CN108108651B (en) Method and system for detecting driver non-attentive driving based on video face analysis
CN111460950A (en) Cognitive distraction method based on head-eye evidence fusion in natural driving conversation behavior
JP4507679B2 (en) Image recognition apparatus, image extraction apparatus, image extraction method, and program
CN110874585B (en) Peeping cheating behavior identification method based on attention area
CN112633217A (en) Human face recognition living body detection method for calculating sight direction based on three-dimensional eyeball model
CN115937928A (en) Learning state monitoring method and system based on multi-vision feature fusion
CN113609963B (en) Real-time multi-human-body-angle smoking behavior detection method
CN111104817A (en) Fatigue detection method based on deep learning
CN111275754B (en) Face acne mark proportion calculation method based on deep learning
CN112528767A (en) Machine vision-based construction machinery operator fatigue operation detection system and method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant