CN113392776A - Seat leaving behavior detection method and storage device combining seat information and machine vision - Google Patents


Info

Publication number
CN113392776A
CN113392776A (application CN202110672225.1A)
Authority
CN
China
Prior art keywords
seat
frame
initialized
pedestrian
position frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110672225.1A
Other languages
Chinese (zh)
Other versions
CN113392776B (en)
Inventor
吴忠健
郑子建
孙丘伟
陈志昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sunsea Iot Technology Co ltd
Original Assignee
Shenzhen Qianfalcon Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qianfalcon Technology Co ltd filed Critical Shenzhen Qianfalcon Technology Co ltd
Priority to CN202110672225.1A priority Critical patent/CN113392776B/en
Publication of CN113392776A publication Critical patent/CN113392776A/en
Application granted granted Critical
Publication of CN113392776B publication Critical patent/CN113392776B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education

Landscapes

  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Primary Health Care (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Educational Administration (AREA)
  • Marketing (AREA)
  • Educational Technology (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of data processing, and in particular to a seat-leaving behavior detection method and storage device combining seat information and machine vision. The method comprises the following steps: performing target detection on target video stream data to obtain a first detection result and a second detection result; establishing an initialized seat table; updating the table to obtain real position frames; and judging whether the starting point of the trajectory table is in a sitting posture and falls within a real position frame; if so, judging whether the size relation between the pedestrian frame at the current moment in the trajectory table and the matched real position frame meets a preset condition, and if it does, judging that the detected object has left its seat. In this method the desk itself is not detected; instead, the desk position is inferred in reverse from the target detection results, which effectively avoids falsely detecting an out-of-seat event when the detected object makes large body movements, such as leaning out over the desk.

Description

Seat leaving behavior detection method and storage device combining seat information and machine vision
Technical Field
The invention relates to the technical field of data processing, in particular to a seat leaving behavior detection method and storage equipment combining seat information and machine vision.
Background
The smart classroom system is an important scenario of the smart campus. In this scenario, artificial-intelligence deep-learning methods detect and identify real-time classroom image data captured by a classroom camera, and the detection and identification results are then computed, counted, and analyzed to obtain indices for evaluating classroom teaching quality. A smart classroom system reduces the burden on teachers, provides continuous quality assessment of the teaching process, and helps schools improve teaching quality. Detecting and identifying when a detected object leaves its seat during class is a key part of real-time in-class data collection and one of the prerequisites for the statistical analysis indices.
Common methods for this kind of problem include background modeling, and deep-learning target detection combined with trajectory tracking.
The background modeling method distinguishes foreground objects (moving detected objects) from background information (the static scene) and from this infers that a detected object is moving. The problem: when a detected object makes large limb movements while seated, those movements are falsely detected as foreground and misidentified as the detected object moving about during class.
The deep-learning target detection and trajectory tracking method uses deep-learning target detection to detect human-body or face targets and then tracks them. The problem: body/face targets alone cannot distinguish a teacher from a student, so a student leaving a seat cannot be distinguished from a teacher walking around.
Disclosure of Invention
Therefore, a method for detecting seat-leaving behavior by combining seat information and machine vision is provided, to address the high false-detection and missed-detection rates of existing methods for detecting that an object has left its seat. The specific technical scheme is as follows:
a method for off-seat behavior detection in conjunction with seat information and machine vision, comprising the steps of:
carrying out target detection on target video stream data through a pre-established deep learning model to obtain a first detection result and a second detection result, wherein the first detection result comprises: a pedestrian attribute detection frame for each detected object, the pedestrian attribute detection frame including: a pedestrian frame, the second detection result comprising: a track table of the detected object;
executing a first preset operation on the first detection result to establish an initialized seat table, wherein the initialized seat table comprises at least one alternative position frame;
analyzing the second detection result, and updating an alternative position frame in the initialized seat table according to the analysis result, wherein the updating operation comprises: updating an alternative position frame in the existing initialized seat table to obtain a real position frame, and/or adding new position information to the initialized seat table;
judging whether the posture at the starting point of the trajectory table of the detected object is a sitting posture and whether the starting point falls within a real position frame; if both hold, judging whether the size relation between the pedestrian frame at the current moment in the trajectory table and the corresponding real position frame meets a preset condition; and if it does, judging that the detected object is in an out-of-seat state.
Further, the step of "performing a first preset operation on the first detection result to establish an initialized seat table" specifically includes the steps of:
the pedestrian attribute detection frame further includes: a human head frame and/or a human face frame;
and calculating the proportion value of the human head frame and the pedestrian frame and/or the proportion value of the human face frame and the pedestrian frame, screening out the pedestrian frame with the sitting posture according to the proportion values, and confirming the initialized seat table.
Further, the step of "analyzing the second detection result and updating the alternative position frames in the initialized seat table according to the analysis result, wherein the updating operation comprises: updating an alternative position frame in the existing initialized seat table to obtain a real position frame, and/or adding new position information to the initialized seat table" specifically comprises the steps of:
the contents of the track table store include, but are not limited to: pedestrian attribute detection frames and postures of each detected object, wherein the pedestrian attribute detection frames comprise but are not limited to: a pedestrian frame, a head frame and a face frame;
screening out table elements whose posture is a sitting posture, and performing IOU calculation between the pedestrian frames corresponding to those table elements and the alternative position frames in the initialized seat table; when the IOU between an alternative position frame and the pedestrian frame corresponding to a table element is greater than or equal to a preset value, incrementing the head count and the face count of that alternative position frame by 1;
if the IOU between the alternative position frames and the pedestrian frame corresponding to a table element is smaller than the preset value, adding that pedestrian frame to the initialized seat table as a new alternative position frame;
and when the numbers of the human heads and the human faces corresponding to the alternative position frame are accumulated to a preset number, marking the alternative position frame as a real position frame.
Further, the method also comprises the following steps:
and when the number of a certain face corresponding to the real position frame is accumulated to a preset number, judging that the real position frame corresponds to the real position frame of the person.
Further, the step of determining whether the posture of the starting point of the trajectory table of the detected object is a sitting posture and determining whether the starting point falls in the real position frame includes the steps of:
IOU calculation is performed between the pedestrian frame corresponding to the starting point of the trajectory table of the detected object and the real position frames in the initialized seat table, and it is judged, from the calculation result and the information stored at the starting point, whether the starting point falls within a real position frame and whether the posture of the detected object at the starting point is a sitting posture.
Further, the step of "further determining whether a size relationship between the pedestrian frame at the current time in the trajectory table and the corresponding real position frame meets a preset condition, and if the size relationship meets the preset condition, determining that the detected object is in an out-of-seat state" specifically includes the steps of:
and performing IOU calculation on the pedestrian frame at the current moment and the matched real position frame in the trajectory table, judging whether the IOU value meets a preset threshold value, and if so, judging that the detected object is in an out-of-seat state.
Further, the method also comprises the following steps:
repeating the step of analyzing the second detection result, and updating the alternative position frame in the initialized seat table according to the analysis result, wherein the updating operation comprises the following steps: updating an alternative position frame in the existing initialized seat table to obtain a real position frame, and/or adding new position information to the initialized seat table;
judging whether the posture at the starting point of the trajectory table of the detected object is a sitting posture and whether the starting point falls within a real position frame; if both hold, judging whether the size relation between the pedestrian frame at the current moment in the trajectory table and the corresponding real position frame meets a preset condition, and if it does, judging that the detected object is in an out-of-seat state, until a preset time is reached.
Further, the method also comprises the following steps:
and if the size relation between the pedestrian frame at the current moment and the matched real position frame in the track table does not accord with the preset condition, deleting the track cache, and carrying out track tracking on the detection object again.
Further, the initialized seat table is used to record the accumulated number of times a human head and/or face appears in each alternative position frame.
In order to solve the technical problem, the storage device is further provided, and the specific technical scheme is as follows:
a storage device having stored therein a set of instructions for carrying out any of the steps of a method of off-seat behavior detection in combination with seat information and machine vision as set forth in the preceding claims.
The invention has the beneficial effects that: a method for off-seat behavior detection in conjunction with seat information and machine vision, comprising the steps of: carrying out target detection on target video stream data through a pre-established deep learning model to obtain a first detection result and a second detection result, wherein the first detection result comprises: a pedestrian attribute detection frame for each detected object, the pedestrian attribute detection frame including: a pedestrian frame, the second detection result comprising: a track table of the detected object; executing a first preset operation on the first detection result to establish an initialized seat table, wherein the initialized seat table comprises at least one alternative position frame; analyzing the second detection result, and updating an alternative position frame in the initialized seat table according to the analysis result, wherein the updating operation comprises: updating an alternative position frame in the existing initialized seat table to obtain a real position frame, and/or adding new position information to the initialized seat table; judging whether the initial point posture of the track table of the detected object is a sitting posture or not and judging whether the initial point falls in the real position frame or not, if the initial point posture of the track table of the detected object is the sitting posture and the initial point falls in the real position frame, judging whether the size relation between the pedestrian frame at the current moment in the track table and the corresponding real position frame meets a preset condition or not, and if the size relation meets the preset condition, judging that the detected object is in an out-of-seat state. 
In the method, the desk itself is not detected; its position is instead inferred in reverse from the target detection results, so the difficulties of desk detection are avoided. Moreover, even when the detected object makes large body movements while seated, it is still recognized as sitting and is not misidentified as moving about during class. Various problems in the prior art are thus well solved.
Furthermore, the alternative position frames are updated using two parameters, the head count and the face count; only when both counts for an alternative position frame reach a preset number is the frame promoted, which effectively filters out the teacher's position on the platform and other places misidentified as seats. How the teacher's position on the podium is filtered out is described further in the embodiments below.
Drawings
FIG. 1 is a flow diagram of a method for off-seat behavior detection incorporating seat information and machine vision according to an embodiment;
fig. 2 is a schematic block diagram of a storage device according to an embodiment.
Description of reference numerals:
200. a storage device.
Detailed Description
To explain technical contents, structural features, and objects and effects of the technical solutions in detail, the following detailed description is given with reference to the accompanying drawings in conjunction with the embodiments.
First, the core technical idea of the present application is explained: the desk is not detected directly. Instead, alternative position frames are calculated in reverse from target detection (mainly from the ratio of the head frame or face frame to the pedestrian frame), and these alternative frames are then refined to determine the real position frames. Once the real position frames are determined, the trajectory data of each detected object is analyzed: detected objects whose trajectory starting point is in a sitting posture and located at a real position are screened out, and it is then judged whether each such object leaves the seat at any later moment of the trajectory; if so, an out-of-seat event is reported. Because the desk is not detected but inferred from target detection, the method effectively avoids falsely detecting an out-of-seat event when the detected object makes large body movements, such as leaning out over the desk. In addition, if desks were detected directly, the desk detection algorithm would have to be re-tuned for every scene, since desks differ between scenes; this troublesome step is entirely unnecessary here. Various problems in the prior art are thus well solved.
The following is a detailed description:
referring to fig. 1, in the present embodiment, a method for detecting out-of-seat behavior in combination with seat information and machine vision can be applied to a storage device, including but not limited to: personal computers, servers, general purpose computers, special purpose computers, network devices, embedded devices, programmable devices, intelligent mobile terminals, etc.
In this embodiment, the actual scene may be a classroom, with the camera that acquires the target video stream mounted at the side of the classroom platform. When the teacher sits normally on the platform facing the students below, the camera captures only the back of the teacher's head and not the teacher's face. Therefore, in the present application, when an alternative position frame is updated, both the head count and the face count are accumulated, and the frame is promoted to a real position frame only when each count reaches a preset number. Because alternative position frames are determined from the ratio of the head frame (or face frame) to the pedestrian frame, a seated teacher would also produce an alternative position frame; but the aim of the application is to detect whether students leave their seats. In the subsequent trajectory table, the head and face counts of sitting detected objects are accumulated, and since the teacher faces the camera at most occasionally during class, the teacher's head and face counts differ markedly from a student's; setting preset head and face counts therefore screens out the teacher's position well. Teachers and students are thus distinguished without face recognition. When the teacher walks among the students, the teacher is excluded directly because the posture is not a sitting posture.
The following is detailed below:
step S101: carrying out target detection on target video stream data through a pre-established deep learning model to obtain a first detection result and a second detection result, wherein the first detection result comprises: a pedestrian attribute detection frame for each detected object, the pedestrian attribute detection frame including: a pedestrian frame, the second detection result comprising: and a track table of the detected object. The deep learning model can directly use the existing deep learning model, such as: YOLOV5 or MobileNet-SSD. In the present embodiment, the detected objects are preferably students, and the following description is given by an application in a smart classroom, but it should be noted that the method can be applied to detection of whether the detected objects are out of seat in any other places, such as whether workers are out of seat in a factory, whether workers are out of seat in an office, and the like, in addition to detection of students in a smart classroom.
In this embodiment, the target video stream comes from real-time video data captured by a camera in a smart classroom, preferably recorded after class begins.
Step S102: performing a first preset operation on the first detection result to establish an initialized seat table containing at least one alternative position frame. Specifically: the pedestrian attribute detection frame further includes a head frame and/or a face frame; the ratio of the pedestrian frame to the head frame and/or to the face frame is calculated, pedestrian frames in a sitting posture are screened out according to these ratios, and the initialized seat table is then confirmed. For example, if the ratio of the pedestrian-frame height to the head-frame height is less than 1.8, the posture is judged to be a sitting posture and the pedestrian frame's position is taken as an alternative position frame. A position framed by an alternative position frame in the initialized seat table may still be the teacher's seat or another false position; for example, a student might merely stand and stoop in some scene and be misidentified as occupying a seat that does not actually exist. The initialized seat table is therefore updated in step S103 to determine the real position frames.
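The sitting-posture screen of step S102 can be sketched as follows. This is a minimal illustration only: the (x1, y1, x2, y2) box format is an assumption, and the 1.8 ratio is the example threshold from this embodiment, not a fixed value of the method.

```python
def is_sitting(pedestrian_box, head_box, ratio_threshold=1.8):
    """Judge sitting posture from the pedestrian-to-head height ratio.

    Boxes are (x1, y1, x2, y2). A standing person is many head-heights
    tall, so a small pedestrian/head height ratio suggests a seated
    person whose lower body is hidden behind the desk.
    """
    ped_h = pedestrian_box[3] - pedestrian_box[1]
    head_h = head_box[3] - head_box[1]
    if head_h <= 0:  # degenerate head frame: cannot judge, treat as not sitting
        return False
    return ped_h / head_h < ratio_threshold
```

Frames passing this screen become alternative position frames in the initialized seat table.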
Step S103: analyzing the second detection result and updating the alternative position frames in the initialized seat table according to the analysis result, where the updating operation comprises: updating an alternative position frame in the existing initialized seat table to obtain a real position frame, and/or adding new position information to the initialized seat table. Specifically: the contents stored in the trajectory table include, but are not limited to, the pedestrian attribute detection frames and postures of each detected object, where the pedestrian attribute detection frames include, but are not limited to, a pedestrian frame, a head frame and a face frame. Table elements whose posture is a sitting posture are screened out, and IOU calculation is performed between their pedestrian frames and the alternative position frames in the initialized seat table. When the IOU between an alternative position frame and a table element's pedestrian frame is greater than or equal to a preset value, the head and face counts of that alternative position frame are incremented by 1; if the IOU is smaller than the preset value, the table element's pedestrian frame is added to the initialized seat table as a new alternative position frame. When the head and face counts of an alternative position frame both reach a preset number, the frame is marked as a real position frame.
The IOU calculation specifically means dividing the area of the intersection of the pedestrian frame and the position frame by the area of their union, yielding the intersection-over-union ratio.
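The IOU described here is the standard intersection-over-union; a minimal sketch, again assuming (x1, y1, x2, y2) boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes give 1.0, disjoint boxes give 0.0, and partial overlap falls in between.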
In the method, it is only determined that an alternative position frame is a real position frame; there is no need to determine who sits there, so the computational cost is low. Another possibility, for more precise localization, is to additionally determine which person corresponds to each real position frame; the advantage is that the specific detected object that frequently leaves the seat can be identified. In that case the method further comprises: when the count of a particular face corresponding to a real position frame reaches a preset number, determining that the real position frame is that person's seat.
For example: the initialized seat table established in step S102 contains five alternative position frames a1, a2, a3, a4 and a5. The trajectory tables of the detected objects are processed to accumulate head and face counts for sitting postures, and it is judged whether heads and faces frequently appear in each of the five frames and whether the counts reach the preset number; if so, the frame is judged to be a real position frame. If a particular face frequently appears at a real position frame, that frame is that person's seat.
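The update loop of step S103 can be sketched as below. The dictionary layout, the 0.5 IOU matching threshold, and the promotion count of 3 are illustrative assumptions; the embodiment only specifies that both counts must reach a preset number before a frame becomes a real position frame.

```python
def iou(box_a, box_b):
    """Standard intersection-over-union of (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def update_seat_table(seat_table, sitting_elements, iou_thresh=0.5, promote_count=3):
    """Accumulate head/face counts for alternative position frames, and
    promote a frame to a real position frame once both counts reach
    promote_count.

    seat_table entries: {"box": box, "head_count": int, "face_count": int, "real": bool}
    sitting_elements: trajectory-table entries already screened for sitting
    posture, each {"pedestrian": box, "has_head": bool, "has_face": bool}.
    """
    for elem in sitting_elements:
        matched = False
        for seat in seat_table:
            if iou(seat["box"], elem["pedestrian"]) >= iou_thresh:
                seat["head_count"] += int(elem["has_head"])
                seat["face_count"] += int(elem["has_face"])
                matched = True
        if not matched:  # no alternative frame matched: add a new candidate
            seat_table.append({"box": elem["pedestrian"],
                               "head_count": 0, "face_count": 0,
                               "real": False})
    for seat in seat_table:
        if seat["head_count"] >= promote_count and seat["face_count"] >= promote_count:
            seat["real"] = True
    return seat_table
```

Because a teacher's face rarely appears to the camera, a teacher's frame accumulates a head count but not a face count and is never promoted, which is how the dual-count rule filters the podium position.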
Step S104: is the posture at the starting point of the trajectory table of the detected object a sitting posture, and does the starting point fall within a real position frame? Specifically: IOU calculation is performed between the pedestrian frame corresponding to the starting point of the trajectory table and the real position frames in the initialized seat table, and it is judged, from the calculation result and the information stored at the starting point, whether the starting point falls within a real position frame and whether the posture of the detected object at the starting point is a sitting posture.
In this embodiment, a threshold may be set; when the IOU between the pedestrian frame corresponding to the starting point of the trajectory table and a real position frame in the initialized seat table is greater than the threshold, the starting point is judged to fall within that real position frame, and step S105 is then performed.
Step S105: does the size relation between the pedestrian frame at the current moment in the trajectory table and the corresponding real position frame meet the preset condition? If so, step S106 is executed: the detected object is judged to be in the out-of-seat state. Specifically: IOU calculation is performed between the pedestrian frame at the current moment in the trajectory table and the matched real position frame, and it is judged whether the IOU value meets a preset threshold; if it does, the detected object is judged to have left the seat. In this embodiment the preset threshold may be set to 0; in other embodiments a small value may be chosen according to actual needs.
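The out-of-seat decision of steps S105 to S106 then reduces to a single IOU test; a sketch under the same box-format assumption, using the threshold of 0 from this embodiment (the IOU helper is repeated so the sketch is self-contained):

```python
def iou(box_a, box_b):
    """Standard intersection-over-union of (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def is_out_of_seat(current_pedestrian_box, real_seat_box, iou_threshold=0.0):
    """The detected object has left the seat when its current pedestrian
    frame no longer overlaps its real position frame beyond the threshold."""
    return iou(current_pedestrian_box, real_seat_box) <= iou_threshold
```

With the default threshold of 0, the object is declared out-of-seat only once its pedestrian frame no longer overlaps the seat frame at all.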
If the size relation between the pedestrian frame at the current moment in the trajectory table and the matched real position frame does not meet the preset condition, the trajectory cache is deleted and trajectory tracking of the detected object is performed again.
The initialized seat table described above is used to record the accumulated number of times a human head and/or face appears in each alternative position frame.
Steps S103 to S106 may be repeated throughout the class until a preset time is reached, for example until the next class begins.
A method for off-seat behavior detection in conjunction with seat information and machine vision, comprising the steps of: carrying out target detection on target video stream data through a pre-established deep learning model to obtain a first detection result and a second detection result, wherein the first detection result comprises: a pedestrian attribute detection frame for each detected object, the pedestrian attribute detection frame including: a pedestrian frame, the second detection result comprising: a track table of the detected object; executing a first preset operation on the first detection result to establish an initialized seat table, wherein the initialized seat table comprises at least one alternative position frame; analyzing the second detection result, and updating an alternative position frame in the initialized seat table according to the analysis result, wherein the updating operation comprises: updating an alternative position frame in the existing initialized seat table to obtain a real position frame, and/or adding new position information to the initialized seat table; judging whether the initial point posture of the track table of the detected object is a sitting posture or not and judging whether the initial point falls in the real position frame or not, if the initial point posture of the track table of the detected object is the sitting posture and the initial point falls in the real position frame, judging whether the size relation between the pedestrian frame at the current moment in the track table and the corresponding real position frame meets a preset condition or not, and if the size relation meets the preset condition, judging that the detected object is in an out-of-seat state. 
This method sidesteps the difficulty that desks themselves cannot be reliably detected: rather than detecting desks, seat positions are inferred in reverse from the target detection results. At the same time, even when a detected object moves its body substantially while remaining on its seat, it is still recognized as being in a sitting posture rather than misidentified as moving around during class. Several problems in the prior art are thereby resolved.
Furthermore, the alternative position frames are updated using two counters: the number of human-head appearances and the number of human-face appearances. Only when both counts for an alternative position frame accumulate to a preset number is the frame confirmed, which effectively filters out the teacher's position on the platform and other locations misidentified as seats, thereby distinguishing teachers from students.
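As an illustration of the counting scheme described above (again, not code from the patent; the matching threshold `iou_match`, the promotion count `promote_count`, and the data layout are hypothetical), one pass of the seat-table update might look like this:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def update_seat_table(seat_table, sitting_detections,
                      iou_match=0.5, promote_count=30):
    """One update pass over the initialized seat table.

    seat_table: list of dicts {"frame": box, "heads": int, "faces": int,
                               "real": bool}.
    sitting_detections: detections already screened for sitting posture,
                        each with "pedestrian_frame", "has_head", "has_face".
    """
    for det in sitting_detections:
        matched = next((e for e in seat_table
                        if iou(det["pedestrian_frame"], e["frame"]) >= iou_match),
                       None)
        if matched is None:
            # No alternative position frame overlaps enough:
            # add this pedestrian frame as a new candidate.
            seat_table.append({"frame": det["pedestrian_frame"],
                               "heads": 0, "faces": 0, "real": False})
        else:
            matched["heads"] += 1 if det["has_head"] else 0
            matched["faces"] += 1 if det["has_face"] else 0
            # A teacher who only passes by a spot never accumulates
            # enough counts, so only true seats get promoted.
            if (matched["heads"] >= promote_count
                    and matched["faces"] >= promote_count):
                matched["real"] = True
    return seat_table
```

Calling this once per analyzed frame gradually turns repeatedly occupied positions into real position frames while transient positions remain unpromoted candidates.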
Referring to fig. 2, in the present embodiment, a storage device 200 is implemented as follows:
a storage device 200 in which a set of instructions is stored, the set of instructions being used to perform any of the steps of the method for off-seat behavior detection combining seat information and machine vision described above.
It should be noted that, although the above embodiments have been described herein, the invention is not limited thereto. Changes and modifications made to these embodiments based on the innovative concept of the present invention, as well as equivalent structures or equivalent processes derived from the content of this specification and the accompanying drawings, whether applied directly or indirectly to other related technical fields, all fall within the scope of protection of the present invention.

Claims (10)

1. A method for detecting an out-of-seat behavior in conjunction with seat information and machine vision, comprising the steps of:
carrying out target detection on target video stream data through a pre-established deep learning model to obtain a first detection result and a second detection result, wherein the first detection result comprises: a pedestrian attribute detection frame for each detected object, the pedestrian attribute detection frame including: a pedestrian frame, the second detection result comprising: a track table of the detected object;
executing a first preset operation on the first detection result to establish an initialized seat table, wherein the initialized seat table comprises at least one alternative position frame;
analyzing the second detection result, and updating an alternative position frame in the initialized seat table according to the analysis result, wherein the updating operation comprises: updating an alternative position frame in the existing initialized seat table to obtain a real position frame, and/or adding new position information to the initialized seat table;
judging whether the starting-point posture in the track table of the detected object is a sitting posture and whether the starting point falls within the real position frame; if the starting-point posture is a sitting posture and the starting point falls within the real position frame, judging whether the size relationship between the pedestrian frame at the current moment in the track table and its corresponding real position frame meets a preset condition; and if the size relationship meets the preset condition, judging that the detected object is in an out-of-seat state.
2. The method for detecting an out-of-seat behavior in combination with seat information and machine vision according to claim 1, wherein the step of performing a first preset operation on the first detection result to establish an initialized seat table further comprises the steps of:
the pedestrian attribute detection frame further includes: a human head frame and/or a human face frame;
and calculating the ratio of the human head frame to the pedestrian frame and/or the ratio of the human face frame to the pedestrian frame, screening out the pedestrian frames in a sitting posture according to the ratios, and then confirming the initialized seat table.
3. The method for detecting the behavior of leaving a seat by combining seat information and machine vision according to claim 1, wherein the step of "analyzing the second detection result and updating the alternative position frame in the initialized seat table according to the analysis result, wherein the updating operation comprises: updating an alternative position frame in the existing initialized seat table to obtain a real position frame, and/or adding new position information to the initialized seat table" specifically comprises the following steps:
the contents of the track table store include, but are not limited to: pedestrian attribute detection frames and postures of each detected object, wherein the pedestrian attribute detection frames comprise but are not limited to: a pedestrian frame, a head frame and a face frame;
screening out table elements with postures of sitting postures, performing IOU calculation by using pedestrian frames corresponding to the table elements and alternative position frames in an initialized seat table, and when the IOU of the alternative position frame in a certain initialized seat table and the IOU of the pedestrian frame corresponding to the table elements are greater than or equal to a preset value, accumulating 1 for the number of the head and the face corresponding to the alternative position frame in the certain initialized seat table;
if the IOU of the candidate position frame in a certain initialized seat table and the pedestrian frame corresponding to the table element is smaller than a preset value, adding the pedestrian frame corresponding to the table element to the initialized seat table as a new candidate position frame;
and when the numbers of the human heads and the human faces corresponding to the alternative position frame are accumulated to a preset number, marking the alternative position frame as a real position frame.
4. The method of claim 3 for detecting out-of-seat behavior in combination with seat information and machine vision, further comprising the steps of:
and when the count of a particular human face corresponding to the real position frame accumulates to a preset number, determining that the real position frame is the real position frame of the person with that face.
5. The method for detecting an out-of-seat behavior by combining seat information and machine vision according to claim 1, wherein the step of judging whether the starting-point posture in the track table of the detected object is a sitting posture and judging whether the starting point falls within the real position frame further comprises the steps of:
performing an IOU calculation between the pedestrian frame corresponding to the starting point of the track table of the detected object and the real position frames in the initialized seat table, and judging, according to the calculation result and the information stored at the starting point, whether the starting point falls within a real position frame and whether the posture of the detected object at the starting point is a sitting posture.
6. The method for detecting a behavior of leaving a seat by combining seat information and machine vision according to claim 1, wherein the step of judging whether the size relationship between the pedestrian frame at the current moment in the track table and its corresponding real position frame meets a preset condition, and if so, judging that the detected object is in an out-of-seat state, further comprises the steps of:
performing an IOU calculation between the pedestrian frame at the current moment in the track table and its matched real position frame, judging whether the IOU value meets a preset threshold, and if so, judging that the detected object is in an out-of-seat state.
7. The method of detecting out-of-seat behavior in combination with seat information and machine vision as claimed in claim 1, further comprising the steps of:
repeating the steps of analyzing the second detection result and updating the alternative position frame in the initialized seat table according to the analysis result, wherein the updating operation comprises: updating an alternative position frame in the existing initialized seat table to obtain a real position frame, and/or adding new position information to the initialized seat table;
and judging whether the starting-point posture in the track table of the detected object is a sitting posture and whether the starting point falls within the real position frame; if so, judging whether the size relationship between the pedestrian frame at the current moment in the track table and its corresponding real position frame meets a preset condition, and if the size relationship meets the preset condition, judging that the detected object is in an out-of-seat state; the above steps being repeated until a preset time is reached.
8. The method of detecting out-of-seat behavior in combination with seat information and machine vision as claimed in claim 1, further comprising the steps of:
and if the size relationship between the pedestrian frame at the current moment in the track table and its matched real position frame does not meet the preset condition, deleting the track cache and tracking the trajectory of the detected object again.
9. The method for detecting out-of-seat behavior in combination with seat information and machine vision according to any one of claims 1 to 8, wherein the initialized seat table is used to record the accumulated number of times a human head and/or human face appears in each alternative position frame.
10. A storage device having a set of instructions stored therein, wherein the set of instructions is configured to perform the steps of the method according to any one of claims 1 to 9.
CN202110672225.1A 2021-06-17 2021-06-17 Seat leaving behavior detection method and storage device combining seat information and machine vision Active CN113392776B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110672225.1A CN113392776B (en) 2021-06-17 2021-06-17 Seat leaving behavior detection method and storage device combining seat information and machine vision


Publications (2)

Publication Number Publication Date
CN113392776A true CN113392776A (en) 2021-09-14
CN113392776B CN113392776B (en) 2022-07-12

Family

ID=77621598

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110672225.1A Active CN113392776B (en) 2021-06-17 2021-06-17 Seat leaving behavior detection method and storage device combining seat information and machine vision

Country Status (1)

Country Link
CN (1) CN113392776B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677644A (en) * 2022-03-31 2022-06-28 北京理工大学 Student seating distribution identification method and system based on classroom monitoring video

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018204921A (en) * 2017-06-09 2018-12-27 アズビル株式会社 Human detection device and method
CN109711320A (en) * 2018-12-24 2019-05-03 兴唐通信科技有限公司 A kind of operator on duty's unlawful practice detection method and system
CN109902628A (en) * 2019-02-28 2019-06-18 广州大学 A kind of seat Management System of Library of view-based access control model Internet of Things
CN111104816A (en) * 2018-10-25 2020-05-05 杭州海康威视数字技术股份有限公司 Target object posture recognition method and device and camera
CN111161313A (en) * 2019-12-16 2020-05-15 华中科技大学鄂州工业技术研究院 Multi-target tracking method and device in video stream
CN111860152A (en) * 2020-06-12 2020-10-30 浙江大华技术股份有限公司 Method, system, equipment and computer equipment for detecting personnel state
CN112183304A (en) * 2020-09-24 2021-01-05 高新兴科技集团股份有限公司 Off-position detection method, system and computer storage medium
CN112200088A (en) * 2020-10-10 2021-01-08 普联技术有限公司 Sitting posture monitoring method, device, equipment and system
CN112470231A (en) * 2018-07-26 2021-03-09 索尼公司 Information processing apparatus, information processing method, and program
CN112580584A (en) * 2020-12-28 2021-03-30 苏州科达科技股份有限公司 Method, device and system for detecting standing behavior and storage medium



Also Published As

Publication number Publication date
CN113392776B (en) 2022-07-12

Similar Documents

Publication Publication Date Title
CN111507283B (en) Student behavior identification method and system based on classroom scene
CN109034013B (en) Face image recognition method, device and storage medium
JP5001260B2 (en) Object tracking method and object tracking apparatus
CN110674785A (en) Multi-person posture analysis method based on human body key point tracking
CN105940430B (en) Personnel's method of counting and its device
CN105426827A (en) Living body verification method, device and system
CN101406390B (en) Method and apparatus for detecting part of human body and human, and method and apparatus for detecting objects
CN109284733A (en) A kind of shopping guide's act of omission monitoring method based on yolo and multitask convolutional neural networks
CN110443210A (en) A kind of pedestrian tracting method, device and terminal
CN109298785A (en) A kind of man-machine joint control system and method for monitoring device
CN110718067A (en) Violation behavior warning method and related device
CN111767823A (en) Sleeping post detection method, device, system and storage medium
CN110674680B (en) Living body identification method, living body identification device and storage medium
KR20180020123A (en) Asynchronous signal processing method
CN111814587A (en) Human behavior detection method, teacher behavior detection method, and related system and device
CN113850183A (en) Method for judging behaviors in video based on artificial intelligence technology
CN113392776B (en) Seat leaving behavior detection method and storage device combining seat information and machine vision
CN113705510A (en) Target identification tracking method, device, equipment and storage medium
CN115311111A (en) Classroom participation evaluation method and system
CN110580708B (en) Rapid movement detection method and device and electronic equipment
CN111241926A (en) Attendance checking and learning condition analysis method, system, equipment and readable storage medium
CN107256375A (en) Human body sitting posture monitoring method before a kind of computer
CN105631410A (en) Classroom detection method based on intelligent video processing technology
CN112149517A (en) Face attendance checking method and system, computer equipment and storage medium
CN111008601A (en) Fighting detection method based on video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211123

Address after: 518000 room 1702, block B, innovation building, 198 Daxin Road, Nantou street, Nanshan District, Shenzhen, Guangdong

Applicant after: SHENZHEN SUNSEA IOT TECHNOLOGY CO.,LTD.

Address before: 518129 1a902, Jiahua Linghui Plaza, Jihua Road, Bantian street, Longgang District, Shenzhen City, Guangdong Province

Applicant before: Shenzhen qianfalcon Technology Co.,Ltd.

GR01 Patent grant