CN114708543B - Examination student positioning method in examination room monitoring video image

Examination student positioning method in examination room monitoring video image

Info

Publication number
CN114708543B
Authority
CN
China
Prior art keywords
image data
monitoring video
examination room
video image
examinee
Prior art date
Legal status
Active
Application number
CN202210629393.7A
Other languages
Chinese (zh)
Other versions
CN114708543A (en)
Inventor
刘说
潘帆
李翔
赵启军
黄珂
杨玲
杨智鹏
Current Assignee
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Priority date
Filing date
Publication date
Application filed by Chengdu University of Information Technology
Priority to CN202210629393.7A
Publication of CN114708543A
Application granted
Publication of CN114708543B
Legal status: Active

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the field of image processing, and in particular to a method for locating examinees in examination room monitoring video images. The method first performs frame-selection marking of the examinees' head-hair regions, guided by how visible each examinee's ears are, on a large volume of examination room monitoring video image data covering different examination scenes and different examinees, and builds a head-hair-region data set. On this basis it applies a preliminary screening using high-false-alarm-rate target detection, then builds an SSD-based deep-learning target detection model to locate the examinees' head-hair regions, and thereby locates the examinees themselves.

Description

Examination student positioning method in examination room monitoring video image
Technical Field
The invention belongs to the field of image processing, and particularly relates to a method for positioning an examinee in an examination room monitoring video image.
Background
Examinations are used worldwide as an important means of assessment and selection because they can, to some extent, guarantee fairness and impartiality. Nevertheless, candidates resort to various cheating methods in order to pass, and video monitoring systems have therefore been deployed widely in examination rooms to uphold the principle of fairness. The presence of a video monitoring system, however, does not by itself solve the cheating problem.
Although video monitoring records examination room activity in full, determining whether cheating occurred still requires the relevant departments to invest substantial manpower in reviewing the recordings after the fact. A large proportion of the footage contains no cheating at all, yet every segment must be examined carefully, which creates a heavy workload. This motivates the automatic recognition of examinee behavior in examination room monitoring video, and the key problem that must be solved first is how to locate the examinees in the video.
Existing detection and localization methods for examination room monitoring video fall roughly into three categories: methods based on background subtraction, methods based on template matching, and methods based on image features. These methods suffer from a limited detection range, a strong dependence on the examination room layout, and similar problems.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a method for positioning an examinee in an examination room monitoring video image, which comprises the following steps:
step 1: performing frame-selection marking based on the examinees' head-hair regions on a large volume of examination room monitoring video image data covering different examination scenes and different examinees, and then establishing a head-hair-region data set from the examination room monitoring video image data;
step 2: establishing a deep-learning target detection model for locating the head-hair region of each examinee in the examination room monitoring video image data: first screening the pixels that may belong to hair in the examination room monitoring video image data to obtain preprocessed image data, and then performing SSD-based deep-learning target detection on the preprocessed image data;
step 3: dividing the established head-hair-region data set proportionally to generate a training data set and a test data set, then training and testing the established target detection deep-learning model to obtain the final target detection model M_ssd;
step 4: inputting the original examination room monitoring video image data into the final target detection model M_ssd to obtain the examinee positioning result.
Further, the frame-selection marking in step 1 proceeds as follows: the head-hair region of each examinee in the examination room monitoring video image data is marked with an approximate box according to the examinee's ear-exposure condition.
Further, for the purposes of this box marking, the ear-exposure condition falls into three cases: both ears exposed, one ear exposed, and no ears exposed.
Furthermore, the boxes are approximate in the sense that the horizontal and vertical edges of each generated box are parallel to the edges of the image data.
Furthermore, when both of the examinee's ears are exposed in the video image data, the frame-selection region is formed as follows: the lowest point of the hair-forehead boundary obtained by edge detection is taken as the bottom of the region; α1 times the distance from the bottom of the region to the topmost point of the hair (also obtained by edge detection) is taken as the frame height; and β1 times the longest distance between the hair and the background on the left and right sides is taken as the frame width; α1 and β1 are weighting coefficients.
Further, when one of the examinee's ears is exposed in the video image data, the frame-selection region is formed as follows: the point midway between the highest and lowest points of the hair-forehead boundary is taken as the bottom of the region; α2 times the distance from the bottom of the region to the topmost point of the hair is taken as the frame height; and β2 times the longest distance between the exposed ear's boundary with the hair and the opposite side's hair-background boundary is taken as the frame width; α2 and β2 are weighting coefficients.
Furthermore, when neither of the examinee's ears is exposed in the video image data, the highest point of the hair-forehead boundary is taken as the bottom of the frame-selection region, the distance from the bottom of the region to the topmost point of the hair is taken as the frame height, and the horizontal width of the forehead visible in the image data is taken as the frame width, forming the frame-selection region.
Further, in step 2, the pixels that may belong to hair in the examination room monitoring video image data are screened as follows to obtain the preprocessed image data. First, the image data is converted to grayscale, yielding the grayscale image data I_g. Then the value of every pixel in the grayscale image is inverted according to I_c(i,j) = 255 - I_g(i,j), yielding the gray-inverted image data I_c, where I_g(i,j) and I_c(i,j) are the gray values of the pixel with abscissa i and ordinate j in the image data I_g and I_c respectively. Constant false alarm rate (CFAR) target detection configured with a high false-alarm rate is then applied to I_c to obtain the screened image data; a threshold th is set, and the screened image data is binarized against it to obtain the preprocessed image data I_t.
Further, in step 2, the SSD-based deep-learning target detection on the preprocessed image data proceeds as follows: the binary detection result I_t is used as an index image; the coordinates of every pixel with a non-zero value in the index image are mapped onto the corresponding examination room monitoring video image data; and, taking the mapped pixels as anchor-box center points, a target detection model for locating the hair regions in the examination room monitoring video image data is built on the SSD target detection framework.
Further, in step 4, the original examination room monitoring video image data is input into the final target detection model M_ssd, and the examinee positioning result is obtained as follows: the model outputs the hair-region framing result; each framed region is then extended downward by Q times its own extent, and the updated region framing result is taken as the examinee positioning result.
The invention solves the following technical problems:
1. A frame-selection marking method based on the examinees' hair regions, driven by the visibility of the examinees' ears in the examination room monitoring video image data, improves the accuracy and reliability of the head-hair-region data set.
2. A preliminary screening of possible hair pixels using high-false-alarm-rate target detection effectively improves the accuracy of detecting the examinees' hair regions.
3. Using the binary detection result of the examination room monitoring video image data as an index image, and selecting anchors from it, improves the accuracy of hair-region detection while reducing the complexity of the target detection model.
Drawings
Fig. 1 is a flow chart of a method for positioning examinees in an examination room monitoring video image.
Detailed Description
The technical solution in the embodiments of the present invention is described below clearly and completely with reference to the drawings; a flow chart of the method is shown in Fig. 1.
A method for positioning examinees in an examination room monitoring video image comprises the following steps:
step 1: performing frame-selection marking based on the examinees' head-hair regions on a large volume of examination room monitoring video image data covering different examination scenes and different examinees, and then establishing a head-hair-region data set from the examination room monitoring video image data;
step 2: establishing a deep-learning target detection model for locating the head-hair region of each examinee in the examination room monitoring video image data: first screening the pixels that may belong to hair in the examination room monitoring video image data to obtain preprocessed image data, and then performing SSD-based deep-learning target detection on the preprocessed image data;
step 3: dividing the established head-hair-region data set proportionally to generate a training data set and a test data set, then training and testing the established target detection deep-learning model to obtain the final target detection model M_ssd;
step 4: inputting the original examination room monitoring video image data into the final target detection model M_ssd to obtain the examinee positioning result.
Further, the frame-selection marking in step 1 proceeds as follows: the head-hair region of each examinee in the examination room monitoring video image data is marked with an approximate box according to the examinee's ear-exposure condition.
Further, for the purposes of this box marking, the ear-exposure condition falls into three cases: both ears exposed, one ear exposed, and no ears exposed.
Furthermore, the boxes are approximate in the sense that the horizontal and vertical edges of each generated box are parallel to the edges of the image data.
Furthermore, when both of the examinee's ears are exposed in the video image data, the frame-selection region is formed as follows: the lowest point of the hair-forehead boundary obtained by edge detection is taken as the bottom of the region; α1 times the distance from the bottom of the region to the topmost point of the hair (also obtained by edge detection) is taken as the frame height; and β1 times the longest distance between the hair and the background on the left and right sides is taken as the frame width; α1 and β1 are weighting coefficients.
Further, when one of the examinee's ears is exposed in the video image data, the frame-selection region is formed as follows: the point midway between the highest and lowest points of the hair-forehead boundary is taken as the bottom of the region; α2 times the distance from the bottom of the region to the topmost point of the hair is taken as the frame height; and β2 times the longest distance between the exposed ear's boundary with the hair and the opposite side's hair-background boundary is taken as the frame width; α2 and β2 are weighting coefficients.
Furthermore, when neither of the examinee's ears is exposed in the video image data, the highest point of the hair-forehead boundary is taken as the bottom of the frame-selection region, the distance from the bottom of the region to the topmost point of the hair is taken as the frame height, and the horizontal width of the forehead visible in the image data is taken as the frame width, forming the frame-selection region.
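As a concrete illustration of the three framing rules above, the following Python sketch computes the frame-selection box for the both-ears-exposed case. It assumes the landmark coordinates (the low point of the hair-forehead boundary, the topmost hair point, and the left and right hair-background extremes) have already been produced by an edge detector; the function name, the horizontal centering of the box on the hair, and the default coefficient values are illustrative assumptions, not details fixed by the patent.

def frame_both_ears_exposed(hairline_low_y, hair_top_y,
                            hair_left_x, hair_right_x,
                            alpha1=1.0, beta1=1.0):
    # hairline_low_y: y of the lowest point of the hair/forehead boundary
    #                 (bottom edge of the box; image y grows downward)
    # hair_top_y:     y of the topmost hair pixel from edge detection
    # hair_left_x, hair_right_x: horizontal extremes of the hair/background
    #                 boundary on the left and right sides
    # alpha1, beta1:  the patent's weighting coefficients
    height = alpha1 * (hairline_low_y - hair_top_y)
    width = beta1 * (hair_right_x - hair_left_x)
    center_x = 0.5 * (hair_left_x + hair_right_x)
    x = center_x - 0.5 * width           # left edge; box assumed centred on the hair
    y = hairline_low_y - height          # top edge
    return (x, y, width, height)         # axis-aligned, edges parallel to the image

The one-ear and no-ear cases differ only in which landmarks define the bottom and the width of the box, so they can reuse the same structure.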
Further, in step 2, the pixels that may belong to hair in the examination room monitoring video image data are screened as follows to obtain the preprocessed image data. First, the image data is converted to grayscale, yielding the grayscale image data I_g. Then the value of every pixel in the grayscale image is inverted according to I_c(i,j) = 255 - I_g(i,j), yielding the gray-inverted image data I_c, where I_g(i,j) and I_c(i,j) are the gray values of the pixel with abscissa i and ordinate j in the image data I_g and I_c respectively. Constant false alarm rate (CFAR) target detection configured with a high false-alarm rate is then applied to I_c to obtain the screened image data; a threshold th is set, and the screened image data is binarized against it to obtain the preprocessed image data I_t.
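In an example implementation, this screening can be sketched in Python with OpenCV, NumPy and SciPy as follows. The patent does not fix a particular CFAR scheme, so the cell-averaging variant, the window sizes, the scale factor and the threshold value below are illustrative assumptions only.

import cv2
import numpy as np
from scipy.ndimage import uniform_filter

def preprocess(frame_bgr, th=200, guard=2, train=8, scale=1.2):
    # Step-2 screening: grayscale, inversion, CFAR screening, binarization.
    I_g = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    I_c = 255.0 - I_g                    # I_c(i,j) = 255 - I_g(i,j); dark hair becomes bright

    # Cell-averaging CFAR: estimate the local background of every pixel
    # from a training ring (large window minus a guard window around it).
    big = 2 * (train + guard) + 1
    small = 2 * guard + 1
    sum_big = uniform_filter(I_c, size=big) * (big * big)
    sum_small = uniform_filter(I_c, size=small) * (small * small)
    background = (sum_big - sum_small) / (big * big - small * small)

    # A low scale factor keeps the false-alarm rate deliberately high,
    # so few genuine hair pixels are lost at this preliminary stage.
    screened = np.where(I_c > scale * background, I_c, 0.0)

    I_t = (screened > th).astype(np.uint8)   # binarize against threshold th
    return I_t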
Further, in step 2, the SSD-based deep-learning target detection on the preprocessed image data proceeds as follows: the binary detection result I_t is used as an index image; the coordinates of every pixel with a non-zero value in the index image are mapped onto the corresponding examination room monitoring video image data; and, taking the mapped pixels as anchor-box center points, a target detection model for locating the hair regions in the examination room monitoring video image data is built on the SSD target detection framework.
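The anchor construction can be sketched in the same spirit: only pixels that are non-zero in the index image I_t spawn anchor boxes, which is what shrinks the search space of the SSD model. The anchor scales and aspect ratios below are illustrative assumptions; the patent does not specify them.

import numpy as np

def anchor_centers_from_index(I_t):
    # Coordinates of every non-zero pixel in the binary index image.
    ys, xs = np.nonzero(I_t)
    return list(zip(xs.tolist(), ys.tolist()))

def make_anchors(centers, scales=(24, 32, 48), ratios=(1.0, 1.5)):
    # One anchor box per (center, scale, ratio), as (x, y, w, h).
    anchors = []
    for cx, cy in centers:
        for s in scales:
            for r in ratios:
                w, h = s, s * r
                anchors.append((cx - w / 2.0, cy - h / 2.0, w, h))
    return anchors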
Further, in step 4, the original examination room monitoring video image data is input into the final target detection model M_ssd, and the examinee positioning result is obtained as follows: the model outputs the hair-region framing result; each framed region is then extended downward by Q times its own extent, and the updated region framing result is taken as the examinee positioning result.
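A minimal sketch of this step-4 post-processing, assuming detections arrive as (x, y, w, h) boxes with the y axis growing downward; the default value of Q is illustrative only:

def expand_downward(box, Q=2.0, image_h=None):
    # Stretch a detected hair box downward by Q times its own extent so the
    # frame covers the examinee's body, giving the final positioning box.
    x, y, w, h = box
    new_h = h * (1.0 + Q)                 # original box plus Q times its range
    if image_h is not None:
        new_h = min(new_h, image_h - y)   # clip at the bottom image border
    return (x, y, w, new_h)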
It should be apparent that the described embodiments are only some embodiments of the present invention, and not all embodiments. Other embodiments, which can be derived by one of ordinary skill in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

Claims (9)

1. A method for positioning examinees in an examination room monitoring video image is characterized by comprising the following steps:
step 1: performing frame-selection marking based on the examinees' head-hair regions on a large volume of examination room monitoring video image data covering different examination scenes and different examinees, specifically: marking the head-hair region of each examinee in the examination room monitoring video image data with an approximate box according to the examinee's ear-exposure condition; then establishing a head-hair-region data set from the examination room monitoring video image data;
step 2: establishing a deep-learning target detection model for locating the head-hair region of each examinee in the examination room monitoring video image data: first screening the pixels that may belong to hair in the examination room monitoring video image data to obtain preprocessed image data, and then performing SSD-based deep-learning target detection on the preprocessed image data;
step 3: dividing the established head-hair-region data set proportionally to generate a training data set and a test data set, then training and testing the established target detection deep-learning model to obtain the final target detection model M_ssd;
step 4: inputting the original examination room monitoring video image data into the final target detection model M_ssd to obtain the examinee positioning result.
2. The method according to claim 1, wherein the head-hair region of each examinee in the examination room monitoring video image data is box-marked according to the examinee's ear-exposure condition, the ear-exposure condition falling into three cases: both ears exposed, one ear exposed, and no ears exposed.
3. The method according to claim 1, wherein the head-hair region of each examinee in the examination room monitoring video image data is marked with an approximate box whose horizontal and vertical edges are parallel to the edges of the image data.
4. The method according to claim 2, wherein, if both ears of the examinee are exposed in the video image data, the frame-selection region is formed as follows: the lowest point of the hair-forehead boundary obtained by edge detection is taken as the bottom of the region; α1 times the distance from the bottom of the region to the topmost point of the hair obtained by edge detection is taken as the frame height; and β1 times the longest distance between the hair and the background on the left and right sides is taken as the frame width; α1 and β1 are weighting coefficients.
5. The method according to claim 2, wherein, if one ear of the examinee is exposed in the video image data, the frame-selection region is formed as follows: the point midway between the highest and lowest points of the hair-forehead boundary is taken as the bottom of the region; α2 times the distance from the bottom of the region to the topmost point of the hair is taken as the frame height; and β2 times the longest distance between the exposed ear's boundary with the hair and the opposite side's hair-background boundary is taken as the frame width; α2 and β2 are weighting coefficients.
6. The method according to claim 2, wherein, if neither ear of the examinee is exposed in the video image data, the highest point of the hair-forehead boundary is taken as the bottom of the frame-selection region, the distance from the bottom of the region to the topmost point of the hair is taken as the frame height, and the horizontal width of the forehead visible in the image data is taken as the frame width, forming the frame-selection region.
7. The method according to claim 1, wherein, in step 2, the pixels that may belong to hair in the examination room monitoring video image data are screened as follows to obtain the preprocessed image data: first, the image data is converted to grayscale, yielding grayscale image data I_g; then the value of every pixel in the grayscale image is inverted according to I_c(i,j) = 255 - I_g(i,j), yielding the gray-inverted image data I_c, where I_g(i,j) and I_c(i,j) are the gray values of the pixel with abscissa i and ordinate j in I_g and I_c respectively; CFAR target detection configured with a high false-alarm rate is applied to I_c to obtain the screened image data; a threshold th is set, and the screened image data is binarized to obtain the preprocessed image data I_t.
8. The method according to claim 1, wherein, in step 2, the SSD-based deep-learning target detection on the preprocessed image data proceeds as follows: the binary detection result I_t is used as an index image; the coordinates of every pixel with a non-zero value in the index image are mapped onto the corresponding examination room monitoring video image data; and, taking the mapped pixels as anchor-box center points, a target detection model for locating the hair regions in the examination room monitoring video image data is built on the SSD target detection framework.
9. The method according to claim 1, wherein, in step 4, the original examination room monitoring video image data is input into the final target detection model M_ssd to obtain the hair-region framing result; each framed region is extended downward by Q times its own extent to obtain the updated region framing result, and the updated region framing result is determined as the examinee positioning result.
CN202210629393.7A 2022-06-06 2022-06-06 Examination student positioning method in examination room monitoring video image Active CN114708543B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210629393.7A CN114708543B (en) 2022-06-06 2022-06-06 Examination student positioning method in examination room monitoring video image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210629393.7A CN114708543B (en) 2022-06-06 2022-06-06 Examination student positioning method in examination room monitoring video image

Publications (2)

Publication Number Publication Date
CN114708543A CN114708543A (en) 2022-07-05
CN114708543B (en) 2022-08-30

Family

ID=82177605

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210629393.7A Active CN114708543B (en) 2022-06-06 2022-06-06 Examination student positioning method in examination room monitoring video image

Country Status (1)

Country Link
CN (1) CN114708543B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310806B (en) * 2023-02-28 2023-08-29 北京理工大学珠海学院 Intelligent agriculture integrated management system and method based on image recognition


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112686965A (en) * 2020-12-25 2021-04-20 百果园技术(新加坡)有限公司 Skin color detection method, device, mobile terminal and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5430809A (en) * 1992-07-10 1995-07-04 Sony Corporation Human face tracking system
CN105678213A (en) * 2015-12-20 2016-06-15 华南理工大学 Dual-mode masked man event automatic detection method based on video characteristic statistics
CN106991360A (en) * 2016-01-20 2017-07-28 腾讯科技(深圳)有限公司 Face identification method and face identification system
CN107451555A (en) * 2017-07-27 2017-12-08 安徽慧视金瞳科技有限公司 A kind of hair based on gradient direction divides to determination methods
CN108260918A (en) * 2018-02-09 2018-07-10 武汉技兴科技有限公司 A kind of human hair information collecting method, device and intelligent clipping device
CN109711377A (en) * 2018-12-30 2019-05-03 陕西师范大学 Standardize examinee's positioning and method of counting in the single-frame images of examination hall monitoring

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Teaching Assistant and Class Attendance Analysis Using Surveillance Camera; Xiaofei Peng et al.; IFTC 2018: Digital TV and Multimedia Communication; 2019-05-11; pp. 413-422 *
基于视频的人体检测与计数技术研究 (Research on video-based human detection and counting technology); 王红梅; 中国优秀博硕士学位论文全文数据库(硕士)信息科技辑 (China Masters' Theses Full-text Database, Information Science and Technology); 2012-05-15; No. 05; I138-1385 *

Also Published As

Publication number Publication date
CN114708543A (en) 2022-07-05

Similar Documents

Publication Publication Date Title
US11551433B2 (en) Apparatus, method and computer program for analyzing image
US9977966B2 (en) System and method for identifying, analyzing, and reporting on players in a game from video
CN107240047B (en) Score evaluation method and device for teaching video
CN102622508B (en) Image processing apparatus and image processing method
CN108229526A (en) Network training, image processing method, device, storage medium and electronic equipment
CN109635875A (en) A kind of end-to-end network interface detection method based on deep learning
CN104202547B (en) Method, projection interactive approach and its system of target object are extracted in projected picture
WO2022001571A1 (en) Computing method based on super-pixel image similarity
CN110837795A (en) Teaching condition intelligent monitoring method, device and equipment based on classroom monitoring video
CN110689000B (en) Vehicle license plate recognition method based on license plate sample generated in complex environment
CN108846828A (en) A kind of pathological image target-region locating method and system based on deep learning
WO2021068781A1 (en) Fatigue state identification method, apparatus and device
CN106611160A (en) CNN (Convolutional Neural Network) based image hair identification method and device
CN108305253A (en) A kind of pathology full slice diagnostic method based on more multiplying power deep learnings
CN114708543B (en) Examination student positioning method in examination room monitoring video image
CN112215217B (en) Digital image recognition method and device for simulating doctor to read film
CN111709914A (en) Non-reference image quality evaluation method based on HVS characteristics
CN111339902A (en) Liquid crystal display number identification method and device of digital display instrument
WO2023160666A1 (en) Target detection method and apparatus, and target detection model training method and apparatus
CN113705349A (en) Attention power analysis method and system based on sight estimation neural network
CN111062953A (en) Method for identifying parathyroid hyperplasia in ultrasonic image
CN113782184A (en) Cerebral apoplexy auxiliary evaluation system based on facial key point and feature pre-learning
Kovalev et al. Biomedical image recognition in pulmonology and oncology with the use of deep learning
KR20100010973A (en) Method for automatic classifier of lung diseases
CN113743378A (en) Fire monitoring method and device based on video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant