CN109431523A - Autism primary screening apparatus based on a non-social sound stimulation behavioral paradigm - Google Patents


Info

Publication number
CN109431523A
CN109431523A
Authority
CN
China
Prior art keywords
feature
module
autism
asocial
data
Prior art date
Legal status
Granted
Application number
CN201811220432.8A
Other languages
Chinese (zh)
Other versions
CN109431523B (en)
Inventor
Li Ming (李明)
Zou Xiaobing (邹小兵)
Current Assignee
Duke Kunshan University
Third Affiliated Hospital Sun Yat Sen University
Original Assignee
Duke Kunshan University
Third Affiliated Hospital Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by Duke Kunshan University and Third Affiliated Hospital Sun Yat Sen University
Priority: CN201811220432.8A
Publication of CN109431523A
Application granted
Publication of CN109431523B
Legal status: Active
Anticipated expiration

Links

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033: Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/0059: Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165: Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/167: Personality evaluation
    • A61B5/48: Other medical applications
    • A61B5/4803: Speech analysis specially adapted for diagnostic purposes

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Social Psychology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses an autism primary screening apparatus based on a non-social sound stimulation behavioral paradigm, comprising an acquisition module: a controllable sound-producing object for attracting the child's attention, a remote control for triggering the object, and a camera, used to collect audio and video data of the child's behavioral responses to the non-social sound stimulus and to the doctor's instructions; a preprocessing module: the audio data is processed with voice activity detection to find the time at which the sound-producing object starts sounding, and the ten seconds of video following that starting point are clipped as the video analysis data; a feature extraction module: the video data is analyzed frame by frame to obtain face, gaze, and gesture features; a training and classification module: a classifier is trained on the extracted features to obtain a model for predicting ASD; a prediction module: the face, gaze, and gesture features computed by the feature extraction module are used to assess and predict the subject's ASD risk. The present invention is suitable for early behavioral-phenotype screening and auxiliary diagnosis of ASD.

Description

Autism primary screening apparatus based on a non-social sound stimulation behavioral paradigm
Technical field
The present invention relates to the field of multimedia audio and video processing, and more particularly to an autism primary screening apparatus based on a non-social sound stimulation behavioral paradigm. Based on speech processing and computer vision, the apparatus collects and analyzes the face, gaze, and gesture information of the observed subject, serving as an early screening device for assessing the risk factor of autism spectrum disorder.
Background technique
Autism spectrum disorder (ASD) is a neurodevelopmental disorder. Patients exhibit social communication deficits, repetitive stereotyped behaviors, narrow interests, and so on, and medical staff must judge each suspected patient in terms of behavioral phenotypes. Although existing ASD assessment procedures are highly effective, they are time-consuming and labor-intensive, demand considerable medical knowledge and clinical experience, and their results carry a degree of subjectivity. The most widely used instruments include the Autism Diagnostic Observation Schedule (ADOS).
These procedural, rule-based ASD criteria allow a preliminary diagnosis of suspected patients, and it is precisely this procedural character that makes it possible to apply computer technology to autism diagnosis. When judging whether a suspected patient has ASD, clinicians jointly consider the subject's name-calling response, pointing response, following behavior, and so on, and give the final result based on the doctor's comprehensive judgment. On the computer side, the subject can be scored on each item, and the scores can then be combined by data-science methods to predict the subject's risk factor of having ASD. By choosing a suitable and efficient algorithm in this process, a computer can substitute for the doctor's experience and carry out automated early ASD screening of children.
Speech processing and computer vision techniques can assist doctors and parents in the automated early screening of children suspected of ASD, mainly by using cameras and recognition algorithms to jointly evaluate the child's face, gaze, and hand movements during the pointing-response step of the autism assessment, thereby providing computational support for the final diagnosis. However, the child is relatively far from the camera, so the eyes, the hands, and even the whole tested child occupy only a small area of the video frame. Extracting accurate information from such data and using it to assess the ASD risk factor is a challenging problem.
Summary of the invention
In view of the above problems of the prior art, the purpose of the present invention is to provide an autism primary screening apparatus based on a non-social sound stimulation behavioral paradigm. The invention assesses a child's behavioral responses to a non-social sound stimulus and to a doctor's instructions in order to predict the child's risk factor for autism spectrum disorder, and can be used for early autism screening.
To achieve the above object, the present invention is realized according to the following technical scheme:
An autism primary screening apparatus based on a non-social sound stimulation behavioral paradigm, characterized by comprising:
An acquisition module: a controllable sound-producing object for attracting the child's attention, a remote control for triggering the object, and a camera, used to collect audio and video data of the child's behavioral responses to the non-social sound stimulus and to the doctor's instructions;
A preprocessing module: the audio data is processed with voice activity detection to find the time at which the sound-producing object starts sounding, and the ten seconds of video following that starting point are clipped as the video analysis data;
A feature extraction module: the video data is analyzed frame by frame to obtain face, gaze, and gesture features;
A training and classification module: a classifier is trained on the extracted features to obtain a model for predicting ASD;
A prediction module: the face, gaze, and gesture features computed by the feature extraction module are used to assess and predict the subject's ASD risk.
In the above technical scheme, the feature extraction module comprises a frontal face detection unit, a gaze tracking unit, and a gesture detection unit. The frontal face detection unit detects the child's frontal face in the extracted video data and obtains facial features; if the child looks toward the sound-producing object and the camera next to the object detects a frontal face, a smaller window containing only the face is generated from the facial region and used as the subsequent image analysis data. The gaze tracking unit further judges, from the extracted face image data, whether the gaze is concentrated on the target object, obtaining the gaze feature. The gesture detection unit analyzes the video data frame by frame, locates the experimenters, removes the background color, locates and crops the hand image data, trains a gesture classifier model on the extracted data, and detects pointing gestures with this model to obtain the gesture feature.
In the above technical scheme, the gesture detection unit locates the child's hand in each video frame through the following steps:
Step S1: a trained image-based object detection and recognition algorithm model extracts the positions of the doctor and the tested child in the image;
Step S2: according to the human skin color range, the parts of the picture whose color values fall within this range are retained and the regions outside the range are set to black, so that the picture is filtered by color value and the background color is removed;
Step S3: final localization uses a cascade classifier based on Haar features, the cascade classifier being a cascade of several weak classifiers. When a weak classifier is constructed, a detection window slides over the picture from left to right and from top to bottom; each sliding window evaluates one kind of feature, computes a result value, and compares it with a threshold to judge whether the window matches this feature and to classify it. When the image in a window passes the inspection of the classifier, the window is considered to contain the expected target.
In the above technical scheme, the training and classification module uses a supervised learning classifier to train a classification on the labeled features.
In the above technical scheme, the prediction module predicts the autism risk factor using, respectively, a supervised learning classifier method and a weighted summation method.
Compared with the prior art, the invention has the following advantages:
The invention proposes a machine-learning-based framework: an integrated software and hardware device that collects the subject's audio and video data and analyzes face, gaze, and gesture features to predict ASD. Compared with the traditional ADOS assessment, the proposed method is significantly more objective and does not require the participation of an experienced doctor. Although the proposed device cannot completely replace traditional ASD diagnostic methods, it can serve as an auxiliary device for ASD risk assessment, making early ASD screening more accurate and convenient.
Detailed description of the invention
In order to explain the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is the overall framework diagram of the machine-learning-based autism primary screening apparatus of the present invention;
Fig. 2 is a flow diagram of the pointing response;
Fig. 3 is a layout diagram of the acquisition site.
Specific embodiment
In order to make the objects, technical solutions, and advantages of the embodiments of the invention clearer, the technical solutions in the embodiments are described clearly and completely below in conjunction with the drawings. Obviously, the described embodiments are only a part of the embodiments of the invention, not all of them.
To address the high labor cost, long diagnosis intervals, and subjectivity of traditional ASD diagnostic methods, the present invention quantifies the individual indices of the pointing response: it extracts the subject's face, gaze, and gesture features and trains on these features to assess the risk factor of autism spectrum disorder, thereby enabling early ASD screening while reducing labor cost and saving a large amount of time.
As shown in Fig. 1, the autism primary screening apparatus of the invention based on a non-social sound stimulation behavioral paradigm comprises:
An acquisition module: a controllable sound-producing object for attracting the child's attention, a remote control for triggering the object, and a camera, used to collect audio and video data of the child's behavioral responses to the non-social sound stimulus and to the doctor's instructions;
Specifically, the subject and the medical staff are located at the center of the experimental site. A camera is placed where the subject does not face it directly but can see it with peripheral vision (diagonally in front). An airplane-shaped toy whose sound can be triggered remotely is suspended directly above the camera, and the medical staff control the toy's sounding. The camera records the entire experiment as raw data.
In the present embodiment, the subjects of the ASD behavior analysis data are 115 children, including 58 children diagnosed with autism and 57 typically developing children. Their ages range from 24 to 48 months, so they have sufficient behavioral response capability. Each child participating in the experiment goes through one complete pointing-response test procedure, whose flow is shown in Fig. 2. Fig. 3 depicts the layout of the acquisition site.
A preprocessing module: the audio data is processed with voice activity detection to find the time at which the sound-producing object starts sounding, and the ten seconds of video following that starting point are clipped as the video analysis data;
In the present embodiment, an energy-based voice activity detection (VAD) algorithm detects the moment the toy starts sounding and records it as the start of the experiment; the following ten seconds of video content are clipped as the audio and video data to be processed.
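Such an energy-based VAD can be sketched as follows; the frame length and the max-energy threshold rule are illustrative assumptions, not parameters disclosed in the patent.

```python
import numpy as np

def detect_onset(signal, sr, frame_ms=20, threshold_ratio=0.5):
    """Return the time (s) of the first frame whose short-time energy
    exceeds threshold_ratio * the maximum frame energy, or None if silent."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sum(frames.astype(float) ** 2, axis=1)
    if energy.max() == 0:
        return None
    onset = int(np.argmax(energy >= threshold_ratio * energy.max()))
    return onset * frame_len / sr

# Synthetic example: 1 s of silence followed by 1 s of a 440 Hz tone.
sr = 8000
t = np.arange(sr) / sr
audio = np.concatenate([np.zeros(sr), np.sin(2 * np.pi * 440 * t)])
onset_time = detect_onset(audio, sr)
```

In the apparatus, the ten seconds of video starting at `onset_time` would then be clipped as the analysis segment.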
A feature extraction module: the video data is analyzed frame by frame to obtain face, gaze, and gesture features;
The feature extraction module of the invention comprises a frontal face detection unit, a gaze tracking unit, and a gesture detection unit. The frontal face detection unit detects the child's frontal face in the extracted video data and obtains facial features; if the child looks toward the sound-producing object and the camera next to the object detects a frontal face, a smaller window containing only the face is generated from the facial region and used as the subsequent image analysis data. The gaze tracking unit further judges, from the extracted face image data, whether the gaze is concentrated on the target object, obtaining the gaze feature. The gesture detection unit analyzes the video data frame by frame, locates the experimenters, removes the background color, locates and crops the hand image data, trains a gesture classifier model on the extracted data, accurately detects pointing gestures with this model, and obtains the gesture feature.
Specifically, the frontal face detection unit of the invention uses the DLib library (http://dlib.net/) to detect frontal faces. Without tuning, DLib face detection fails on many profile (side) faces; this very characteristic is exploited to screen out the cases where the child's face is turned sideways to the camera. Then, for each detected frontal face, a smaller window containing only the face is regenerated from the facial region and used as the subsequent image analysis data.
Specifically, the gaze tracking unit is trained end to end with a convolutional neural network and can achieve fairly accurate results on a mobile phone; but because the experiment collects data with an ordinary standalone camera, the condition for judging that the tested child faces the camera must be modified. The vertical coordinate of the gaze is ignored, and the horizontal-coordinate interval is chosen from the actual setup: the tested child sits on the left side of the picture, so the "facing" decision interval is shifted to the right. In this experiment the interval [-9, 1] was chosen; that is, the tested child is considered to be looking straight at the camera, i.e. at the target object, only when -9 ≤ x ≤ 1.
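The horizontal-coordinate decision rule can be expressed directly in code. The aggregation of per-frame decisions into a single gaze feature is our illustrative assumption; the patent does not specify it.

```python
def looking_at_target(gaze_x, lo=-9.0, hi=1.0):
    """Decision rule from the experiment: the child is judged to be looking
    at the camera (i.e. the target object) only when lo <= x <= hi;
    the vertical coordinate of the gaze is ignored."""
    return lo <= gaze_x <= hi

def gaze_feature(xs):
    """Fraction of frames in which the gaze is on the target: one simple
    (assumed) way to turn per-frame decisions into a single feature."""
    if not xs:
        return 0.0
    return sum(looking_at_target(x) for x in xs) / len(xs)
```

For example, per-frame gaze abscissas `[-10, 0, 1, 5]` would yield a gaze feature of 0.5.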
The gesture detection unit of the invention locates the child's hand in each video frame through the following steps:
Step S1: a trained YOLO (You Only Look Once) algorithm model extracts the positions of the doctor and the tested child in the image. This is equivalent to obtaining two sub-images and filtering the remaining background information out of the original picture. A Fast Region-based Convolutional Network (Fast R-CNN) algorithm may also be used.
Step S2: according to the human skin color range, the parts of the picture whose color values fall within this range are retained and the regions outside the range are set to black, so that the picture is filtered by color value and the background color is removed;
Although finding the positions of the people in each frame removes much background information, the person's body and the background pressed against the body still remain, so the hand still cannot be located in real time. To remove this remaining interference, a further filtering operation based on skin color can be applied.
For separating human skin color, the conical HSV (Hue, Saturation, Value) space can be used instead of the RGB space. According to the statistics of the HSV space, human skin color lies within 9 ≤ h ≤ 15, 50 ≤ s ≤ 255, 50 ≤ v ≤ 255. With this delimited range, the parts of the picture whose color values fall inside it are retained and the regions outside it are set to black, so that the picture is filtered by color value.
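The skin-color filtering of Step S2 can be sketched with plain NumPy, assuming OpenCV-style HSV scales (h in 0-179, s and v in 0-255), which the quoted 9 ≤ h ≤ 15 range suggests:

```python
import numpy as np

def skin_mask(hsv):
    """Keep pixels whose (h, s, v) fall in the stated skin range
    (9 <= h <= 15, 50 <= s <= 255, 50 <= v <= 255) and set all
    other pixels to black."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    keep = (h >= 9) & (h <= 15) & (s >= 50) & (s <= 255) & (v >= 50) & (v <= 255)
    out = np.zeros_like(hsv)
    out[keep] = hsv[keep]
    return out

# 1x2 HSV image: one skin-colored pixel, one non-skin (e.g. green) pixel.
img = np.array([[[12, 120, 200], [90, 120, 200]]], dtype=np.uint8)
filtered = skin_mask(img)
```

With a real frame, `hsv` would come from a BGR-to-HSV conversion (e.g. OpenCV's `cvtColor`); only the masking logic is shown here.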
Step S3: final localization uses a cascade classifier based on Haar features, the cascade classifier being a cascade of several weak classifiers. When a weak classifier is constructed, a detection window slides over the picture from left to right and from top to bottom; each sliding window evaluates one kind of feature, computes a result value, and compares it with a threshold to judge whether the window matches this feature and to classify it. One such test is enough to build a weak classifier. Several weak classifiers are combined into a tree-like cascade structure, forming a strong classifier. When the image in a window passes the inspection of this strong classifier, i.e. finally yields a positive result, the program considers the window to contain the desired target. This classifier can classify gestures, so as to select the gesture pointing at the target object.
After the hand image data is located and cropped, the gesture detection unit uses Microsoft's open-source deep learning toolkit (Computational Network Toolkit, CNTK) with ResNet_18 as the base model for transfer learning. In practice, the last feature layer of the model is cut away and replaced with a convolutional layer trained on the collected data. Modifying a mature model in this way avoids the need for huge amounts of data: only a few hundred images are needed as a training set to train a transfer-learning classifier with satisfactory performance.
The cascade classifier based on Haar features can find, in an image region, objects of the same class as the positively labeled data; it is a common image object localization algorithm. On the data used in this project, a classifier built this way effectively rejects the face and other regions of the person in the image. Two important elements are used in this algorithm: Haar features and the cascade classifier.
A Haar feature reflects the gray-level changes of an image: pixels are grouped into sub-blocks and their differences are computed. Haar features fall into edge features, linear features, center features, and diagonal features, and are widely used in computer vision. They comprise three kinds of operators: edge operators, line operators, and rectangle operators. Each operator contains two symmetric regions, and the value of the whole operator is obtained by subtracting the pixel sum of one region from the pixel sum of the other. Using Haar features avoids operating directly on individual pixels and thus speeds up the whole program. In addition, to accelerate the computation, the integral image method can be used: while traversing the image matrix it records, for each point, the pixel sum of the sub-matrix between that point and the origin, so that the time complexity of later computing the pixel sum of any sub-matrix is greatly reduced.
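The integral image and a two-rectangle (edge-type) Haar response can be sketched as follows; the example image is synthetic.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[:y, :x]; a zero row and column are padded
    so rectangle sums need no edge cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Pixel sum of img[y0:y1, x0:x1] in four table lookups (O(1))."""
    return int(ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0])

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
# Edge-type Haar response over the whole image: left half minus right half.
haar_edge = rect_sum(ii, 0, 0, 4, 2) - rect_sum(ii, 0, 2, 4, 4)
```

However many rectangles a detector evaluates, each sum costs only four lookups, which is the speedup the paragraph above describes.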
In practical applications, besides the face, gaze, and gesture information, two continuous time-recording variables can also be added: the reaction time from the start of the step until the tested child's attention reaches the target object, and the duration from the moment the child's attention reaches the target object until the attention shifts away from it. This gives 5 features in total as the data for training the model.
A training and classification module: a classifier is trained on the extracted features to obtain a model for predicting ASD. The training and classification module of the invention uses a support vector machine to train a classification on the labeled features.
The training and classification module of the invention uses a supervised learning classifier to train a classification on the labeled features; supervised learning classifiers include support vector machines, logistic regression, k-nearest neighbors, neural networks, and so on.
In the present embodiment, a support vector machine (SVM) is used to classify the labeled features. A linear SVM learns a boundary from the training data that separates the two classes of features with the largest possible margin. But the features obtained in this embodiment are not linearly separable, so a kernel SVM is used in this embodiment, which distinguishes the features by introducing a nonlinear boundary.
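Why a nonlinear kernel helps can be illustrated with a minimal kernel perceptron on XOR-like data, which no linear boundary separates. This is a didactic stand-in for the kernel SVM of the embodiment, not the patent's implementation.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel, a common choice for a nonlinear SVM."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_perceptron(X, y, kernel, epochs=20):
    """Dual perceptron: alpha[i] counts mistakes on sample i; the decision
    function is sign(sum_i alpha_i * y_i * K(x_i, x))."""
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for j in range(len(X)):
            f = sum(alpha[i] * y[i] * kernel(X[i], X[j]) for i in range(len(X)))
            pred = np.sign(f) if f != 0 else 1.0
            if pred != y[j]:
                alpha[j] += 1
    return alpha

# XOR data: not linearly separable, but separable with an RBF kernel.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])
alpha = kernel_perceptron(X, y, rbf)
preds = [int(np.sign(sum(alpha[i] * y[i] * rbf(X[i], x) for i in range(4))))
         for x in X]
```

A kernel SVM adds margin maximization and regularization on top of this dual form; the kernel trick itself, replacing dot products with K(x, x'), is the same.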
A prediction module: the face, gaze, and gesture features computed by the feature extraction module are used to assess and predict the subject's ASD risk.
The prediction module of the invention predicts the autism risk factor using, respectively, the SVM method and the weighted summation method.
The prediction module predicts the autism risk factor using, respectively, a supervised learning classifier method and a weighted summation method. As before, supervised learning classifiers include support vector machines, logistic regression, k-nearest neighbors, neural networks, and so on.
When the SVM method is used, the subject is tested with the ASD prediction model obtained from the training and classification module, and the subject's risk factor for autism is predicted.
When the weighted summation method is used, the marks have already been standardized to a common range: the quality of the tested child's performance is judged with a score of 0 to 2. Since these features lie in the same order of magnitude, or even the same range, the scores of the various features can be directly weighted and summed; the weight coefficients are tuned according to the effect on a development set, and the obtained result is compared with a preset threshold to obtain the risk factor for autism.
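A minimal sketch of the weighted summation rule follows. The weights, threshold, and score polarity (low overall score taken as high risk) are illustrative assumptions; per the patent, the actual weights would be tuned on a development set.

```python
def risk_by_weighted_sum(scores, weights, threshold):
    """scores: per-feature marks, each already standardized to 0-2;
    weights: coefficients tuned on a development set (values below are
    illustrative). Returns (weighted score, at-risk flag); a score below
    the preset threshold is flagged as high ASD risk (assumed polarity)."""
    total = sum(w * s for w, s in zip(weights, scores))
    return total, total < threshold

# Five features: face, gaze, gesture, reaction time, attention duration.
weights = [0.3, 0.25, 0.2, 0.15, 0.1]
good = risk_by_weighted_sum([2, 2, 2, 1, 2], weights, threshold=1.0)
poor = risk_by_weighted_sum([0, 1, 0, 0, 0], weights, threshold=1.0)
```

Here the child scoring well on every item is not flagged, while the child who rarely responds is.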
The experimental setup of the invention is as follows:
1: Obtaining test data and training data.
Specifically, a cross-comparison (leave-one-out) strategy is used: the data of each individual is selected in turn as the test data, while the data of the remaining participants serve as the training data.
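This leave-one-out splitting can be sketched as:

```python
def leave_one_out(n):
    """Yield (test_index, train_indices) pairs: each participant in turn
    is the single test datum, the rest are the training data."""
    for i in range(n):
        yield i, [j for j in range(n) if j != i]

# With the 115 participants of the embodiment this gives 115 folds,
# each training on 114 children and testing on the remaining one.
splits = list(leave_one_out(4))  # small n for illustration
```

Each fold retrains the classifier on its training indices and evaluates on the held-out child; accuracies are then averaged over the folds.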
2: Prediction method.
Besides the SVM, the scores of the various features are directly weighted and summed, and the result is compared with a preset threshold; this also predicts the child's risk factor for ASD.
3: Assessment criteria.
Frontal face detection accuracy: the ratio of the number of subjects whose head turning is accurately detected to the total number of subjects.
Gaze detection accuracy: the ratio of the number of subjects whose gaze fixation on the target object is accurately detected to the total number of subjects.
Gesture detection accuracy: the ratio of the number of subjects whose pointing at the target object is accurately detected to the total number of subjects.
ASD risk assessment accuracy: the ratio of the number of subjects accurately predicted to be at high ASD risk to the number of children actually diagnosed with ASD.
The experimental results of this experiment are shown in Table 1:
Table 1
The invention is well validated on the pointing response: through the extraction of a series of features, a good prediction accuracy of 72% can be achieved.
Although the invention has been described in detail above with general explanations and specific embodiments, some modifications or improvements can be made on the basis of the invention, as will be apparent to those skilled in the art. Therefore, such modifications or improvements made without departing from the spirit of the invention all fall within the scope of the claimed invention.

Claims (5)

1. An autism primary screening apparatus based on a non-social sound stimulation behavioral paradigm, characterized by comprising:
an acquisition module: a controllable sound-producing object for attracting the child's attention, a remote control for triggering the object, and a camera, used to collect audio and video data of the child's behavioral responses to the non-social sound stimulus and to the doctor's instructions;
a preprocessing module: the audio data is processed with voice activity detection to find the time at which the sound-producing object starts sounding, and the ten seconds of video following that starting point are clipped as the video analysis data;
a feature extraction module: the video data is analyzed frame by frame to obtain face, gaze, and gesture features;
a training and classification module: a classifier is trained on the extracted features to obtain a model for predicting ASD;
a prediction module: the face, gaze, and gesture features computed by the feature extraction module are used to assess and predict the subject's ASD risk.
2. the autism primary screening apparatus according to claim 1 based on asocial's sonic stimulation behavior normal form, It is characterized in that, the characteristic extracting module includes positive face detection unit, sight tracing unit, gestures detection unit;The positive face Detection unit detects the positive face of children, obtains facial characteristics to extracted video data, if children see to the sound producing body, And by the camera detection by sound producing body to positive face, it will be directed to facial area again and generate a smaller window only comprising face Mouthful, data are analyzed as subsequent image;The sight tracing unit further judges sight to extracted face image data Whether concentrate on object, obtains expression in the eyes feature;The gestures detection unit analyzes video data frame by frame, positioning experiment personnel Position rejects backcolor, hand image data is positioned and be syncopated as to hand, is trained to extracted data To gesture sorter model, detection gesture is directed toward on the basis of this model, obtains gesture feature.
3. The autism primary screening device based on the non-social voice stimulation behavior paradigm according to claim 2, wherein the gesture detection unit locates the position of the child's hand in each frame of the video through the following steps:
Step S1: extract the positions of the doctor and the tested child from the image using a trained image-based object detection and recognition model;
Step S2: according to the range of human skin color, retain the parts of the picture whose color values fall within this range and set the regions outside it to black, thereby filtering the picture by color value and completing the background removal;
Step S3: perform the final localization with a cascade classifier based on Haar features. The cascade classifier is formed by cascading several weak classifiers; a detection window slides over the picture from left to right and from top to bottom, and at each position a weak classifier evaluates one feature, computes its value, and compares it with a threshold to decide whether the window satisfies that feature and how to classify it. When the image in a window passes the inspection of the whole classifier, the window is considered to contain the target.
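Steps S2 and S3 can be sketched in a few lines: a skin-color filter that keeps pixels whose Cr/Cb values fall in a typical skin range and blackens the rest, and a single Haar-like two-rectangle feature evaluated through an integral image and compared against a threshold, the way a weak classifier in the cascade does. The Cr/Cb ranges are commonly used defaults, not values from the patent:

```python
import numpy as np

# --- Step S2: keep skin-colored pixels, set everything else to black ---
def skin_mask(rgb, cr_range=(133, 173), cb_range=(77, 127)):
    """Convert an H x W x 3 RGB picture to YCrCb and keep only pixels whose
    Cr/Cb components fall inside the given skin range; regions outside the
    range are set to black."""
    r, g, b = (rgb[..., c].astype(float) for c in range(3))
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128
    cb = (b - y) * 0.564 + 128
    mask = ((cr >= cr_range[0]) & (cr <= cr_range[1]) &
            (cb >= cb_range[0]) & (cb <= cb_range[1]))
    out = rgb.copy()
    out[~mask] = 0
    return out, mask

# --- Step S3: one Haar-like feature via an integral image ---
def integral_image(gray):
    """ii[y, x] = sum of gray[0:y, 0:x]; padded so rectangle sums need no
    special cases at the borders."""
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1))
    ii[1:, 1:] = gray.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle at (x, y) with size w x h, in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, x, y, w, h):
    """Left-minus-right two-rectangle Haar feature inside a w x h window."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

def weak_classifier(ii, x, y, w, h, threshold, polarity=1):
    """A weak classifier: compare the feature value with a threshold to
    decide whether the window satisfies the feature."""
    return polarity * haar_two_rect(ii, x, y, w, h) > polarity * threshold
```

A full cascade chains many such weak classifiers (with learned thresholds and weights) so that most windows are rejected cheaply by the first stages.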
4. The autism primary screening device based on the non-social voice stimulation behavior paradigm according to claim 1, wherein the training and classification module uses a supervised learning classifier to perform training and classification on the labeled features.
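Claim 4 only requires some supervised learning classifier trained on the labeled features; the patent does not name which one. As a stand-in, a minimal logistic-regression trainer written with NumPy gradient descent:

```python
import numpy as np

def train_supervised(X, y, lr=0.5, epochs=1000):
    """Train a logistic-regression classifier (a stand-in for whatever
    supervised classifier the module actually uses) on labeled feature
    vectors X with binary labels y, by plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        grad = p - y                            # gradient of the log-loss
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(w, b, X):
    """Threshold the predicted probability at 0.5 to get a class label."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return (p > 0.5).astype(int)
```

In practice the feature vectors would be the per-child face, gaze, and gesture features, and the labels the clinical ASD diagnoses of the training cohort.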
5. The autism primary screening device based on the non-social voice stimulation behavior paradigm according to claim 1, wherein the prediction module predicts the degree of autism risk using a supervised-learning-classifier method and a weighted summation method, respectively.
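The weighted-summation side of the prediction can be sketched as a normalized weighted average of per-feature abnormality scores. The feature names and weights below are hypothetical; the patent publishes neither:

```python
def autism_risk_degree(feature_scores, weights):
    """Weighted summation of per-feature abnormality scores (each in [0, 1],
    e.g. 1.0 = no orienting response observed), normalized by the total
    weight so the result is again in [0, 1]."""
    assert set(feature_scores) == set(weights), "score/weight keys must match"
    total = sum(weights.values())
    return sum(weights[k] * feature_scores[k] for k in weights) / total
```

For example, with hypothetical weights `{"face": 0.3, "gaze": 0.4, "gesture": 0.3}` and scores `{"face": 1.0, "gaze": 1.0, "gesture": 0.0}`, the risk degree is 0.7.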
CN201811220432.8A 2018-10-19 2018-10-19 Autism primary screening device based on non-social voice stimulation behavior paradigm Active CN109431523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811220432.8A CN109431523B (en) 2018-10-19 2018-10-19 Autism primary screening device based on non-social voice stimulation behavior paradigm


Publications (2)

Publication Number Publication Date
CN109431523A true CN109431523A (en) 2019-03-08
CN109431523B CN109431523B (en) 2022-08-30

Family

ID=65547201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811220432.8A Active CN109431523B (en) 2018-10-19 2018-10-19 Autism primary screening device based on non-social voice stimulation behavior paradigm

Country Status (1)

Country Link
CN (1) CN109431523B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102354349A (en) * 2011-10-26 2012-02-15 华中师范大学 Human-machine interaction multi-mode early intervention system for improving social interaction capacity of autistic children
CN104219992A (en) * 2012-03-21 2014-12-17 国立大学法人浜松医科大学 Asperger's diagnosis assistance method and system, and asperger's diagnosis assistance device
CN104298969A (en) * 2014-09-25 2015-01-21 电子科技大学 Crowd scale statistical method based on color and HAAR feature fusion
CN107785061A (en) * 2017-10-10 2018-03-09 东南大学 Autism-spectrum disorder with children mood ability interfering system
CN108563995A (en) * 2018-03-15 2018-09-21 西安理工大学 Human computer cooperation system gesture identification control method based on deep learning


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
DAVID GRODBERG ET AL.: "The Autism Mental Status Exam: Sensitivity and Specificity Using DSM-5 Criteria for Autism Spectrum Disorder in Verbally Fluent Adults", 《J AUTISM DEV DISORD》 *
DAWSON, G. ET AL.: "Early social attention impairments in autism: Social orienting, joint attention,and attention to distress", 《DEVELOPMENTAL PSYCHOLOGY》 *
WENBO LIU ET AL: "Response to name: A dataset and a multimodal machine learning framework towards autism study", 《2017 SEVENTH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION (ACII)》 *
WENBO LIU ET. AL: "Efficient Autism Spectrum Disorder Prediction with EyeMovement: A Machine Learning Framework", 《2015 INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION (ACII)》 *
郭纪昌: "孤独症谱系障碍多角度研究", 《绥化学院学报》 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109381200A (en) * 2018-10-18 2019-02-26 昆山杜克大学 Autism primary screening apparatus based on separation reunion normal form and computer vision
CN110363129A (en) * 2019-07-05 2019-10-22 昆山杜克大学 Autism early screening system based on smile normal form and audio-video behavioural analysis
CN110349674A (en) * 2019-07-05 2019-10-18 昆山杜克大学 Autism-spectrum obstacle based on improper activity observation and analysis assesses apparatus and system
CN110349667A (en) * 2019-07-05 2019-10-18 昆山杜克大学 The autism assessment system analyzed in conjunction with questionnaire and multi-modal normal form behavioral data
CN110353703A (en) * 2019-07-05 2019-10-22 昆山杜克大学 Autism based on language paradigm behavioural analysis of repeating the words of others like a parrot assesses apparatus and system
CN110364260A (en) * 2019-07-05 2019-10-22 昆山杜克大学 Autism earlier evaluations apparatus and system based on indicative language paradigm
CN110313923A (en) * 2019-07-05 2019-10-11 昆山杜克大学 Autism early screening system based on joint ability of attention test and audio-video behavioural analysis
CN110353703B (en) * 2019-07-05 2021-11-09 昆山杜克大学 Autism assessment device and system based on parrot tongue learning language model behavior analysis
CN110363129B (en) * 2019-07-05 2022-05-27 昆山杜克大学 Early autism screening system based on smiling paradigm and audio-video behavior analysis
CN114387678A (en) * 2022-01-11 2022-04-22 凌云美嘉(西安)智能科技有限公司 Method and apparatus for evaluating language readability using non-verbal body symbols
CN116665310A (en) * 2023-07-28 2023-08-29 中日友好医院(中日友好临床医学研究所) Method and system for identifying and classifying tic disorder based on weak supervision learning
CN116665310B (en) * 2023-07-28 2023-11-03 中日友好医院(中日友好临床医学研究所) Method and system for identifying and classifying tic disorder based on weak supervision learning
CN117473304A (en) * 2023-12-28 2024-01-30 天津大学 Multi-mode image labeling method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN109431523B (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN109431523A (en) Autism primary screening apparatus based on asocial's sonic stimulation behavior normal form
CN108491077A (en) A kind of surface electromyogram signal gesture identification method for convolutional neural networks of being divided and ruled based on multithread
CN110349667B (en) Autism assessment system combining questionnaire and multi-modal model behavior data analysis
CN110188615B (en) Facial expression recognition method, device, medium and system
Van Dyck et al. Comparing object recognition in humans and deep convolutional neural networks—an eye tracking study
CN110119672A (en) A kind of embedded fatigue state detection system and method
CN102426652A (en) Traditional Chinese medicine face color identifying and retrieving method based on image analysis
CN112472048B (en) Method for realizing neural network for identifying pulse condition of cardiovascular disease patient
CN107007257A (en) The automatic measure grading method and apparatus of the unnatural degree of face
CN109063572A (en) It is a kind of based on multiple dimensioned and multireel lamination Fusion Features fingerprint activity test methods
Chen et al. Smg: A micro-gesture dataset towards spontaneous body gestures for emotional stress state analysis
Kavitha et al. Hierarchical classifier for soft and hard exudates detection of retinal fundus images
CN109381200B (en) Primary screening device for autism based on separation and reunion normal form and computer vision
CN110427881A (en) The micro- expression recognition method of integration across database and device based on the study of face local features
Li et al. Research on leamer's emotion recognition for intelligent education system
Ma et al. A novel two-stage deep method for mitosis detection in breast cancer histology images
Zhang et al. A human-in-the-loop deep learning paradigm for synergic visual evaluation in children
Asare et al. Detection of anaemia using medical images: A comparative study of machine learning algorithms–A systematic literature review
CN110364260A (en) Autism earlier evaluations apparatus and system based on indicative language paradigm
CN110473176A (en) Image processing method and device, method for processing fundus images, electronic equipment
CN113988135A (en) Electromyographic signal gesture recognition method based on double-branch multi-stream network
Vashishth et al. Exploring the role of computer vision in human emotion recognition: a systematic review and meta-analysis
Hnoohom et al. Comparison of ensemble learning algorithms for cataract detection from fundus images
Maheshwari et al. Performance analysis of mango leaf disease using machine learning technique
CN109711306B (en) Method and equipment for obtaining facial features based on deep convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant