CN116805272A - Visual education teaching analysis method, system and storage medium - Google Patents

Visual education teaching analysis method, system and storage medium

Info

Publication number
CN116805272A
Authority
CN
China
Prior art keywords: deaf, mute, lip, questions, time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211339865.1A
Other languages
Chinese (zh)
Other versions
CN116805272B (en)
Inventor
陈全军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing East China Normal University Educational Technology Research Institute
Original Assignee
Wuhan Xingjixue Education Consulting Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Xingjixue Education Consulting Co., Ltd.
Priority to CN202211339865.1A
Publication of CN116805272A
Application granted
Publication of CN116805272B
Legal status: Active
Anticipated expiration


Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical field of visual education and teaching, and in particular discloses a visual education and teaching analysis method, system and storage medium. The method analyzes not only the sign language expression text of deaf-mute students but also their lip language expression text, which overcomes the single analysis dimension of the prior art and provides strong data support for analyzing the quality of the answers given by deaf-mute students.

Description

Visual education teaching analysis method, system and storage medium
Technical Field
The invention relates to the technical field of visual education and teaching, in particular to a visual education and teaching analysis method, a visual education and teaching analysis system and a storage medium.
Background
With the development of society, science and technology, people's requirements on their cultural level keep rising, and education is a way of raising it, so the state advocates education for all and invests heavily in education infrastructure, and the form of education has changed accordingly. In recent years, online teaching has become widespread: it is not limited by time or place, it makes communication between students and teachers convenient, and it allows knowledge points that students did not understand to be reviewed and replayed, so more and more teachers choose online teaching. However, some students lack hearing or speaking ability. If the quality of the answers that deaf-mute students give in class cannot be assessed reliably, the teacher's judgment of how well those students have mastered the questions is affected, which is not conducive to the deaf-mute students' online learning. The quality of deaf-mute students' answers therefore needs to be analyzed.
The quality analysis of existing deaf-mute students' answers has the following defects: (1) Most existing analyses rely on either the sign language expression text or the lip language expression text of the deaf-mute student alone, so the analysis dimension is single. On the one hand, the sign language expression text may be recognized inaccurately yet still be taken as the student's expression, so the quality of the answer cannot be truly reflected; on the other hand, the lip language expression text may be recognized inaccurately yet still be taken as the student's expression, so the reliability of the quality analysis is low. In sum, the analysis methods of the prior art cannot provide strong data support for analyzing the quality of deaf-mute students' answers.
(2) The prior art mostly analyzes the sign language expression text according to the hand shape of the deaf-mute student. However, hand size, hand shape and hand thickness differ from student to student, so the accuracy of the sign language expression text analysis is low and the analysis lacks pertinence, which affects the subsequent use of the analysis results and reduces the accuracy of the answer quality analysis.
Disclosure of Invention
In order to overcome the defects described in the background, embodiments of the invention provide a visual education teaching analysis method, system and storage medium, which can effectively solve the problems mentioned above.
The aim of the invention is achieved by the following technical scheme. A first aspect of the invention provides a visual education teaching analysis method, which comprises the following steps: S1, collecting hand actions and lip actions of a deaf-mute student: collecting, through a mobile device, the hand action video and the lip action video of the deaf-mute student each time the student answers a question.
S2, analyzing the expression texts of the deaf-mute student: based on the collected hand action video and lip action video for each answer, analyzing the sign language expression text and the lip language expression text corresponding to each answer.
S3, analyzing the matching degree of the deaf-mute student's expression texts: based on the sign language expression text and the lip language expression text for each answer, analyzing the matching degree between the two, and further determining the standard expression language corresponding to each answer.
S4, analyzing the quality of the deaf-mute student's answers: evaluating the quality coefficient of each answer according to the standard expression language for that answer and the standard answer.
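For orientation, the following is a minimal, self-contained Python sketch of how steps S2 to S4 could be chained for one answer. All function names, the keyword extraction, the threshold value and the fall-back to the lip text in place of the full S322 rule are illustrative assumptions, not the patent's own formulas.

```python
# Illustrative sketch of the S2-S4 pipeline for a single answer (all rules assumed).

def keyword_set(text: str) -> set[str]:
    # Stand-in keyword extraction: split on whitespace.
    return set(text.lower().split())

def match_degree(sign_text: str, lip_text: str) -> float:
    # Assumed S3 comparison: overlap ratio of the two keyword sets.
    e, f = keyword_set(sign_text), keyword_set(lip_text)
    return len(e & f) / max(len(e | f), 1)

def quality_coefficient(standard_language: str, standard_answer: str) -> float:
    # Assumed S4 scoring: share of standard-answer keywords covered by the answer.
    qa, da = keyword_set(standard_language), keyword_set(standard_answer)
    return len(qa & da) / max(len(da), 1)

def analyze_answer(sign_text: str, lip_text: str, standard_answer: str,
                   threshold: float = 0.6) -> float:
    # S3: if the sign and lip texts agree well enough, the sign text is the
    # standard expression language; the fall-back here simplifies S322.
    standard_language = sign_text if match_degree(sign_text, lip_text) >= threshold else lip_text
    # S4: evaluate the answer quality against the stored standard answer.
    return quality_coefficient(standard_language, standard_answer)

if __name__ == "__main__":
    print(analyze_answer("triangle area equals half base times height",
                         "triangle area is half base times height",
                         "area of a triangle is half of base times height"))
```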
In one possible design, the sign language expression text corresponding to each answer of the deaf-mute student is analyzed as follows. S211: the hand action video corresponding to each sign language expression text is extracted from the cloud database and divided into hand action sub-pictures according to a preset number of video frames, giving the hand action sub-pictures corresponding to each sign language expression text.
S212: the hand contour is extracted from each hand action sub-picture corresponding to each sign language expression text, and the joint points of that hand contour are obtained.
S213: the hand contour lines meeting at each joint point are connected, and the bending angle of each joint point of the hand contour in each hand action sub-picture corresponding to each sign language expression text is obtained and marked as TQ_pjx, where p = 1, 2, ..., q is the number of each sign language expression text, j = 1, 2, ..., k is the number of each hand action sub-picture, and x = 1, 2, ..., y is the number of each joint point.
S214: the method comprises the steps of uniformly dividing collected hand action videos of the deaf-mute students when answering questions for each time into hand action videos of each section, dividing each section of hand action video into hand pictures according to set video frame numbers, marking the hand pictures as pictures to be analyzed, and further obtaining pictures to be analyzed, to which each section of hand action videos belong, when the deaf-mute students answer questions for each time.
S215: similarly, the bending angles of all joint points of hand outlines in all pictures to be analyzed, to which all hand action videos belong, of the deaf-mute students when answering questions for each time are obtained, and the bending angles are marked as WQ iamx Where i is a number when each question is answered, i=1, 2,..n, a is a number of each hand action video, a=1, 2,..b, m is a number of each picture to be analyzed, m=1, 2,..l.
S216: comparing the bending angle of each joint point of the hand outline in each picture to be analyzed, which each section of hand action video belongs to when the deaf-mute student answers the questions, with the bending angle of each joint point of the hand outline, which each hand action sub-picture belongs to, corresponding to each sign language expression text, so as to analyze the matching degree of each picture to be analyzed, which each section of hand action video belongs to when the deaf-mute student answers the questions, and each hand action sub-picture corresponding to each sign language expression text, wherein the calculation formula is as follows:wherein->And the matching degree of the mth picture to be analyzed, which belongs to the mth hand motion video, and the jth hand motion sub picture corresponding to the p sign language expression text when the deaf-mute student answers the question for the ith time is expressed, and y is expressed as the number of the articulation points.
S217: each section of hand action video of each deaf-mute student when answering questions each time is subjected to each waitingThe matching degree of each hand action sub-picture corresponding to the analysis picture and each sign language expression text is respectively compared with a preset matching degree threshold value, so that the number of target pictures of each hand action video when the deaf-mute student answers the questions each time is counted, and the target pictures are marked as SL ia
S218: counting the total number of hand action sub-pictures corresponding to each sign language expression text, and analyzing the matching coefficient of each hand action video and each sign language expression text of each deaf-mute student when each answer questions by combining the number of target pictures of each hand action video when each deaf-mute student answers the questions, the matching degree of each picture to be analyzed, which each hand action video belongs to, and each hand action sub-picture corresponding to each sign language expression text when each deaf-mute student answers the questions, wherein the calculation formula is as follows: Wherein->The matching coefficient of the a-th hand motion video and the p-th sign language expression text when the i-th question is answered by the deaf-mute student is represented, SS p The total number of the hand action sub-pictures corresponding to the p-th sign language expression text is represented, AD is represented as a preset matching degree threshold value, k is represented as the number of the hand action sub-pictures, and l is represented as the number of the pictures to be analyzed.
S219: based on the matching coefficients of the hand action videos and the sign language expression texts when the deaf-mute students answer the questions each time, the sign language expression text of the maximum matching coefficient corresponding to the hand action videos of the deaf-mute students when the questions are answered each time is screened, and the sign language expression text is used as the sign language expression text of the hand action videos of the deaf-mute students when the questions are answered each time.
S2110: and splicing sign language expression texts of the hand action videos of each section when the deaf-mute student answers the questions each time, so as to obtain the corresponding sign language expression texts when the deaf-mute student answers the questions each time.
In one possible design, the lip language expression text corresponding to each answer of the deaf-mute student is analyzed as follows. S221: the lip action video corresponding to each lip expression text is extracted from the cloud database and divided into lip action sub-pictures according to a preset number of video frames, giving the lip action sub-pictures corresponding to each lip expression text.
S222: and extracting lip outlines from lip action sub-pictures corresponding to the lip expression texts.
S223: the lip action videos acquired when the deaf-mute student answers the questions for each time are uniformly divided into lip action videos of each section, the lip action videos of each section are divided into lip action pictures according to the set video frame number, the lip action pictures are marked as pictures to be analyzed, and then the pictures to be analyzed, to which the lip action videos of each section belong, of the deaf-mute student answers the questions for each time are obtained.
S224: and (3) carrying out equal proportion processing on the lip outline in each picture to be analyzed, to which each section of lip action video belongs, when the deaf-mutes answer the questions each time.
S225: and (3) overlapping and comparing the lip outline in each picture to be analyzed, which each section of lip action video belongs to when the deaf-mute student answers the questions for each time after the equal proportion processing, with the lip outline in each lip action sub-picture corresponding to each lip expression text.
S226: acquiring the overlapping area of the lip outline in each picture to be analyzed, which is attributed to each section of lip action video, and the lip outline in each lip action sub-picture corresponding to each lip expression text when the deaf-mute student answers the questions in each time after the equal proportion processing, and marking the overlapping area as Wherein a is denoted as the number of each lip motion video, a=1, 2,..m, B, M is denoted as the number of each picture to be parsed, m=1, 2,..l, U is denoted as the number of each lip expression text, u=1, 2,..r, J is denoted as the number of each lip motion sub-picture, j=1, 2,..k.
Obtaining lip contour areas in each picture to be analyzed to which each section of lip action video belongs when the deaf-mute students answer questions each time after the equal proportion processing,and marks it as MJ UJ
Therefore, the matching coefficient of each section of lip action video and each lip expression text when the deaf-mute student answers the questions each time is analyzed, and the calculation formula is as follows:wherein->The matching coefficient of the A-th section lip motion video and the U-th lip expression text when the deaf-mute student answers the question for the i-th time is represented, and L, K is represented as the number of pictures to be analyzed and the number of lip motion sub-pictures respectively.
S227: based on the matching coefficient of each section of lip motion video and each lip expression text when the deaf-mute student answers the questions each time, the lip expression text of each section of lip motion video corresponding to the largest matching coefficient when the deaf-mute student answers the questions each time is screened, and the lip expression text is used as the lip expression text of each section of lip motion video when the deaf-mute student answers the questions each time.
S228: and splicing lip expression texts of the lip action videos of each section when the deaf-mute student answers the questions each time, so as to obtain the corresponding lip expression texts when the deaf-mute student answers the questions each time.
In one possible design, the matching degree between the sign language expression text and the lip language expression text for each answer of the deaf-mute student is analyzed as follows. S311: according to the sign language expression text and the lip language expression text for each answer, a sign language expression text keyword set and a lip language expression text keyword set are constructed for each answer and marked as E_i and F_i respectively.
S312: the sign language expression text keyword set and the lip language expression text keyword set for each answer are matched and compared, and from this the matching degree between the sign language expression text and the lip language expression text for the i-th answer of the deaf-mute student is analyzed.
In one possible design, the standard expression language corresponding to each answer of the deaf-mute student is analyzed as follows. S321: the matching degree between the sign language expression text and the lip language expression text for each answer is compared with a preset matching degree threshold; if the matching degree for an answer is greater than or equal to the threshold, the sign language expression text for that answer is taken as the standard expression language for that answer.
S322: if the matching degree between the sign language expression text and the lip language expression text for a certain answer is smaller than the threshold, the following analysis is carried out: the matching coefficients of each hand action video segment with each sign language expression text and of each lip action video segment with each lip expression text for that answer are obtained, the average matching coefficient of the hand action videos with the sign language expression texts and the average matching coefficient of the lip action videos with the lip expression texts are analyzed from them, and the standard expression language for that answer is determined accordingly.
S323: the standard expression language corresponding to each answer of the deaf-mute student is thus obtained.
In one possible design, the quality coefficient corresponding to each answer of the deaf-mute student is analyzed as follows: according to the standard expression language for each answer and the standard answer for each question stored in the cloud database, a standard expression language keyword set and a standard answer keyword set are constructed for each answer and marked as QA_i and DA_i respectively, and from them the quality coefficient ZL_i corresponding to the i-th answer of the deaf-mute student is analyzed.
A second aspect of the invention provides a visual education teaching analysis system, comprising: a deaf-mute student hand action and lip action acquisition module, which collects, through a mobile device, the hand action video and the lip action video of the deaf-mute student each time the student answers a question.
Deaf-mute student expression text analysis module: and respectively analyzing corresponding sign language expression text and lip language expression text of the deaf-mute students when answering the questions for each time based on the collected hand action videos and lip action videos of the deaf-mute students when answering the questions for each time.
The deaf-mute student expresses the text matching degree analysis module: and analyzing the matching degree of the sign language expression text and the lip language expression text when the deaf-mute student answers the questions for each time based on the sign language expression text and the lip language expression text corresponding to the deaf-mute student when answering the questions for each time, and further analyzing the standard expression language corresponding to the deaf-mute student when answering the questions for each time.
The quality analysis module of the questions answered by the deaf-mute students: and evaluating the quality coefficient corresponding to each time of questions answering by the deaf-mute student according to the standard expression language and the standard answer corresponding to each time of questions answering by the deaf-mute student.
Cloud database: and storing hand action videos corresponding to the sign language expression texts, storing lip action videos corresponding to the lip language expression texts, and storing standard answers when answering the questions for each time.
A third aspect of the invention provides a visual education teaching analysis storage medium on which a computer program is recorded; when the computer program runs in the memory of a server, the visual education teaching analysis method described above is implemented.
Compared with the prior art, the embodiments of the invention have at least the following advantages or beneficial effects. (1) When analyzing the quality of the deaf-mute student's answers, not only the sign language expression text but also the lip language expression text of the student is analyzed, overcoming the single analysis dimension of the prior art. By further analyzing the similarity between the sign language expression text and the lip language expression text, inaccurate sign language or lip language recognition can be detected, which reduces the error rate of taking a single sign language or lip language expression text as the student's expression and improves the reliability of the answer quality analysis. In sum, the method can provide strong data support for analyzing the quality of deaf-mute students' answers.
(2) When analyzing the sign language expression text of the deaf-mute student, the bending angles of the student's hand joint points are analyzed, which avoids the inaccuracy caused by differences in hand size, shape and thickness between students. This improves the accuracy of the sign language expression text analysis, solves the lack of pertinence in the prior art, lays a solid foundation for the subsequent use of the analysis results, and improves the accuracy of the answer quality analysis.
Drawings
The invention will be further described with reference to the accompanying drawings, in which embodiments do not constitute any limitation of the invention, and other drawings can be obtained by one of ordinary skill in the art without inventive effort from the following drawings.
FIG. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of the module connection of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a first aspect of the invention provides a visual education teaching analysis method comprising the following steps. S1, collecting hand actions and lip actions of a deaf-mute student: collecting, through a mobile device, the hand action video and the lip action video of the deaf-mute student each time the student answers a question.
S2, analyzing the expression texts of the deaf-mute student: based on the collected hand action video and lip action video for each answer, analyzing the sign language expression text and the lip language expression text corresponding to each answer.
In a specific embodiment of the invention, the sign language expression text corresponding to each answer of the deaf-mute student is analyzed as follows. S211: the hand action video corresponding to each sign language expression text is extracted from the cloud database and divided into hand action sub-pictures according to a preset number of video frames, giving the hand action sub-pictures corresponding to each sign language expression text.
S212: the hand contour is extracted from each hand action sub-picture corresponding to each sign language expression text, and the joint points of that hand contour are obtained.
S213: the hand contour lines meeting at each joint point are connected, and the bending angle of each joint point of the hand contour in each hand action sub-picture corresponding to each sign language expression text is obtained and marked as TQ_pjx, where p = 1, 2, ..., q is the number of each sign language expression text, j = 1, 2, ..., k is the number of each hand action sub-picture, and x = 1, 2, ..., y is the number of each joint point.
S214: the method comprises the steps of uniformly dividing collected hand action videos of the deaf-mute students when answering questions for each time into hand action videos of each section, dividing each section of hand action video into hand pictures according to set video frame numbers, marking the hand pictures as pictures to be analyzed, and further obtaining pictures to be analyzed, to which each section of hand action videos belong, when the deaf-mute students answer questions for each time.
S215: similarly, the bending angles of all joint points of hand outlines in all pictures to be analyzed, to which all hand action videos belong, of the deaf-mute students when answering questions for each time are obtained, and the bending angles are marked as WQ iamx Where i is a number when each question is answered, i=1, 2,..n, a is a number of each hand action video, a=1, 2,..b, m is a number of each picture to be analyzed, m=1, 2,..l.
S216: comparing the bending angle of each joint point of the hand outline in each picture to be analyzed, which each section of hand action video belongs to when the deaf-mute student answers the questions, with the bending angle of each joint point of the hand outline, which each hand action sub-picture belongs to, corresponding to each sign language expression text, so as to analyze the matching degree of each picture to be analyzed, which each section of hand action video belongs to when the deaf-mute student answers the questions, and each hand action sub-picture corresponding to each sign language expression text, wherein the calculation formula is as follows:wherein->And the matching degree of the mth picture to be analyzed, which belongs to the mth hand motion video, and the jth hand motion sub picture corresponding to the p sign language expression text when the deaf-mute student answers the question for the ith time is expressed, and y is expressed as the number of the articulation points.
S217: dividing each waiting point of each hand action video when each deaf-mute student answers questionsComparing the matching degree of each hand action sub-picture corresponding to the analysis picture and each sign language expression text with a preset matching degree threshold value respectively, further counting the number of target pictures to which each segment of hand action video belongs when the deaf-mute student answers the questions each time, and marking the target pictures as SL ia
It should be noted that if the matching degree of a picture to be analyzed, to which a certain hand action video belongs, and a hand action sub-picture corresponding to a sign language when a deaf-mute student answers a question for a certain time is greater than or equal to the matching degree threshold of the picture to be analyzed, to which the hand action video belongs, and the hand sub-picture corresponding to the sign language when the deaf-mute student answers the question for a certain time, the picture to be analyzed, to which the hand action video belongs, is marked as a target picture when the deaf-mute student answers the question for the certain time.
S218: counting the total number of hand action sub-pictures corresponding to each sign language expression text, and analyzing the matching coefficient of each hand action video and each sign language expression text of each deaf-mute student when each answer questions by combining the number of target pictures of each hand action video when each deaf-mute student answers the questions, the matching degree of each picture to be analyzed, which each hand action video belongs to, and each hand action sub-picture corresponding to each sign language expression text when each deaf-mute student answers the questions, wherein the calculation formula is as follows:wherein->The matching coefficient of the a-th hand motion video and the p-th sign language expression text when the i-th question is answered by the deaf-mute student is represented, SS p The total number of the hand action sub-pictures corresponding to the p-th sign language expression text is represented, AD is represented as a preset matching degree threshold value, k is represented as the number of the hand action sub-pictures, and l is represented as the number of the pictures to be analyzed.
S219: based on the matching coefficients of the hand action videos and the sign language expression texts when the deaf-mute students answer the questions each time, the sign language expression text of the maximum matching coefficient corresponding to the hand action videos of the deaf-mute students when the questions are answered each time is screened, and the sign language expression text is used as the sign language expression text of the hand action videos of the deaf-mute students when the questions are answered each time.
S2110: and splicing sign language expression texts of the hand action videos of each section when the deaf-mute student answers the questions each time, so as to obtain the corresponding sign language expression texts when the deaf-mute student answers the questions each time.
When analyzing the sign language expression text of the deaf-mute student, the bending angles of the student's hand joint points are analyzed, which avoids the inaccuracy caused by differences in hand size, shape and thickness between students. This improves the accuracy of the sign language expression text analysis, solves the lack of pertinence in the prior art, lays a solid foundation for the subsequent use of the analysis results, and improves the accuracy of the answer quality analysis.
In a specific embodiment of the invention, the lip expression text corresponding to each answer of the deaf-mute student is analyzed as follows. S221: the lip action video corresponding to each lip expression text is extracted from the cloud database and divided into lip action sub-pictures according to a preset number of video frames, giving the lip action sub-pictures corresponding to each lip expression text.
S222: and extracting lip outlines from lip action sub-pictures corresponding to the lip expression texts.
S223: the lip action videos acquired when the deaf-mute student answers the questions for each time are uniformly divided into lip action videos of each section, the lip action videos of each section are divided into lip action pictures according to the set video frame number, the lip action pictures are marked as pictures to be analyzed, and then the pictures to be analyzed, to which the lip action videos of each section belong, of the deaf-mute student answers the questions for each time are obtained.
S224: and (3) carrying out equal proportion processing on the lip outline in each picture to be analyzed, to which each section of lip action video belongs, when the deaf-mutes answer the questions each time.
The equal-proportion scaling of the lip contour in each picture to be analyzed is carried out as follows: the lip contour is extracted from each picture to be analyzed belonging to each lip action video segment for each answer of the deaf-mute student and compared with the lip contour in each lip action sub-picture corresponding to each lip expression text. If the lip contour in a picture to be analyzed is larger than the lip contour in the corresponding lip action sub-picture, the lip contour in that picture to be analyzed is reduced in equal proportion; if it is smaller, the lip contour in that picture to be analyzed is enlarged in equal proportion.
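A sketch of the equal-proportion scaling in S224, under the assumption that a lip contour is stored as polygon vertices and that "larger" or "smaller" is judged by the enclosed area; the contour is scaled about its centroid until its area equals the reference contour's area.

```python
import math

def scale_contour_to_area(contour: list[tuple[float, float]], target_area: float):
    """Scale a lip contour (non-degenerate polygon) about its centroid so that its
    area matches the reference lip contour area (assumed reading of S224)."""
    # Shoelace formula for the polygon area.
    area = abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(contour, contour[1:] + contour[:1]))) / 2.0
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    s = math.sqrt(target_area / area)  # enlarge if smaller, shrink if larger
    return [(cx + (x - cx) * s, cy + (y - cy) * s) for x, y in contour]

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]   # area 4
print(scale_contour_to_area(square, 16.0))                   # scaled to area 16
```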
S225: and (3) overlapping and comparing the lip outline in each picture to be analyzed, which each section of lip action video belongs to when the deaf-mute student answers the questions for each time after the equal proportion processing, with the lip outline in each lip action sub-picture corresponding to each lip expression text.
S226: acquiring the overlapping area of the lip outline in each picture to be analyzed, which is attributed to each section of lip action video, and the lip outline in each lip action sub-picture corresponding to each lip expression text when the deaf-mute student answers the questions in each time after the equal proportion processing, and marking the overlapping area asWherein a is denoted as the number of each lip motion video, a=1, 2,..m, B, M is denoted as the number of each picture to be parsed, m=1, 2,..l, U is denoted as the number of each lip expression text, u=1, 2,..r, J is denoted as the number of each lip motion sub-picture, j=1, 2,..k.
Obtaining lip contour areas in each picture to be analyzed, to which each section of lip action video belongs, of each section of lip action video when each deaf-mute student answers questions after equal proportion processing, and marking the lip contour areas as MJ UJ
Therefore, the matching coefficient of each section of lip action video and each lip expression text when the deaf-mute student answers the questions each time is analyzed, and the calculation formula is as follows:wherein->The matching coefficient of the A-th section lip motion video and the U-th lip expression text when the deaf-mute student answers the question for the i-th time is represented, and L, K is represented as the number of pictures to be analyzed and the number of lip motion sub-pictures respectively.
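The matching coefficient formula in S226 is not legible above; the sketch below assumes it averages, over the L pictures to be analyzed and the K lip action sub-pictures, the ratio of the overlapping area to the scaled lip contour area MJ_UJ, with contours represented as sets of occupied pixels after scaling. Both the representation and the averaging rule are assumptions.

```python
def lip_match_coefficient(analysed_masks: list[set], reference_masks: list[set]) -> float:
    """Assumed matching coefficient between one lip action video segment (L scaled
    pictures to be analyzed, each a set of occupied pixels) and one lip expression
    text (K lip action sub-pictures): overlap area with each sub-picture divided by
    the scaled lip contour area MJ_UJ, averaged over all L*K pairs."""
    total = 0.0
    for m in analysed_masks:            # L pictures to be analyzed
        for j in reference_masks:       # K lip action sub-pictures
            total += len(m & j) / max(len(m), 1)   # overlapping area / MJ_UJ
    return total / max(len(analysed_masks) * len(reference_masks), 1)

# Toy example: 2 scaled pictures to be analyzed and 2 reference sub-pictures.
a = [{(0, 0), (0, 1), (1, 0)}, {(0, 0), (1, 1)}]
r = [{(0, 0), (0, 1)}, {(1, 1), (1, 0)}]
print(round(lip_match_coefficient(a, r), 3))
```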
S227: based on the matching coefficient of each section of lip motion video and each lip expression text when the deaf-mute student answers the questions each time, the lip expression text of each section of lip motion video corresponding to the largest matching coefficient when the deaf-mute student answers the questions each time is screened, and the lip expression text is used as the lip expression text of each section of lip motion video when the deaf-mute student answers the questions each time.
S228: and splicing lip expression texts of the lip action videos of each section when the deaf-mute student answers the questions each time, so as to obtain the corresponding lip expression texts when the deaf-mute student answers the questions each time.
In this way, inaccurate sign language or lip language recognition can be effectively detected, the error rate of taking a single sign language or lip language expression text as the deaf-mute student's expression is reduced, the reliability of the answer quality analysis is improved, and strong data support is provided for analyzing the quality of the deaf-mute student's answers.
S3, analyzing the matching degree of the deaf-mute student's expression texts: based on the sign language expression text and the lip language expression text for each answer, analyzing the matching degree between the two, and further determining the standard expression language corresponding to each answer.
In a specific embodiment of the invention, the matching degree between the sign language expression text and the lip language expression text for each answer of the deaf-mute student is analyzed as follows. S311: according to the sign language expression text and the lip language expression text for each answer, a sign language expression text keyword set and a lip language expression text keyword set are constructed for each answer and marked as E_i and F_i respectively.
S312: the sign language expression text keyword set and the lip language expression text keyword set for each answer are matched and compared, and from this the matching degree between the sign language expression text and the lip language expression text for the i-th answer of the deaf-mute student is analyzed.
In a specific embodiment of the invention, the standard expression language corresponding to each answer of the deaf-mute student is analyzed as follows. S321: the matching degree between the sign language expression text and the lip language expression text for each answer is compared with a preset matching degree threshold; if the matching degree for an answer is greater than or equal to the threshold, the sign language expression text for that answer is taken as the standard expression language for that answer.
S322: if the matching degree between the sign language expression text and the lip language expression text for a certain answer is smaller than the threshold, the following analysis is carried out: the matching coefficients of each hand action video segment with each sign language expression text and of each lip action video segment with each lip expression text for that answer are obtained, the average matching coefficient of the hand action videos with the sign language expression texts and the average matching coefficient of the lip action videos with the lip expression texts are analyzed from them, and the standard expression language for that answer is determined accordingly.
It should be noted that the average matching coefficient of the hand action videos with the sign language expression texts for this answer, denoted PJ, is obtained by averaging the matching coefficients of the a-th hand action video segment with the p-th sign language expression text over all segments and texts, where b and q denote the number of hand action video segments and the number of sign language expression texts respectively.
Likewise, the average matching coefficient of the lip action videos with the lip expression texts for this answer, denoted PA, is obtained by averaging the matching coefficients of the A-th lip action video segment with the U-th lip expression text over all segments and texts, where B and R denote the number of lip action video segments and the number of lip expression texts respectively.
It should be noted that the standard expression language for this answer is determined as follows: the average matching coefficient PJ of the hand action videos with the sign language expression texts is compared with the average matching coefficient PA of the lip action videos with the lip expression texts. If PJ is greater than or equal to PA, the sign language expression text for this answer is taken as the standard expression language; otherwise, the lip expression text for this answer is taken as the standard expression language.
S323: and obtaining a standard expression language corresponding to each time of answering the questions by the deaf-mute students.
S4, analyzing the quality of the deaf-mute student's answers: evaluating the quality coefficient of each answer according to the standard expression language for that answer and the standard answer.
In a specific embodiment of the invention, the quality coefficient corresponding to each answer of the deaf-mute student is analyzed as follows: according to the standard expression language for each answer and the standard answer for each question stored in the cloud database, a standard expression language keyword set and a standard answer keyword set are constructed for each answer and marked as QA_i and DA_i respectively, and from them the quality coefficient ZL_i corresponding to the i-th answer of the deaf-mute student is analyzed.
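The formula for ZL_i is not legible above; the sketch below assumes the quality coefficient is the share of standard-answer keywords DA_i that also appear in the standard expression language keyword set QA_i. The example keyword sets are illustrative only.

```python
def quality_coefficient(qa_i: set[str], da_i: set[str]) -> float:
    """Assumed ZL_i: fraction of standard-answer keywords DA_i that also appear in
    the standard expression language keyword set QA_i (the patent's exact formula
    is not reproduced in this text)."""
    return len(qa_i & da_i) / len(da_i) if da_i else 0.0

qa = {"photosynthesis", "light", "chlorophyll", "oxygen"}
da = {"photosynthesis", "light", "chlorophyll", "water", "oxygen"}
print(quality_coefficient(qa, da))   # 4 of 5 standard-answer keywords covered -> 0.8
```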
Referring to fig. 2, a second aspect of the invention provides a visual education teaching analysis system, comprising: a deaf-mute student hand action and lip action acquisition module, which collects, through a mobile device, the hand action video and the lip action video of the deaf-mute student each time the student answers a question.
Deaf-mute student expression text analysis module: and respectively analyzing corresponding sign language expression text and lip language expression text of the deaf-mute students when answering the questions for each time based on the collected hand action videos and lip action videos of the deaf-mute students when answering the questions for each time.
The deaf-mute student expresses the text matching degree analysis module: and analyzing the matching degree of the sign language expression text and the lip language expression text when the deaf-mute student answers the questions for each time based on the sign language expression text and the lip language expression text corresponding to the deaf-mute student when answering the questions for each time, and further analyzing the standard expression language corresponding to the deaf-mute student when answering the questions for each time.
The quality analysis module of the questions answered by the deaf-mute students: and evaluating the quality coefficient corresponding to each time of questions answering by the deaf-mute student according to the standard expression language and the standard answer corresponding to each time of questions answering by the deaf-mute student.
Cloud database: and storing hand action videos corresponding to the sign language expression texts, storing lip action videos corresponding to the lip language expression texts, and storing standard answers when answering the questions for each time.
The deaf-mute student hand action and lip action acquisition module is connected with the deaf-mute student expression text analysis module; the deaf-mute student expression text analysis module is connected with the deaf-mute student expression text matching degree analysis module; the deaf-mute student expression text matching degree analysis module is connected with the deaf-mute student answer question quality analysis module; and the cloud database is connected with the deaf-mute student expression text analysis module and the deaf-mute student answer question quality analysis module respectively.
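A minimal object sketch of the module connections just described. Class and method names mirror the module names but are otherwise hypothetical placeholders, not the patent's implementation.

```python
# Illustrative wiring of the five modules; all internals are placeholders.

class CloudDatabase:
    def __init__(self):
        self.sign_videos, self.lip_videos, self.standard_answers = {}, {}, {}

class ExpressionTextAnalysis:
    def __init__(self, db: CloudDatabase):
        self.db = db                          # cloud database supplies reference videos
    def run(self, hand_video, lip_video):
        return "sign text", "lip text"        # placeholder recognition result

class MatchDegreeAnalysis:
    def run(self, sign_text, lip_text):
        return sign_text                      # placeholder standard expression language

class AnswerQualityAnalysis:
    def __init__(self, db: CloudDatabase):
        self.db = db                          # cloud database supplies standard answers
    def run(self, standard_language, question_id):
        return 1.0                            # placeholder quality coefficient

class AcquisitionModule:
    def capture(self):
        return b"hand-video", b"lip-video"    # placeholder mobile-device capture

# Wiring: acquisition -> text analysis -> match degree -> answer quality,
# with the cloud database attached to text analysis and answer quality.
db = CloudDatabase()
acquisition = AcquisitionModule()
text_analysis = ExpressionTextAnalysis(db)
match_analysis = MatchDegreeAnalysis()
quality_analysis = AnswerQualityAnalysis(db)

hand, lip = acquisition.capture()
sign, lips = text_analysis.run(hand, lip)
std = match_analysis.run(sign, lips)
print(quality_analysis.run(std, question_id=1))
```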
A third aspect of the invention provides a visual education teaching analysis storage medium on which a computer program is recorded; when the computer program runs in the memory of a server, the visual education teaching analysis method described above is implemented.
The foregoing is merely illustrative of the invention, and various modifications, additions and substitutions can be made by those skilled in the art to the described embodiments without departing from the scope of the invention as defined in the accompanying claims.

Claims (8)

1. A visual education teaching analysis method, characterized in that the method comprises the following steps:
s1, collecting hand actions and lip actions of a deaf-mute student: acquiring hand action videos and lip action videos of the deaf-mute students when the deaf-mute students answer questions each time through mobile equipment;
s2, analysis of expression text of the deaf-mute students: based on collected hand action videos and lip action videos when the deaf-mute students answer questions for each time, respectively analyzing corresponding sign language expression texts and lip language expression texts when the deaf-mute students answer questions for each time;
S3, analyzing the text matching degree of the expression of the deaf-mute students: analyzing the matching degree of the sign language expression text and the lip language expression text when the deaf-mute students answer the questions for each time based on the sign language expression text and the lip language expression text corresponding to the deaf-mute students answer the questions for each time, and further analyzing the standard expression language corresponding to the deaf-mute students when the questions are answered for each time;
s4, analyzing the quality of the answer questions of the deaf-mute students: and evaluating the quality coefficient corresponding to each time of questions answering by the deaf-mute student according to the standard expression language and the standard answer corresponding to each time of questions answering by the deaf-mute student.
2. The visual education teaching analysis method according to claim 1, wherein: the specific analysis method of the sign language expression text corresponding to each time of answering the questions by the deaf-mute students comprises the following steps:
s211: extracting hand action videos corresponding to the sign language expression texts from a cloud database, dividing the hand action videos corresponding to the sign language expression texts into hand action sub-pictures according to preset video frame numbers, and further obtaining the hand action sub-pictures corresponding to the sign language expression texts;
s212: extracting hand contours according to the hand action sub-pictures corresponding to the sign language expression texts, and obtaining the joint points of the hand contours of the hand action sub-pictures corresponding to the sign language expression texts;
S213: connecting hand contour lines connected with all joint points of hand contours corresponding to all hand motion sub-images corresponding to all sign language expression texts to obtain bending angles of all joint points of hand contours corresponding to all hand motion sub-images corresponding to all sign language expression texts, and marking the bending angles as TQ pjx Where p=1, 2,..q, j denotes the number of each hand action sub-picture, j=1, 2,..k, x represents the number of each node, x=1, 2,..y;
s214: dividing the collected hand action videos of the deaf-mute students in each time of answering questions into hand action videos of each section, dividing the hand action videos of each section into hand pictures according to the set video frame number, and marking the hand pictures as pictures to be analyzed, so as to obtain pictures to be analyzed, to which the hand action videos of each section of the deaf-mute students belong, in each time of answering questions;
s215: similarly, obtaining the bending angle of each joint point of the hand contour in each picture to be analyzed belonging to each hand action video segment for each answer of the deaf-mute student, marked as WQ_iamx, where i = 1, 2, ..., n is the number of each answer, a = 1, 2, ..., b is the number of each hand action video segment, and m = 1, 2, ..., l is the number of each picture to be analyzed;
S216: bending angles of all the joint points of the hand outline in each picture to be analyzed, which are belonged to each section of hand action video, of the deaf-mutes students when answering the questions each time, and bending of all the joint points of the hand outline, which are belonged to each hand action sub-picture corresponding to each sign language expression textThe angles are compared, so that the matching degree of each hand action sub-picture corresponding to each sign language expression text of each picture to be analyzed which each hand action video belongs to when the deaf-mute student answers the questions each time is analyzed, and the calculation formula is as follows:wherein->The matching degree of the mth picture to be analyzed, which belongs to the mth hand motion video, and the jth hand motion sub picture corresponding to the p sign language expression text when the deaf-mute student answers the question for the ith time is expressed, and y is expressed as the number of the articulation points;
s217: comparing the matching degree of each hand action sub-picture corresponding to each hand action video to each picture to be analyzed and each sign language expression text when the deaf-mute student answers the questions each time with a preset matching degree threshold value, further counting the number of target pictures to which each hand action video belongs when the deaf-mute student answers the questions each time, and marking the target pictures as SL ia
S218: counting the total number of hand action sub-pictures corresponding to each sign language expression text, and analyzing the matching coefficient of each hand action video and each sign language expression text of each deaf-mute student when each answer questions by combining the number of target pictures of each hand action video when each deaf-mute student answers the questions, the matching degree of each picture to be analyzed, which each hand action video belongs to, and each hand action sub-picture corresponding to each sign language expression text when each deaf-mute student answers the questions, wherein the calculation formula is as follows: Wherein->The matching coefficient of the a-th hand motion video and the p-th sign language expression text when the i-th question is answered by the deaf-mute student is represented, SS p Expressed as p-th sign language expression text correspondenceAD is expressed as a preset matching degree threshold, k is expressed as the number of hand action sub-pictures, and l is expressed as the number of pictures to be analyzed;
s219: based on the matching coefficient of each hand action video and each sign language expression text when the deaf-mute student answers the questions each time, screening the sign language expression text of the maximum matching coefficient corresponding to each hand action video when the deaf-mute student answers the questions each time, and taking the sign language expression text as the sign language expression text of each hand action video when the deaf-mute student answers the questions each time;
s2110: and splicing sign language expression texts of the hand action videos of each section when the deaf-mute student answers the questions each time, so as to obtain the corresponding sign language expression texts when the deaf-mute student answers the questions each time.
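The matching-degree and matching-coefficient formulas of steps S216 and S218 appear in the source only as images, so just the variable definitions survive. The Python sketch below is therefore only one plausible reading: the per-frame matching degree is taken as a normalized mean angle difference over the y joint points, and the segment-level coefficient blends the target-picture ratio with the mean matching degree. All function names, the 180-degree normalizer, the 0.5/0.5 weighting and the default threshold AD = 0.8 are illustrative assumptions, not the patent's formulas.

```python
import math
from typing import Sequence

Point = tuple[float, float]

def bend_angle(prev_pt: Point, joint: Point, next_pt: Point) -> float:
    # S213/S215: bending angle (degrees) at a joint point, measured between
    # the two hand-contour segments that meet at that joint.
    v1 = (prev_pt[0] - joint[0], prev_pt[1] - joint[1])
    v2 = (next_pt[0] - joint[0], next_pt[1] - joint[1])
    norm = math.hypot(*v1) * math.hypot(*v2)
    if norm == 0:
        return 0.0
    cos_a = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / norm))
    return math.degrees(math.acos(cos_a))

def frame_match_degree(wq: Sequence[float], tq: Sequence[float]) -> float:
    # S216 (assumed form): matching degree of one picture to be analyzed
    # against one hand action sub-picture, from the y per-joint angle
    # differences, normalized by 180 degrees and mapped into [0, 1].
    y = len(tq)
    return 1.0 - sum(abs(w - t) for w, t in zip(wq, tq)) / (y * 180.0)

def segment_match_coefficient(frames: list[Sequence[float]],
                              template: list[Sequence[float]],
                              ad: float = 0.8) -> float:
    # S217/S218 (assumed form): SL_ia counts target pictures whose best
    # matching degree reaches the threshold AD; the coefficient blends that
    # ratio with the mean matching degree over the segment.
    best = [max(frame_match_degree(f, t) for t in template) for f in frames]
    sl = sum(d >= ad for d in best)
    return 0.5 * sl / len(best) + 0.5 * sum(best) / len(best)

def best_sign_text(frames: list[Sequence[float]],
                   templates: dict[str, list[Sequence[float]]]) -> str:
    # S219: keep the sign language expression text with the largest coefficient.
    return max(templates, key=lambda p: segment_match_coefficient(frames, templates[p]))
```

Splicing per S2110 then amounts to joining, in order, the text returned by best_sign_text for each hand action video segment.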
3. The visual education teaching analysis method according to claim 1, wherein the lip expression text corresponding to each time the deaf-mute student answers a question is analyzed by the following steps:
S221: extracting the lip action video corresponding to each lip expression text from the cloud database and dividing it into lip action sub-pictures according to the preset number of video frames, thereby obtaining the lip action sub-pictures corresponding to each lip expression text;
S222: extracting the lip contour from each lip action sub-picture corresponding to each lip expression text;
S223: uniformly dividing the lip action video collected each time the deaf-mute student answers a question into lip action video segments, dividing each segment into lip action pictures according to the set number of video frames and marking them as pictures to be analyzed, thereby obtaining the pictures to be analyzed belonging to each lip action video segment for each answer;
S224: scaling, in equal proportion, the lip contour in each picture to be analyzed belonging to each lip action video segment for each answer;
S225: superimposing and comparing the scaled lip contour in each picture to be analyzed belonging to each lip action video segment for each answer with the lip contour in each lip action sub-picture corresponding to each lip expression text;
S226: obtaining the overlap area between the scaled lip contour in each picture to be analyzed belonging to each lip action video segment for each answer and the lip contour in each lip action sub-picture corresponding to each lip expression text, where A denotes the number of each lip action video segment, A = 1, 2, ..., B, M denotes the number of each picture to be analyzed, M = 1, 2, ..., L, U denotes the number of each lip expression text, U = 1, 2, ..., R, and J denotes the number of each lip action sub-picture, J = 1, 2, ..., K;
obtaining the area of the lip contour in each lip action sub-picture corresponding to each lip expression text after the equal-proportion processing, marked as MJ_{UJ};
thereby analyzing the matching coefficient of each lip action video segment against each lip expression text for each answer, namely the matching coefficient of the A-th lip action video segment and the U-th lip expression text when the deaf-mute student answers a question for the i-th time, with L and K denoting the number of pictures to be analyzed and the number of lip action sub-pictures, respectively (one possible overlap-based form is sketched after this claim);
S227: based on the matching coefficients of each lip action video segment against the lip expression texts for each answer, selecting for each segment the lip expression text with the largest matching coefficient, and taking it as the lip expression text of that segment;
S228: splicing the lip expression texts of the lip action video segments, thereby obtaining the lip expression text corresponding to each time the deaf-mute student answers a question.
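The lip matching-coefficient formula of step S226 is likewise only an image in the source. The sketch below assumes the per-frame score is the overlap area of the two equally-scaled, filled lip contours divided by the sub-picture contour area MJ_UJ, and that the segment-level coefficient averages the best per-frame scores; the NumPy mask representation and every name here are assumptions.

```python
import numpy as np

def overlap_score(frame_mask: np.ndarray, template_mask: np.ndarray) -> float:
    # S224-S226 (assumed form): after equal-proportion scaling, divide the
    # overlap area of the two filled lip contours by the sub-picture contour
    # area MJ_UJ.
    template_area = int(template_mask.sum())
    if template_area == 0:
        return 0.0
    return int(np.logical_and(frame_mask, template_mask).sum()) / template_area

def lip_match_coefficient(frame_masks: list[np.ndarray],
                          template_masks: list[np.ndarray]) -> float:
    # Assumed matching coefficient of one lip action video segment against one
    # lip expression text: mean of the best per-frame overlap scores.
    best = [max(overlap_score(f, t) for t in template_masks) for f in frame_masks]
    return sum(best) / len(best)

# Toy 2x2 boolean masks standing in for filled, equally-scaled lip contours.
frame = np.array([[True, True], [False, False]])
template = np.array([[True, False], [True, False]])
print(round(overlap_score(frame, template), 2))  # 0.5
```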
4. The visual education teaching analysis method according to claim 1, wherein the matching degree of the sign language expression text and the lip expression text for each time the deaf-mute student answers a question is analyzed as follows:
S311: constructing, from the sign language expression text and the lip expression text of each answer, the sign language expression text keyword set and the lip expression text keyword set corresponding to that answer, marked as E_i and F_i respectively;
S312: matching and comparing the sign language expression text keyword set and the lip expression text keyword set corresponding to each answer, and analyzing from this comparison the matching degree of the sign language expression text and the lip expression text when the deaf-mute student answers a question for the i-th time (one possible keyword-set form is sketched below).
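The matching-degree formula of S312 is not reproduced in the text; a natural set-overlap reading of the keyword sets E_i and F_i is the Jaccard ratio sketched below, which is an assumption rather than the patent's exact formula.

```python
def expression_match_degree(e_i: set[str], f_i: set[str]) -> float:
    # S312 (assumed Jaccard form): matching degree of the sign language
    # expression text keyword set E_i and the lip expression text keyword
    # set F_i for the i-th answer.
    if not e_i and not f_i:
        return 1.0
    return len(e_i & f_i) / len(e_i | f_i)

print(expression_match_degree({"triangle", "interior", "angles", "180"},
                              {"triangle", "interior", "angles", "sum", "180"}))  # 0.8
```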
5. The visual education teaching analysis method according to claim 1, wherein the standard expression language corresponding to each time the deaf-mute student answers a question is analyzed by the following steps:
S321: comparing the matching degree of the sign language expression text and the lip expression text for each answer with a preset matching degree threshold; if the matching degree is greater than or equal to the threshold, taking the sign language expression text of that answer as the standard expression language corresponding to that answer;
S322: if the matching degree of the sign language expression text and the lip expression text for a certain answer is smaller than the threshold, performing the following analysis:
obtaining the matching coefficients of each hand action video segment against each sign language expression text and of each lip action video segment against each lip expression text for that answer, analyzing from them the average matching coefficient on the sign language side and the average matching coefficient on the lip side, and thereby obtaining the standard expression language corresponding to that answer (one possible selection rule is sketched after this claim);
S323: thereby obtaining the standard expression language corresponding to each time the deaf-mute student answers a question.
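A minimal sketch of the S321/S322 decision, assuming that when the text matching degree falls below the threshold the modality with the larger average segment matching coefficient supplies the standard expression language; that tie-break rule is an assumption, as are the function and parameter names.

```python
def standard_expression_language(sign_text: str, lip_text: str,
                                 match_degree: float, threshold: float,
                                 sign_coeffs: list[float],
                                 lip_coeffs: list[float]) -> str:
    # S321: when the two modalities agree well enough, keep the sign language text.
    if match_degree >= threshold:
        return sign_text
    # S322 (assumed tie-break): otherwise keep the modality whose segment
    # matching coefficients are larger on average.
    sign_avg = sum(sign_coeffs) / len(sign_coeffs)
    lip_avg = sum(lip_coeffs) / len(lip_coeffs)
    return sign_text if sign_avg >= lip_avg else lip_text
```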
6. The visual education teaching analysis method according to claim 1, wherein the quality coefficient corresponding to each time the deaf-mute student answers a question is analyzed as follows: constructing, from the standard expression language corresponding to each answer and the standard answer stored in the cloud database, the standard expression language keyword set and the standard answer keyword set corresponding to that answer, marked as QA_i and DA_i respectively, and further analyzing the quality coefficient ZL_i corresponding to the deaf-mute student's i-th answer (one possible keyword-coverage form is sketched below).
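The quality-coefficient formula for ZL_i is again only an image in the source. A coverage-style reading, i.e. the share of the standard answer keyword set DA_i that also appears in the standard expression keyword set QA_i, is sketched below as an assumption.

```python
def quality_coefficient(qa_i: set[str], da_i: set[str]) -> float:
    # ZL_i (assumed coverage form): share of the standard answer keyword set
    # DA_i that also appears in the standard expression keyword set QA_i.
    if not da_i:
        return 0.0
    return len(qa_i & da_i) / len(da_i)

print(quality_coefficient({"photosynthesis", "light", "chlorophyll"},
                          {"photosynthesis", "light", "chlorophyll", "oxygen"}))  # 0.75
```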
7. A visual education teaching analysis system, characterized by comprising:
a deaf-mute student hand action and lip action acquisition module, which acquires the hand action video and lip action video of the deaf-mute student through a mobile device each time a question is answered;
a deaf-mute student expression text analysis module, which analyzes the sign language expression text and the lip expression text corresponding to each answer based on the collected hand action video and lip action video;
a deaf-mute student expression text matching degree analysis module, which analyzes the matching degree of the sign language expression text and the lip expression text for each answer and further analyzes the standard expression language corresponding to each answer;
a deaf-mute student answer quality analysis module, which evaluates the quality coefficient corresponding to each answer according to the standard expression language and the standard answer corresponding to that answer; and
a cloud database, which stores the hand action videos corresponding to the sign language expression texts, the lip action videos corresponding to the lip expression texts, and the standard answer for each question.
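A minimal sketch of how the five components of claim 7 could be wired together in code; every class, method and placeholder body here is illustrative and stands in for the analyses of claims 2 to 6 rather than implementing them.

```python
from dataclasses import dataclass, field

@dataclass
class CloudDatabase:
    # Reference material named in the claim: hand action videos per sign
    # language text, lip action videos per lip expression text, and the
    # standard answer stored for every question.
    sign_templates: dict = field(default_factory=dict)
    lip_templates: dict = field(default_factory=dict)
    standard_answers: dict = field(default_factory=dict)

class VisualTeachingAnalysisSystem:
    def __init__(self, db: CloudDatabase):
        self.db = db

    # Expression text analysis module (claims 2 and 3) -- placeholder bodies.
    def analyse_sign_text(self, hand_videos) -> str:
        return "recognised sign language expression text"

    def analyse_lip_text(self, lip_videos) -> str:
        return "recognised lip expression text"

    # Matching degree analysis module (claims 4 and 5) -- placeholder body.
    def standard_expression(self, sign_text: str, lip_text: str) -> str:
        return sign_text

    # Quality analysis module (claim 6) -- placeholder; see the coverage
    # sketch after claim 6 for one possible scoring form.
    def quality_coefficient(self, standard: str, answer: str) -> float:
        return 0.0

    # The acquisition module feeds the hand and lip videos in here per answer.
    def analyse_answer(self, question_no: int, hand_videos, lip_videos) -> float:
        sign_text = self.analyse_sign_text(hand_videos)
        lip_text = self.analyse_lip_text(lip_videos)
        standard = self.standard_expression(sign_text, lip_text)
        answer = self.db.standard_answers.get(question_no, "")
        return self.quality_coefficient(standard, answer)
```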
8. A visual education teaching analysis storage medium, characterized in that: the storage medium has a computer program recorded thereon, and the computer program, when run in the memory of a server, implements the visual education teaching analysis method according to any one of claims 1-6.
CN202211339865.1A 2022-10-29 2022-10-29 Visual education teaching analysis method, system and storage medium Active CN116805272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211339865.1A CN116805272B (en) 2022-10-29 2022-10-29 Visual education teaching analysis method, system and storage medium

Publications (2)

Publication Number Publication Date
CN116805272A true CN116805272A (en) 2023-09-26
CN116805272B (en) 2024-07-12

Family

ID=88078561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211339865.1A Active CN116805272B (en) 2022-10-29 2022-10-29 Visual education teaching analysis method, system and storage medium

Country Status (1)

Country Link
CN (1) CN116805272B (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011191418A (en) * 2010-03-12 2011-09-29 Nippon Telegr & Teleph Corp <Ntt> System, method and program for generating community-based sign language
CN105868282A (en) * 2016-03-23 2016-08-17 乐视致新电子科技(天津)有限公司 Method and apparatus used by deaf-mute to perform information communication, and intelligent terminal
WO2017161741A1 (en) * 2016-03-23 2017-09-28 乐视控股(北京)有限公司 Method and device for communicating information with deaf-mutes, smart terminal
CN108629241A (en) * 2017-03-23 2018-10-09 华为技术有限公司 A kind of data processing method and data processing equipment
CN107832976A (en) * 2017-12-01 2018-03-23 合肥亚慕信息科技有限公司 A kind of Classroom Teaching analysis system based on perception analysis
CN109637521A (en) * 2018-10-29 2019-04-16 深圳壹账通智能科技有限公司 A kind of lip reading recognition methods and device based on deep learning
WO2020119496A1 (en) * 2018-12-14 2020-06-18 深圳壹账通智能科技有限公司 Communication method, device and equipment based on artificial intelligence and readable storage medium
CN110931042A (en) * 2019-11-14 2020-03-27 北京欧珀通信有限公司 Simultaneous interpretation method and device, electronic equipment and storage medium
CN111062277A (en) * 2019-12-03 2020-04-24 东华大学 Sign language-lip language conversion method based on monocular vision
CN111144125A (en) * 2019-12-04 2020-05-12 深圳追一科技有限公司 Text information processing method and device, terminal equipment and storage medium
CN111857334A (en) * 2020-07-02 2020-10-30 上海交通大学 Human body gesture letter recognition method and device, computer equipment and storage medium
CN112084846A (en) * 2020-07-30 2020-12-15 崔恒鑫 Barrier-free sign language communication system
CN112632257A (en) * 2020-12-29 2021-04-09 深圳赛安特技术服务有限公司 Question processing method and device based on semantic matching, terminal and storage medium
US20220309936A1 (en) * 2021-03-26 2022-09-29 Transverse Inc. Video education content providing method and apparatus based on artificial intelligence natural language processing using characters
WO2022203123A1 (en) * 2021-03-26 2022-09-29 주식회사 트랜스버스 Video education content providing method and device on basis of artificially intelligent natural language processing using character
US20220327309A1 (en) * 2021-04-09 2022-10-13 Sorenson Ip Holdings, Llc METHODS, SYSTEMS, and MACHINE-READABLE MEDIA FOR TRANSLATING SIGN LANGUAGE CONTENT INTO WORD CONTENT and VICE VERSA
CN113807287A (en) * 2021-09-24 2021-12-17 福建平潭瑞谦智能科技有限公司 3D structured light face recognition method
CN114067433A (en) * 2021-11-10 2022-02-18 周超 Language and image understanding system based on multiple protocols
CN114266511A (en) * 2022-02-28 2022-04-01 北京未来基因教育科技有限公司 Course evaluation automatic generation method and system, electronic equipment and storage medium
CN115239855A (en) * 2022-06-23 2022-10-25 安徽福斯特信息技术有限公司 Virtual sign language anchor generation method, device and system based on mobile terminal

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
KATA CSIZÉR, EDIT H. KONTRA: "Foreign Language Learning Characteristics of Deaf and Severely Hard-of-Hearing Students", THE MODERN LANGUAGE JOURNAL, 11 February 2020 (2020-02-11), pages 233-249 *
肖庆阳; 张金; 左闯; 范娟婷; 梁碧玮; 邸硕临: "Lip shape sequence recognition method based on semantic constraints" (基于语义约束的口型序列识别方法), Computer Applications and Software (计算机应用与软件), no. 09, 15 September 2012 (2012-09-15), pages 232-235 *
董雪燕; 徐娟; 贾京鹏: "Research on the application of speech recognition technology in classroom teaching for deaf college students" (语音识别技术在聋人大学生课堂教学中的应用研究), Journal of Beijing Union University (北京联合大学学报), no. 03, 20 July 2020 (2020-07-20), pages 69-75 *
雷青云: "Design and research of an intelligent lip and sign language device" (智能化唇、手语装置的设计与研究), Information Recording Materials (信息记录材料), 1 January 2020 (2020-01-01), pages 74-75 *
高文, 陈熙霖, 马继勇, 王兆其: "A communication system between deaf people and hearing people based on multimodal interface technology" (基于多模式接口技术的聋人与正常人交流系统), Chinese Journal of Computers (计算机学报), no. 12, 12 December 2000 (2000-12-12), pages 23-27 *

Also Published As

Publication number Publication date
CN116805272B (en) 2024-07-12

Similar Documents

Publication Publication Date Title
CN110992741B (en) Learning auxiliary method and system based on classroom emotion and behavior analysis
CN110991381B (en) Real-time classroom student status analysis and indication reminding system and method based on behavior and voice intelligent recognition
CN108648757A (en) A kind of analysis method based on various dimensions Classroom Information
US8682241B2 (en) Method and system for improving the quality of teaching through analysis using a virtual teaching device
CN110334610A (en) A kind of various dimensions classroom based on computer vision quantization system and method
CN107240047B (en) Score evaluation method and device for teaching video
WO2019028592A1 (en) Teaching assistance method and teaching assistance system using said method
Thomas The construction of teacher identities in educational policy documents: A critical discourse analysis
CN106982357A (en) A kind of intelligent camera system based on distribution clouds
CN116109455B (en) Language teaching auxiliary system based on artificial intelligence
CN111611854B (en) Classroom condition evaluation method based on pattern recognition
CN111369408A (en) Hospital home intern teaching management system and method
Ockert The influence of technology in the classroom: An analysis of an iPad and video intervention on JHS students' confidence, anxiety, and FL WTC.
CN116050892A (en) Intelligent education evaluation supervision method based on artificial intelligence
CN112164259A (en) Classroom teacher-student interactive teaching system and method
CN116805272B (en) Visual education teaching analysis method, system and storage medium
CN117078094A (en) Teacher comprehensive ability assessment method based on artificial intelligence
CN111563697A (en) Online classroom student emotion analysis method and system
CN113688789B (en) Online learning input degree identification method and system based on deep learning
CN112750057A (en) Student learning behavior database establishing, analyzing and processing method based on big data and cloud computing and cloud data platform
CN113076835A (en) Regression analysis-based teaching evaluation method and system
Aprilia et al. The Implementation of Direct Reading Thinking Activity to Improve Reading Comprehension
TWM600908U (en) Learning state improvement management system
CN111914683A (en) Handwriting score input system based on bionic image enhancement algorithm and FPGA hardware acceleration
CN114898449B (en) Foreign language teaching auxiliary method and device based on big data

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20240620

Address after: Building 2, No.12 Xidawang Road, Chaoyang District, Beijing, 100000 (National Advertising Industry Park Incubator 21531)

Applicant after: Beijing East China Normal University Educational Technology Research Institute

Country or region after: China

Address before: No. 6, Jing'an Road, Wuchang District, Wuhan City, Hubei Province, 430061

Applicant before: Wuhan Xingjixue Education Consulting Co.,Ltd.

Country or region before: China

GR01 Patent grant