CN103425987A - Intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction - Google Patents

Intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction

Info

Publication number
CN103425987A
CN103425987A
Authority
CN
China
Prior art keywords
lip
DT_CWT
image
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310396167XA
Other languages
Chinese (zh)
Other versions
CN103425987B (en)
Inventor
张毅
罗元
刘想德
徐晓东
林海波
崔叶
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201310396167.XA priority Critical patent/CN103425987B/en
Publication of CN103425987A publication Critical patent/CN103425987A/en
Application granted granted Critical
Publication of CN103425987B publication Critical patent/CN103425987B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction, relating to the field of feature extraction and recognition control in lip-reading technology. The method first applies DT_CWT filtering to the lip region, then applies a DCT to the lip feature vector extracted by the DT_CWT, so that the extracted lip features are concentrated in the larger DCT coefficients; the resulting feature vector thus carries the greatest amount of lip information while dimensionality reduction is achieved at the same time. Because the DT_CWT is approximately shift-invariant, the difference between the feature values of the same lip shape at different positions in the ROI is small after DT_CWT filtering, eliminating recognition errors caused by positional offset of the lip within the ROI. The method greatly improves the lip recognition rate and the robustness of the lip recognition system.

Description

Intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction
Technical field
The present invention relates to the field of lip visual information analysis and recognition control, and in particular to a feature extraction method in a lip-reading processing system.
Background technology
World population aging is accelerating, and the number of people with physical disabilities caused by disease, accidents and other factors rises year by year. These factors leave the elderly and the disabled with impairments of varying degrees; lower-limb motor dysfunction in particular brings enormous inconvenience and prevents them from living normally. For this reason, barrier-free (assistive) technology has entered the public eye and attracted wide attention.
Barrier-free technology uses advanced science and technology to provide effective auxiliary means for the elderly and the disabled, enabling them to reintegrate into society. Man-machine interaction is one of its important research topics. Man-machine interaction technologies can be divided into two classes by control mode. The first operates hardware directly, such as a mouse, keyboard or joystick; this mode is easy to operate, but it is unsuitable for users whose upper limbs are impaired or missing. The second adopts pattern recognition technology and uses the body's own organs, such as the hand, wrist, head or brain, to complete the interaction; concretely, electronic devices are controlled through speech recognition, gesture recognition, head movement, wrist movement, electromyographic (EMG) signals, electroencephalographic (EEG) signals, and so on. This mode is contactless, the interaction process is more intuitive, and the range of applicability is wider. The technology therefore has considerable research value and significance.
In daily life, most interpersonal communication is carried out face to face by speaking; likewise, in visual man-machine interaction, a camera can collect lip-movement information to realize harmonious, friendly interaction. Controlling an intelligent wheelchair with lip visual information is a current research focus. For deaf-mute disabled users and elderly users with slurred speech, it offers a way to realize normal "speaking" with a robot. With control by lip movement, the user's body can remain completely still, and for severely disabled users this control mode is essential.
As a mobility aid, the intelligent wheelchair mainly serves the elderly and the disabled. It integrates multiple technologies, such as autonomous navigation, obstacle avoidance and man-machine interaction. A traditional intelligent wheelchair is controlled by a manual joystick, which is unsuitable for users with upper-limb impairment, so its applicable user range is limited. With the rapid development of science and technology, new control techniques based on pattern recognition, such as gestures, head movement, EMG signals and EEG-based BCI technology, have been widely applied to intelligent wheelchairs. To give more disabled and elderly users an interaction mode of normal "speaking" with the robot, and given that the face is flexible and lip shapes change quickly and variably, the application prospect of lip-based man-machine interaction in intelligent wheelchairs is very broad. Applying lip-reading recognition to an intelligent wheelchair not only preserves the functions of a traditional wheelchair but also allows motion control of the wheelchair by switching among different lip shapes. Research on lip-based intelligent wheelchair systems therefore has important application value and practical significance.
Summary of the invention
In view of this, the technical problem to be solved by the present invention is to provide, for the lip feature extraction step in the lip-reading recognition field, a method that extracts lip features by mixing the dual-tree complex wavelet transform (Dual-Tree Complex Wavelet Transform, DT_CWT) with the discrete cosine transform (Discrete Cosine Transform, DCT).
The object of the present invention is achieved like this:
The intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction provided by the invention comprises the following steps:
S1: collect an image containing a human face;
S2: preprocess the image and extract the lip image;
S3: extract the lip feature vector from the lip image;
S4: obtain the lip recognition result from the lip feature vector;
S5: generate a control command from the lip recognition result to drive the intelligent wheelchair.
Further, extracting the lip feature vector in step S3 specifically comprises the following steps:
S31: apply DT_CWT filtering to the lip image and extract a lip feature vector with the DT_CWT algorithm;
S32: apply a DCT to that lip feature vector to form the final lip feature vector and perform feature classification;
S33: convert the classification result into the lip recognition result.
Further, the control command is sent to the intelligent wheelchair by wireless transmission.
Further, in step S31, applying DT_CWT filtering to the lip image and extracting the lip feature vector with the DT_CWT algorithm comprises the following steps:
S311: set the lip image as the ROI image and normalize the ROI image;
S312: divide the normalized ROI image into several subimages;
S313: apply multiscale two-dimensional DT_CWT filtering to each subimage, forming the high-frequency coefficient matrices at each scale;
S314: compute the amplitudes of the complex coefficients of the high-frequency coefficient matrices at all scales to form real coefficient matrices;
S315: arrange the real coefficient matrices in sequence along the column direction to form the feature vector X as follows:
X = [V_{1,+15°}^T, V_{1,+45°}^T, V_{1,+75°}^T, V_{1,−75°}^T, V_{1,−45°}^T, V_{1,−15°}^T, …, V_{l,−15°}^T]^T
where the superscript T denotes matrix transposition, V_{l,θ} denotes the column vector formed by arranging the real matrix at each scale in sequence along the column direction, l denotes the decomposition level of the DT_CWT, and θ denotes the direction parameter of the DT_CWT.
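As an illustration, the amplitude computation of step S314 and the stacking of step S315 can be sketched in Python as follows. This is a minimal sketch, not the patented implementation: the random complex matrices stand in for the DT_CWT high-frequency subbands of a 48 × 48 ROI (computing real subbands requires a dual-tree filter bank), and the `subbands` layout is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the DT_CWT high-frequency subbands of a 48x48 ROI:
# 4 levels x 6 directions; the level-l subband has size (48/2**l) x (48/2**l).
directions = (+15, +45, +75, -75, -45, -15)
subbands = {
    (l, theta): rng.standard_normal((48 // 2**l,) * 2)
                + 1j * rng.standard_normal((48 // 2**l,) * 2)
    for l in range(1, 5)
    for theta in directions
}

# S314: amplitude of the complex coefficients -> real matrices.
# S315: flatten each real matrix column-wise (order="F") and stack into X.
X = np.concatenate([
    np.abs(subbands[(l, theta)]).flatten(order="F")
    for l in range(1, 5)
    for theta in directions
])

print(X.shape)  # (4590,) = 6 * (24*24 + 12*12 + 6*6 + 3*3)
```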
Further, in step S32, applying the DCT to the lip feature vector to form the final lip feature vector and perform feature classification comprises the following steps:
S321: apply the following dimensionality-reduction calculation to the lip feature vector:
Y = AX,
where X denotes the N-dimensional feature vector, Y denotes the low-dimensional feature of dimension M, and A denotes a linear transformation matrix;
S322: select the DCT characteristic coefficients that satisfy a preset condition, the DCT characteristic coefficients being computed by the following formula:
x(u, v) = a(u)·a(v)·Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y)·cos[(2x+1)uπ/(2M)]·cos[(2y+1)vπ/(2N)]
where x(u, v) is a DCT characteristic coefficient; u = 0, 1, 2, …, M−1; v = 0, 1, 2, …, N−1; f(x, y) denotes an image of size M × N; and a(u), a(v) are defined respectively as:
a(u) = √(1/M) for u = 0, and a(u) = √(2/M) for u = 1, 2, …, M−1;
a(v) = √(1/N) for v = 0, and a(v) = √(2/N) for v = 1, 2, …, N−1;
S323: construct the lip feature vector with the Zig-Zag method in the following manner:
y = [x_0^1, x_1^1, …, x_{K−1}^1, x_0^2, x_1^2, …, x_{K−1}^2, …, x_0^9, x_1^9, …, x_{K−1}^9]^T,
where K denotes the number of characteristic coefficients selected by Zig-Zag in each subimage, and x_n^m denotes the n-th characteristic coefficient of the m-th subimage.
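The two-dimensional DCT defined in step S322 is the orthonormal DCT-II, so it can be transcribed directly from the formula; `scipy.fft.dctn(f, norm="ortho")` computes the same transform far faster. The following is an illustrative, deliberately literal transcription of the formula, not an efficient implementation:

```python
import numpy as np

def dct2(f):
    """Two-dimensional DCT per step S322 (orthonormal DCT-II)."""
    M, N = f.shape
    a_u = np.where(np.arange(M) == 0, np.sqrt(1.0 / M), np.sqrt(2.0 / M))
    a_v = np.where(np.arange(N) == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))
    x, y = np.arange(M), np.arange(N)
    out = np.empty((M, N))
    for u in range(M):
        for v in range(N):
            cos_u = np.cos((2 * x + 1) * u * np.pi / (2 * M))
            cos_v = np.cos((2 * y + 1) * v * np.pi / (2 * N))
            out[u, v] = a_u[u] * a_v[v] * (f * np.outer(cos_u, cos_v)).sum()
    return out

f = np.arange(24.0).reshape(4, 6)
F = dct2(f)
# Orthonormality checks: energy is preserved (Parseval), and the (0, 0)
# coefficient equals sqrt(M*N) times the image mean.
assert np.isclose((F ** 2).sum(), (f ** 2).sum())
assert np.isclose(F[0, 0], np.sqrt(4 * 6) * f.mean())
```

Because the transform is orthonormal, keeping the largest coefficients (as the Zig-Zag selection in S323 does) loses the least energy among truncations of the same size, which is why the leading coefficients concentrate the lip information.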
Further, in step S1 a camera is adopted to collect the image containing the human face.
Further, the image preprocessing, lip feature vector extraction and lip recognition are performed by a notebook computer or single-chip microcomputer serving as the host computer.
Further, the driven intelligent wheelchair serves as the slave machine controlled by the host computer.
The advantages of the invention are as follows. The invention recognizes lip shapes with a lip feature extraction method that mixes DT_CWT and DCT. First, DT_CWT filtering is applied to the lip; because the DT_CWT is approximately shift-invariant, the difference between the feature values of the same lip shape at different positions in the ROI is small after filtering, overcoming recognition errors caused by positional offset of the lip within the ROI. The lip feature vector extracted by the DT_CWT is then transformed by the DCT, so that the extracted lip features are concentrated in the larger DCT coefficients; the feature vector thus carries the greatest amount of lip information while dimensionality reduction is achieved at the same time. The method greatly improves the lip recognition rate and the robustness of the lip recognition system.
Description of the drawings
To make the purpose, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings, in which:
Fig. 1 is the framework of the lip-based intelligent wheelchair control system;
Fig. 2 is the block diagram of DCT lip feature extraction;
Fig. 3 is the DT_CWT decomposition diagram.
Embodiment
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be appreciated that the preferred embodiments are only intended to illustrate the present invention, not to limit its scope.
Embodiment 1
The double-mixture lip feature extraction method in the present embodiment is a method that mixes DT_CWT and DCT. For the lip feature extraction step in the lip-reading recognition field, the present invention proposes a method of extracting lip features by mixing the dual-tree complex wavelet transform (Dual-Tree Complex Wavelet Transform, DT_CWT) with the discrete cosine transform (Discrete Cosine Transform, DCT).
Because DT_CWT filtering is approximately shift-invariant, the difference between the feature values of the same lip shape at different positions in the ROI is small after filtering, overcoming recognition errors caused by positional offset of the lip within the ROI. The lip feature vector extracted by the DT_CWT is then transformed by the DCT, so that the extracted lip features are concentrated in the larger DCT coefficients; the feature vector thus carries the greatest amount of lip information while dimensionality reduction is achieved at the same time.
Fig. 1 is the framework of the lip-based intelligent wheelchair control system, Fig. 2 is the block diagram of DCT lip feature extraction, and Fig. 3 is the DT_CWT decomposition diagram. In the figure, the DT_CWT is realized by two banks of orthogonal, perfectly reconstructing filters (tree a and tree b) that form Hilbert transform pairs of each other. One bank (tree a) generates the real part of the transform and the other (tree b) generates the imaginary part; the output of the DT_CWT is composed of the outputs of tree a and tree b, where 0000a, 0001a, 0000b and 0001b denote the fourth-level DT_CWT wavelet coefficients.
As shown in the figures, the intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction provided by the invention comprises the following steps:
S1: collect an image containing a human face;
S2: preprocess the image and extract the lip image;
S3: extract the lip feature vector from the lip image;
Extracting the lip feature vector in step S3 specifically comprises the following steps:
S31: apply DT_CWT filtering to the lip image and extract a lip feature vector with the DT_CWT algorithm;
S32: apply a DCT to that lip feature vector to form the final lip feature vector and perform feature classification;
S33: convert the classification result into the lip recognition result.
In step S31, applying DT_CWT filtering to the lip image and extracting the lip feature vector with the DT_CWT algorithm comprises the following steps:
S311: set the lip image as the ROI image and normalize the ROI image;
S312: divide the normalized ROI image into subimages of size n × n;
S313: apply multiscale two-dimensional DT_CWT filtering to each subimage, forming the high-frequency coefficient matrices at each scale;
S314: compute the amplitudes of the complex coefficients of the high-frequency coefficient matrices at all scales to form real coefficient matrices;
S315: arrange the real coefficient matrices in sequence along the column direction to form the feature vector X as follows:
X = [V_{1,+15°}^T, V_{1,+45°}^T, V_{1,+75°}^T, V_{1,−75°}^T, V_{1,−45°}^T, V_{1,−15°}^T, …, V_{l,−15°}^T]^T
where the superscript T denotes matrix transposition, V_{l,θ} denotes the column vector formed by arranging the real matrix at each scale in sequence along the column direction, l denotes the decomposition level of the DT_CWT, and θ denotes the direction parameter of the DT_CWT.
In step S32, applying the DCT to the lip feature vector to form the final lip feature vector and perform feature classification comprises the following steps:
S321: apply the following dimensionality-reduction calculation to the lip feature vector:
Y = AX,
where X denotes the N-dimensional feature vector, Y denotes the low-dimensional feature of dimension M, and A denotes a linear transformation matrix;
S322: select the DCT characteristic coefficients that satisfy a preset condition, using the following formula.
For an image f(x, y) of size M × N, where x = 0, 1, 2, …, M−1 and y = 0, 1, 2, …, N−1, its two-dimensional DCT is defined as:
x(u, v) = a(u)·a(v)·Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y)·cos[(2x+1)uπ/(2M)]·cos[(2y+1)vπ/(2N)]
where u = 0, 1, 2, …, M−1; v = 0, 1, 2, …, N−1; and
a(u) = √(1/M) for u = 0, and a(u) = √(2/M) for u = 1, 2, …, M−1;
a(v) = √(1/N) for v = 0, and a(v) = √(2/N) for v = 1, 2, …, N−1;
where x(u, v) is called a DCT coefficient.
S323: construct the lip feature vector with the Zig-Zag method in the following manner:
y = [x_0^1, x_1^1, …, x_{K−1}^1, x_0^2, x_1^2, …, x_{K−1}^2, …, x_0^9, x_1^9, …, x_{K−1}^9]^T,
where K denotes the number of characteristic coefficients selected by Zig-Zag in each subimage, and x_n^m denotes the n-th characteristic coefficient of the m-th subimage.
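The Zig-Zag selection of step S323 can be sketched as follows. The JPEG-style scan order below is an illustrative assumption, since the patent does not spell out the exact traversal; only the idea of taking the first K coefficients along the zig-zag path is from the text.

```python
import numpy as np

def zigzag_indices(n):
    """(row, col) indices of an n x n block in JPEG-style Zig-Zag order."""
    # Walk anti-diagonals i+j = 0, 1, ...; odd diagonals go top-right to
    # bottom-left (sort by row), even ones the other way (sort by column).
    return sorted(
        ((i, j) for i in range(n) for j in range(n)),
        key=lambda ij: (ij[0] + ij[1],
                        ij[0] if (ij[0] + ij[1]) % 2 else ij[1]),
    )

def zigzag_select(coeffs, K):
    """First K coefficients of a square block in Zig-Zag order."""
    n = coeffs.shape[0]
    return np.array([coeffs[i, j] for i, j in zigzag_indices(n)[:K]])

block = np.arange(16.0).reshape(4, 4)  # entry (i, j) holds 4*i + j
print(zigzag_select(block, 6))  # [0. 1. 4. 8. 5. 2.]
```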
S4: obtain the lip recognition result from the lip feature vector;
S5: generate a control command from the lip recognition result to drive the intelligent wheelchair.
The control command is sent to the intelligent wheelchair by wireless transmission.
In step S1, a camera is adopted to collect the image containing the human face.
The image preprocessing, lip feature vector extraction and lip recognition are performed by a notebook computer or single-chip microcomputer serving as the host computer.
The driven intelligent wheelchair serves as the slave machine controlled by the host computer.
Embodiment 2
The present embodiment differs from Embodiment 1 only in the following.
The present embodiment first applies DT_CWT filtering to the lip. Because the DT_CWT is approximately shift-invariant, the difference between the feature values of the same lip shape at different positions in the ROI is small after filtering, overcoming recognition errors caused by positional offset of the lip within the ROI. The lip feature vector extracted by the DT_CWT is then transformed by the DCT, so that the extracted lip features are concentrated in the larger DCT coefficients; the feature vector thus carries the greatest amount of lip information while dimensionality reduction is achieved at the same time.
According to DT_CWT theory, each decomposition level of the transform produces high-frequency subband matrices in 6 directions (θ ∈ {+15°, +45°, +75°, −75°, −45°, −15°}) and one low-frequency subband matrix. The low-frequency subband matrix is the input to the next level of decomposition, while the high-frequency subband matrices contain the texture-feature coefficients corresponding to the 6 directions. Studies have shown that high-frequency coefficients are more important than low-frequency coefficients for target recognition; the low-frequency coefficients characterize the illumination of the picture and can disturb recognition, so usually only the high-frequency coefficients of the 6 directions at each level are selected when constructing the feature vector.
For a lip image of size M × N decomposed over L levels, 6 × L high-frequency coefficient matrices are obtained. The 6 high-frequency matrices of the first level each have dimension M/2 × N/2, half the original image dimension; by analogy, each high-frequency matrix of the next level is again half of the previous level.
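The per-level halving can be checked with a few lines, assuming the 48 × 48 normalized ROI and 4 decomposition levels used in this embodiment:

```python
# Subband sizes halve at each DT_CWT decomposition level.
M = N = 48
sizes = [(M >> l, N >> l) for l in range(1, 5)]
print(sizes)  # [(24, 24), (12, 12), (6, 6), (3, 3)]

# 6 directional high-frequency subbands per level give the total
# feature dimension before the DCT-based reduction.
dim = 6 * sum(h * w for h, w in sizes)
print(dim)  # 4590
```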
In the present embodiment, the lip region of interest is first normalized to 48 × 48 and then filtered with a 4-level two-dimensional DT_CWT. Filtering the lip image produces 4 scales with high-frequency coefficient matrices in 6 directions at each scale, so the image features comprise 24 high-frequency coefficient matrices in total. The sizes of the high-frequency coefficient matrices of the first, second, third and fourth levels are 24 × 24, 12 × 12, 6 × 6 and 3 × 3 respectively. Since the coefficients produced by the DT_CWT are complex, the amplitude of the complex coefficients of each coefficient matrix is computed, turning each complex matrix into a real matrix. Each real matrix is then arranged in sequence along the column direction into a column vector, denoted V_{l,θ}, where l and θ denote the decomposition level and direction parameter of the DT_CWT respectively, with l ∈ {1, …, 4} and θ ∈ {+15°, +45°, +75°, −75°, −45°, −15°}. The feature vector X of the lip image after the 4-level DT_CWT is composed of the 24 column vectors corresponding to the magnitude matrices and can be expressed as formula (6):
X = [V_{1,+15°}^T, V_{1,+45°}^T, …, V_{4,−15°}^T]^T    (6)
where the superscript T denotes matrix transposition. From formula (6), the dimension of the feature vector X is the total number of coefficients produced in the 6 directions of each of the 4 DT_CWT decomposition levels:
dim(X) = 6 × (24 × 24 + 12 × 12 + 6 × 6 + 3 × 3) = 4590.
Such a large dimensionality would place a heavy burden on computation and recognition speed. Therefore, after DT_CWT filtering, formula (1) is used to apply the DCT to the feature matrix X, extracting the DCT coefficients of X that carry the maximum amount of lip information; the Zig-Zag method then selects the first 81 larger DCT coefficients to construct the final feature vector y. In this way y retains the maximum lip information, the loss of image signal is minimized, and dimensionality reduction is achieved at the same time, improving the lip recognition rate.
The foregoing describes only the preferred embodiments of the present invention and is not intended to limit it. Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass them.

Claims (8)

1. An intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction, characterized by comprising the following steps:
S1: collect an image containing a human face;
S2: preprocess the image and extract the lip image;
S3: extract the lip feature vector from the lip image;
S4: obtain the lip recognition result from the lip feature vector;
S5: generate a control command from the lip recognition result to drive the intelligent wheelchair.
2. The intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction according to claim 1, characterized in that extracting the lip feature vector in step S3 specifically comprises the following steps:
S31: apply DT_CWT filtering to the lip image and extract a lip feature vector with the DT_CWT algorithm;
S32: apply a DCT to that lip feature vector to form the final lip feature vector and perform feature classification;
S33: convert the classification result into the lip recognition result.
3. The intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction according to claim 1, characterized in that the control command is sent to the intelligent wheelchair by wireless transmission.
4. The intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction according to claim 2, characterized in that in step S31 applying DT_CWT filtering to the lip image and extracting the lip feature vector with the DT_CWT algorithm comprises the following steps:
S311: set the lip image as the ROI image and normalize the ROI image;
S312: divide the normalized ROI image into several subimages;
S313: apply multiscale two-dimensional DT_CWT filtering to each subimage, forming the high-frequency coefficient matrices at each scale;
S314: compute the amplitudes of the complex coefficients of the high-frequency coefficient matrices at all scales to form real coefficient matrices;
S315: arrange the real coefficient matrices in sequence along the column direction to form the feature vector X as follows:
X = [V_{1,+15°}^T, V_{1,+45°}^T, V_{1,+75°}^T, V_{1,−75°}^T, V_{1,−45°}^T, V_{1,−15°}^T, …, V_{l,−15°}^T]^T
where the superscript T denotes matrix transposition, V_{l,θ} denotes the column vector formed by arranging the real matrix at each scale in sequence along the column direction, l denotes the decomposition level of the DT_CWT, and θ denotes the direction parameter of the DT_CWT.
5. The intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction according to claim 2, characterized in that in step S32 applying the DCT to the lip feature vector to form the final lip feature vector and perform feature classification comprises the following steps:
S321: apply the following dimensionality-reduction calculation to the lip feature vector:
Y = AX,
where X denotes the N-dimensional feature vector, Y denotes the low-dimensional feature of dimension M, and A denotes a linear transformation matrix;
S322: select the DCT characteristic coefficients that satisfy a preset condition, the DCT characteristic coefficients being computed by the following formula:
x(u, v) = a(u)·a(v)·Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y)·cos[(2x+1)uπ/(2M)]·cos[(2y+1)vπ/(2N)]
where x(u, v) is a DCT characteristic coefficient; u = 0, 1, 2, …, M−1; v = 0, 1, 2, …, N−1; f(x, y) denotes an image of size M × N; and a(u), a(v) are defined respectively as:
a(u) = √(1/M) for u = 0, and a(u) = √(2/M) for u = 1, 2, …, M−1;
a(v) = √(1/N) for v = 0, and a(v) = √(2/N) for v = 1, 2, …, N−1;
S323: construct the lip feature vector with the Zig-Zag method in the following manner:
y = [x_0^1, x_1^1, …, x_{K−1}^1, x_0^2, x_1^2, …, x_{K−1}^2, …, x_0^9, x_1^9, …, x_{K−1}^9]^T,
where K denotes the number of characteristic coefficients selected by Zig-Zag in each subimage, and x_n^m denotes the n-th characteristic coefficient of the m-th subimage.
6. The intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction according to claim 1, characterized in that in step S1 a camera is adopted to collect the image containing the human face.
7. The intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction according to claim 1, characterized in that the image preprocessing, lip feature vector extraction and lip recognition are performed by a notebook computer or single-chip microcomputer serving as the host computer.
8. The intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction according to claim 7, characterized in that the intelligent wheelchair serves as the slave machine controlled by the host computer.
CN201310396167.XA 2013-09-03 2013-09-03 Intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction Active CN103425987B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310396167.XA CN103425987B (en) 2013-09-03 2013-09-03 Intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310396167.XA CN103425987B (en) 2013-09-03 2013-09-03 Intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction

Publications (2)

Publication Number Publication Date
CN103425987A true CN103425987A (en) 2013-12-04
CN103425987B CN103425987B (en) 2016-09-28

Family

ID=49650697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310396167.XA Active CN103425987B (en) 2013-09-03 2013-09-03 Intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction

Country Status (1)

Country Link
CN (1) CN103425987B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104331160A (en) * 2014-10-30 2015-02-04 重庆邮电大学 Lip state recognition-based intelligent wheelchair human-computer interaction system and method
CN106203235A (en) * 2015-04-30 2016-12-07 腾讯科技(深圳)有限公司 Live body discrimination method and device
CN109492692A (en) * 2018-11-07 2019-03-19 北京知道创宇信息技术有限公司 A kind of webpage back door detection method, device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102048621A (en) * 2010-12-31 2011-05-11 重庆邮电大学 Human-computer interaction system and method of intelligent wheelchair based on head posture
CN102319155A (en) * 2011-05-30 2012-01-18 重庆邮电大学 Method for controlling intelligent wheelchair based on lip detecting and tracking

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102048621A (en) * 2010-12-31 2011-05-11 重庆邮电大学 Human-computer interaction system and method of intelligent wheelchair based on head posture
CN102319155A (en) * 2011-05-30 2012-01-18 重庆邮电大学 Method for controlling intelligent wheelchair based on lip detecting and tracking

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张毅 et al.: "Intelligent wheelchair man-machine interaction based on lip shape", Control Engineering of China *
梁亚玲, 杜明辉: "Lip feature extraction method based on DT-CWT and PCA", Video Application & Engineering *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104331160A (en) * 2014-10-30 2015-02-04 重庆邮电大学 Lip state recognition-based intelligent wheelchair human-computer interaction system and method
CN106203235A (en) * 2015-04-30 2016-12-07 腾讯科技(深圳)有限公司 Live body discrimination method and device
CN109492692A (en) * 2018-11-07 2019-03-19 北京知道创宇信息技术有限公司 A kind of webpage back door detection method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN103425987B (en) 2016-09-28

Similar Documents

Publication Publication Date Title
Karami et al. Persian sign language (PSL) recognition using wavelet transform and neural networks
CN105708587B (en) A kind of the lower limb exoskeleton training method and system of the triggering of Mental imagery pattern brain-computer interface
CN103886215A (en) Walking ability calculating method and device based on muscle collaboration
CN103735262B (en) Dual-tree complex wavelet and common spatial pattern combined electroencephalogram characteristic extraction method
CN107248150A (en) A kind of Multiscale image fusion methods extracted based on Steerable filter marking area
CN106204449A (en) A kind of single image super resolution ratio reconstruction method based on symmetrical degree of depth network
CN111428583B (en) Visual compensation method based on neural network and touch lattice
CN105739702A (en) Multi-posture fingertip tracking method for natural man-machine interaction
CN109993131B (en) Design intention distinguishing system and method based on multi-mode signal fusion
CN103854262A (en) Medical image noise reduction method based on structure clustering and sparse dictionary learning
CN110377049B (en) Brain-computer interface-based unmanned aerial vehicle cluster formation reconfiguration control method
CN106886986A (en) Image interfusion method based on the study of self adaptation group structure sparse dictionary
CN107748798A (en) A kind of hand-drawing image search method based on multilayer visual expression and depth network
CN103425987A (en) Intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction
CN105511600A (en) Multi-media man-machine interaction platform based on mixed reality
CN109299303A (en) Freehand sketch retrieval method based on deformable convolution Yu depth network
CN109558805A (en) Human bodys' response method based on multilayer depth characteristic
CN107144818A (en) Binaural sound sources localization method based on two-way ears matched filter Weighted Fusion
Juan Gesture recognition and information recommendation based on machine learning and virtual reality in distance education
CN104473635B (en) Right-hand man's Mental imagery EEG feature extraction method of hybrid wavelet and common space pattern
CN109408655A (en) The freehand sketch retrieval method of incorporate voids convolution and multiple dimensioned sensing network
CN105808757A (en) Chinese herbal medicine plant picture retrieval method based on multi-feature fusion BOW model
CN105929947A (en) Scene situation perception based man-machine interaction method
CN108958620A (en) A kind of dummy keyboard design method based on forearm surface myoelectric
CN108648174A (en) A kind of fusion method of multilayer images and system based on Autofocus Technology

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant