CN111476058B - Gesture recognition method based on millimeter wave radar

Gesture recognition method based on millimeter wave radar

Info

Publication number
CN111476058B
CN111476058B · CN201910063997.8A
Authority
CN
China
Prior art keywords
gesture
layer
track
recognition
millimeter wave
Prior art date
Legal status
Active
Application number
CN201910063997.8A
Other languages
Chinese (zh)
Other versions
CN111476058A (en)
Inventor
吴永乐
郑洪涛
黎淑兰
王卫民
刘元安
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN201910063997.8A
Publication of CN111476058A
Application granted
Publication of CN111476058B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods


Abstract

The invention discloses a gesture recognition method based on millimeter wave radar. The method comprises: constructing a convolutional neural network model; acquiring trajectory graphs of multiple gestures as a training set F and training the convolutional neural network model on the training set F to obtain an optimized recognition model, where a gesture trajectory graph is the movement track, in range-Doppler coordinates, of the moving target corresponding to the maximum peak; and inputting the trajectory graph of the gesture to be recognized into the optimized recognition model to recognize its gesture type. In the gesture recognition method provided by embodiments of the invention, a convolutional neural network model is trained on trajectory graphs of multiple gestures to obtain an optimized recognition model, and the trajectory graph of the gesture to be recognized is input into that model, so that the gesture type can be obtained quickly and accurately. The method is simple, involves a small amount of data processing, and requires little computation.

Description

Gesture recognition method based on millimeter wave radar
Technical Field
The invention relates to the technical field of gesture recognition, and in particular to a gesture recognition method based on millimeter wave radar.
Background
Gestures are generally recognized by processing information collected by a camera, so as to classify and recognize different gestures. Gesture recognition has wide applications, such as remotely turning on a switch, operating miniature electronic devices, and automatic sign-language translation, and can greatly improve the convenience of daily life. However, recognizing gesture motions with a camera has the following drawbacks:
(1) A camera is easily affected by lighting, which degrades gesture recognition; typically, when the illumination intensity is halved, the gesture recognition accuracy drops by about one third.
(2) A camera is limited by detection distance, and the recognition effect is poor at long range. Moreover, when a person is farther from the sensor, a higher camera resolution is usually required, which leads to excessive data volume and cost.
(3) Camera-based gesture recognition algorithms are complex, with a large data-processing load, high power consumption, and high demands on computing resources, which makes them inconvenient to integrate into small devices.
(4) Once a camera is networked, it can be attacked by malicious actors, leading to privacy leaks.
Disclosure of Invention
The invention aims to provide a gesture recognition method based on millimeter wave radar. In this method, a convolutional neural network model is trained on trajectory graphs of multiple gestures to obtain an optimized recognition model, and the trajectory graph of the gesture to be recognized is input into the optimized recognition model, so that the gesture type can be obtained quickly and accurately. The method is simple, involves a small amount of data processing, and requires little computation.
To solve the above problems, a first aspect of the present invention provides a gesture recognition method based on millimeter wave radar, the method comprising: constructing a convolutional neural network model; acquiring trajectory graphs of multiple gestures as a training set F and training the convolutional neural network model on the training set F to obtain an optimized recognition model; and inputting the trajectory graph of the gesture to be recognized into the optimized recognition model to recognize its gesture type.
Further, the gesture trajectory graph is acquired as follows: acquiring echo data generated by the millimeter wave radar scanning a gesture; obtaining a plurality of RD (range-Doppler) images from the echo data; searching for the spectral peak of each RD image and finding its maximum peak, thereby obtaining a plurality of maximum peak points; and connecting the maximum peak points in sequence in the RD coordinate system to obtain the trajectory graph of the gesture.
Further, the structure of the convolutional neural network model sequentially comprises: an input layer, a first convolution layer, a first excitation layer, a first pooling layer, a second convolution layer, a second excitation layer, a second pooling layer, a first fully connected layer, a second fully connected layer, and an output layer.
Further, the step of recognizing the gesture type of the gesture to be recognized comprises: based on the optimized recognition model, obtaining the probability that the trajectory graph of the gesture to be recognized belongs to each gesture type in the training set F, the probabilities over all gesture types summing to 1; the gesture type whose probability exceeds 95% is taken as the type of the gesture to be recognized.
Further, the gestures include one or more of a back-and-forth movement, a side-to-side movement, a button press, and a palm flip.
In a second aspect of the present invention, there is also provided a gesture recognition system based on millimeter wave radar, the system comprising: a modeling module for constructing a convolutional neural network model; a training module for acquiring trajectory graphs of multiple gestures as a training set F and training the convolutional neural network model on the training set F to obtain an optimized recognition model; and a gesture recognition module that inputs the trajectory graph of the gesture to be recognized into the optimized recognition model to recognize its gesture type.
Further, the training module acquires the gesture trajectory graph as follows: acquiring echo data generated by the millimeter wave radar scanning a gesture; obtaining a plurality of RD images from the echo data; searching for the spectral peak of each RD image and finding its maximum peak, thereby obtaining a plurality of maximum peak points; and connecting the maximum peak points in sequence in the RD coordinate system to obtain the trajectory graph of the gesture.
Further, the structure of the convolutional neural network model sequentially comprises: an input layer, a first convolution layer, a first excitation layer, a first pooling layer, a second convolution layer, a second excitation layer, a second pooling layer, a first fully connected layer, a second fully connected layer, and an output layer.
Further, the gesture recognition module recognizes the gesture type by obtaining, based on the optimized recognition model, the probability that the gesture to be recognized belongs to each gesture type in the training set F, the probabilities over all gesture types summing to 1; the gesture type whose probability exceeds 95% is taken as the type of the gesture to be recognized.
Further, the gestures include one or more of a back-and-forth movement, a side-to-side movement, a button press, and a palm flip.
The technical scheme of the invention has the following beneficial effects:
(1) In the gesture trajectory graph acquisition method provided by the embodiments of the invention, the echo data generated by the millimeter wave radar scan are processed to obtain the gesture trajectory graph. Each point on the graph represents the motion information, over a short period of time, of the moving target corresponding to the maximum peak; this motion information comprises the distance between the moving target and the radar and the radial velocity of the moving target relative to the millimeter wave radar during that period. Compared with the prior art, using the gesture trajectory graph as the feature for gesture recognition greatly reduces the amount of data to be processed and simplifies the computation.
(2) In the gesture recognition method and system based on millimeter wave radar provided by the embodiments of the invention, a convolutional neural network model is trained on trajectory graphs of multiple gestures to obtain an optimized recognition model, and the trajectory graph of the gesture to be recognized is input into the optimized recognition model, so that the gesture type can be obtained quickly and accurately. The method is simple, involves a small amount of data processing, and requires little computation.
Drawings
FIG. 1 is a flow chart of a gesture trajectory graph acquisition method according to a first embodiment of the invention;
FIG. 2 is an RD diagram of an up-and-down movement gesture according to the first embodiment of the invention;
FIG. 3 is a flow chart of a millimeter wave radar-based gesture recognition method according to a second embodiment of the invention;
FIG. 4a is a schematic diagram of a finger button-press gesture;
FIG. 4b is a trajectory graph of a finger button-press gesture;
FIG. 5a is a schematic diagram of a palm flip motion;
FIG. 5b is a trajectory graph of a palm flip motion;
FIG. 6a is a schematic diagram of a left-right movement gesture;
FIG. 6b is a trajectory graph of a left-right movement gesture;
FIG. 7a is a schematic diagram of an up-and-down movement gesture;
FIG. 7b is a trajectory graph of an up-and-down movement gesture;
FIG. 8 is a schematic structural diagram of a gesture recognition system according to a third embodiment of the invention.
Detailed Description
The objects, technical solutions, and advantages of the present invention will become clearer from the following detailed description taken with the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the invention. In addition, descriptions of well-known structures and techniques are omitted below so as not to unnecessarily obscure the invention.
Fig. 1 is a flowchart of a gesture trajectory graph acquisition method according to the first embodiment of the present invention.
As shown in Fig. 1, in an alternative embodiment the method includes steps S102-S108:
Step S102: acquire echo data generated by the millimeter wave radar scanning a gesture, where the echo data are echo baseband data.
Specifically, for the duration of the hand motion, the millimeter wave radar transmits chirp signals toward the target every few milliseconds and collects the echo baseband data of the transmitted signals. For example, if the hand motion lasts 5 seconds, the millimeter wave radar collects 100 frames of data during those 5 seconds, and each frame is collected by the radar in multiple acquisitions. For example, if the millimeter wave radar collects data once every 5 milliseconds, it needs 10 acquisitions to collect one frame.
Optionally, the millimeter wave radar is a linear frequency modulated continuous wave (LFMCW) radar; further optionally, a 77 GHz LFMCW millimeter wave radar is used. The 77 GHz LFMCW millimeter wave radar comprises two transmitting antennas and four receiving antennas; the transmitted signal is a linear frequency modulated continuous wave with a maximum bandwidth of 4 GHz and a theoretical maximum range resolution of 3.75 cm, so that finer finger movements can be detected.
It should be noted that an LFMCW signal is a waveform whose frequency varies periodically with time; within each period the frequency generally increases linearly with time, and this linear sweep is called a chirp. The chirp parameters (such as duration and slope) affect system performance. In the present invention the chirp signal parameters are designed as follows: the bandwidth B is 3.4404 GHz, the chirp period is 778 µs, there are 32 chirps per frame, 50 frames per second, and 176 sample points per chirp. The transmission adopts a dual-antenna time-division multiplexing mode, transmitting 20 frames of chirp signals per second with 64 chirps per frame, the signals being transmitted alternately by antennas A and B.
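As an illustrative aside (not part of the patent text), the standard FMCW relations ΔR = c/(2B) and Δv = λ/(2·N_chirp·T_chirp) show what range and velocity resolution the chirp parameters quoted above imply; the short Python sketch below simply plugs those parameters in.

```python
# Illustrative sketch: resolution implied by the chirp parameters quoted above.
# Uses standard FMCW relations; the formulas are not taken from the patent itself.
C = 3e8            # speed of light, m/s
FC = 77e9          # carrier frequency, Hz
B = 3.4404e9       # sweep bandwidth, Hz
T_CHIRP = 778e-6   # chirp period, s
N_CHIRP = 32       # chirps per frame

range_resolution = C / (2 * B)                               # ~0.0436 m
wavelength = C / FC                                          # ~3.9 mm
velocity_resolution = wavelength / (2 * N_CHIRP * T_CHIRP)   # ~0.078 m/s

print(f"range resolution    ~ {range_resolution * 100:.2f} cm")
print(f"velocity resolution ~ {velocity_resolution:.3f} m/s")
```

With the maximum bandwidth of 4 GHz, the same range formula yields the 3.75 cm figure quoted above.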
Step S104: obtain a plurality of RD images from the echo data.
Specifically, a fast Fourier transform (FFT) is applied to the echo data from each acquisition, yielding one one-dimensional range-dimension matrix per acquisition.
The one-dimensional matrix obtained from the first acquisition is used as the first row of a two-dimensional matrix N, the one-dimensional matrix obtained from the second acquisition as the second row, and so on, giving a two-dimensional matrix N that corresponds to one frame of gesture data collected by the radar.
A Fourier transform is then applied to each column vector of the two-dimensional matrix N to obtain a new two-dimensional matrix M. The matrix M is the RD image (range-Doppler, two-dimensional range-Doppler map) corresponding to one frame of data collected by the radar.
Similarly, the above steps are used to obtain multiple RD images of the same gesture. For example, if a gesture spans 100 frames of data, 100 RD images are obtained here.
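A minimal NumPy sketch of this two-stage FFT processing is given below. It assumes the echo baseband data of one frame have already been arranged as a complex array of shape (n_chirps, n_samples), one chirp (one acquisition) per row; the function name, the absence of windowing, and the fftshift along the Doppler axis are illustrative choices, not details fixed by the patent.

```python
import numpy as np

def frame_to_rd_image(frame: np.ndarray) -> np.ndarray:
    """Turn one frame of echo baseband data into a range-Doppler (RD) image.

    frame: complex array of shape (n_chirps, n_samples); each row is the
           sampled echo of one chirp (one acquisition).
    """
    # Range FFT: one FFT per chirp along fast time -> matrix N (rows = chirps).
    range_matrix = np.fft.fft(frame, axis=1)

    # Doppler FFT: FFT along each column (slow time) -> matrix M, the RD image.
    rd_image = np.fft.fft(range_matrix, axis=0)

    # Center zero Doppler for easier peak search and plotting
    # (a convention, not required by the patent).
    rd_image = np.fft.fftshift(rd_image, axes=0)
    return np.abs(rd_image)

# Example: 32 chirps per frame, 176 samples per chirp (parameters quoted above).
frame = np.random.randn(32, 176) + 1j * np.random.randn(32, 176)
rd = frame_to_rd_image(frame)
print(rd.shape)  # (32, 176): Doppler bins x range bins
```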
Step S106: search for the spectral peak of each RD image to obtain a plurality of maximum peak points.
Specifically, the spectral peak of each RD image is searched to obtain the row coordinate i, the column coordinate j, and the value M_ij corresponding to the hand motion, where M_ij is a peak point of the RD image.
For each of the peak points M_ij found in an RD image, (i, j, M_ij) is taken as a row vector, and the row vectors are stacked into a new matrix O. The maximum of the third column of O is found, and the row vector containing that maximum is denoted V_max; V_max is the maximum peak point in that RD image. In the same way, the maximum peak point of each of the RD images is obtained.
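The sketch below illustrates this selection for one RD image. The patent does not spell out the peak-search criterion, so treating every local maximum above a relative threshold as a candidate peak (the rows of matrix O) is an assumption made here for illustration; V_max is then the candidate with the largest value.

```python
import numpy as np

def max_peak_point(rd_image: np.ndarray, rel_threshold: float = 0.5):
    """Return V_max = (i, j, M_ij) for the strongest candidate peak of an RD image.

    Candidate peaks are interior local maxima above rel_threshold * global maximum;
    this candidate criterion is an illustrative assumption.
    """
    rows, cols = rd_image.shape
    threshold = rel_threshold * rd_image.max()
    candidates = []  # rows of matrix O: (i, j, M_ij)
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            v = rd_image[i, j]
            if v >= threshold and v == rd_image[i - 1:i + 2, j - 1:j + 2].max():
                candidates.append((i, j, v))
    if not candidates:  # fall back to the global maximum if no interior peak is found
        i, j = np.unravel_index(np.argmax(rd_image), rd_image.shape)
        return (i, j, rd_image[i, j])
    O = np.array(candidates)
    # V_max: the row of O whose third column (the peak value) is largest.
    return tuple(O[np.argmax(O[:, 2])])

i_max, j_max, m_max = max_peak_point(np.random.rand(32, 176))
print(i_max, j_max, m_max)
```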
Step S108: connect the maximum peak points in sequence in the RD coordinate system to obtain the trajectory graph of the hand motion. The trajectory graph of the gesture is the movement track, in range-Doppler coordinates, of the moving target corresponding to the maximum peak.
Specifically, the first element of each maximum row vector V_max is taken as the abscissa and the second element as the ordinate, and the point is marked in a two-dimensional coordinate system with an asterisk; the asterisks of adjacent row vectors are then connected, finally yielding an image, which is the gesture trajectory graph.
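A matplotlib sketch of this plotting step follows. For brevity it takes the global maximum of each RD image directly as V_max (which coincides with the matrix-O procedure above whenever the global maximum is among the candidate peaks); the axis labels and styling are illustrative.

```python
import matplotlib.pyplot as plt
import numpy as np

def plot_trajectory(rd_images):
    """Connect the maximum peak points of consecutive RD images into a trajectory graph."""
    xs, ys = [], []
    for rd in rd_images:
        # Maximum peak point of this RD image: row i (Doppler bin), column j (range bin).
        i, j = np.unravel_index(np.argmax(rd), rd.shape)
        xs.append(i)  # first element of V_max as abscissa (radial velocity)
        ys.append(j)  # second element of V_max as ordinate (distance to radar)
    plt.plot(xs, ys, marker='*', linestyle='-')
    plt.xlabel('Doppler bin (radial velocity)')
    plt.ylabel('Range bin (distance to radar)')
    plt.title('Gesture trajectory graph')
    plt.savefig('trajectory.png')

plot_trajectory([np.random.rand(32, 176) for _ in range(100)])
```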
The gesture trajectory graph is a two-dimensional range-Doppler trajectory graph. Each point on the graph represents the motion information, over a short period of time, of the moving target corresponding to the maximum peak; this motion information comprises the distance between the moving target and the radar and the radial velocity of the moving target relative to the millimeter wave radar during that period. Compared with the prior art, using the gesture trajectory graph as the feature for gesture recognition greatly reduces the amount of data to be processed and simplifies the computation.
Fig. 2 is an RD diagram (range-Doppler diagram) of an up-and-down movement gesture according to the first embodiment. The black-and-white cells in the middle represent a moving target; in the present invention the moving target refers to the hand or a part of the hand, and the depth of colour represents the echo intensity of the moving target: the darker the colour, the stronger the echo at that position of the moving target. In the RD diagram, the middle black cell represents the largest peak in the RD diagram; there is only one such peak in this figure. This peak corresponds to the position of the moving target where the echo is strongest. The abscissa indicates the radial velocity, and the ordinate indicates the distance from the radar.
Fig. 3 is a schematic flow chart of a gesture recognition method according to a second embodiment of the present invention.
As shown in fig. 3, the method includes steps S201 to S203:
Step S201, constructing a convolutional neural network model.
Specifically, the structure of the convolutional neural network model sequentially includes: input layer, first convolution layer, first excitation layer, first pooling layer, second convolution layer, second excitation layer, second pooling layer, first fully-connected layer, second fully-connected layer, and output layer.
More specifically, before the neural network is built, the learning rate of the convolutional neural network is configured and the weights of each layer are initialized, so as to adjust the training speed and recognition performance of the network. For example, the initial convolutional neural network parameters are: learning_rate 0.001, beta1 (exponential decay rate of the first-moment estimate) 0.9, beta2 (exponential decay rate of the second-moment estimate) 0.999, and epsilon 10E-8 (a very small number added to prevent division by zero in the implementation). The weights and biases are initialized randomly.
Step S202: acquire trajectory graphs of multiple gestures as a training set F, and train the convolutional neural network model on the training set F to obtain an optimized recognition model. Every gesture trajectory graph in the training set F carries a label of the corresponding gesture; that is, the gestures in the training set F are all known.
Optionally, training the convolutional neural network model includes the following steps (a sketch of such a network follows these steps):
Step 1: since the input layer uses only grayscale information, each gesture trajectory graph in the training set F is set to a 90x90x1 matrix.
Step 2: the input-layer data are filtered successively by the first convolution layer, the first excitation layer, and the first pooling layer. The first convolution layer uses 32 filters with 1 input channel, a 3x3 convolution kernel, a convolution stride of 1, and 'same' padding (the input and output sizes are identical); the first excitation layer uses the ReLU function, and the output after the first convolution and excitation layers is a 90x90x32 matrix. The first pooling layer uses max pooling with a 2x2 filter and a stride of 2, so the output after the first pooling layer is 45x45x32.
Step 3: the output of the first pooling layer is filtered successively by the second convolution layer and the second pooling layer. The second convolution layer is similar to the first except that the number of input channels is 32 and the excitation function is the same; the output after the second convolution and excitation layers is a 45x45x32 matrix, and after the second pooling layer (whose parameters match the first pooling layer) it is 23x23x32.
Step 4: the output of the second pooling layer is flattened into a one-dimensional vector and passed through the first and second fully connected layers in turn, giving a 4x1 output.
Step 5: the error between the output value obtained in steps 1-4 and the target value is computed; when the error is larger than a preset expected value, the error is propagated back through the network, that is, the weights are updated according to the error between the output value and the target value.
The trajectory graphs of all gestures in the training set F are input into the convolutional neural network model and the model is trained, finally yielding an optimized recognition model with accurate parameters.
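The following Keras sketch shows one way such a network and the Adam hyperparameters quoted earlier could be realized: a 90x90x1 input, two convolution/ReLU/max-pooling stages with 32 filters of size 3x3 and 'same' padding, two fully connected layers, and a four-class output. The 128-unit size of the first fully connected layer and the softmax/cross-entropy training objective are assumptions made for illustration; the patent itself fixes only the layer order and the 4x1 output.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes: int = 4) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(90, 90, 1)),                               # input layer: 90x90 grayscale
        layers.Conv2D(32, 3, strides=1, padding='same'),               # first convolution layer
        layers.ReLU(),                                                 # first excitation layer
        layers.MaxPooling2D(pool_size=2, strides=2, padding='same'),   # 90x90x32 -> 45x45x32
        layers.Conv2D(32, 3, strides=1, padding='same'),               # second convolution layer
        layers.ReLU(),                                                 # second excitation layer
        layers.MaxPooling2D(pool_size=2, strides=2, padding='same'),   # 45x45x32 -> 23x23x32
        layers.Flatten(),
        layers.Dense(128, activation='relu'),                          # first fully connected layer (128 units assumed)
        layers.Dense(num_classes, activation='softmax'),               # second fully connected layer -> 4x1 output
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            learning_rate=0.001, beta_1=0.9, beta_2=0.999,
            epsilon=1e-8),  # the "10E-8" figure above, read as 10^-8
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])
    return model

model = build_model()
model.summary()
```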
Step S203: input the trajectory graph of the gesture to be recognized into the optimized recognition model to recognize its gesture type.
Specifically, based on the optimized recognition model, the probability that the gesture to be recognized belongs to each gesture type in the training set F is obtained, the probabilities over all gesture types summing to 1; the gesture type whose probability exceeds 95% is taken as the type of the gesture to be recognized. For example, if there are 4 gesture types in the training set F, the optimized recognition model computes the probability that the gesture to be recognized is gesture A as a%, gesture B as b%, gesture C as c%, and gesture D as d%, where the four probabilities sum to 1. Finally, the optimized recognition model outputs the gesture type whose probability exceeds 95%.
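A small sketch of this decision rule is shown below; it can be used with the model built in the previous sketch. The class-label list is illustrative, and the 'gesture not recognized' fallback follows the behaviour described for the system embodiment further below.

```python
import numpy as np

GESTURE_TYPES = ['up-down', 'left-right', 'palm flip', 'finger button']  # illustrative labels

def classify_trajectory(model, trajectory_image: np.ndarray, threshold: float = 0.95) -> str:
    """Return the gesture type whose probability exceeds the threshold, else report failure."""
    x = trajectory_image.reshape(1, 90, 90, 1).astype('float32')
    probs = model.predict(x, verbose=0)[0]   # four probabilities that sum to 1
    best = int(np.argmax(probs))
    return GESTURE_TYPES[best] if probs[best] > threshold else 'gesture not recognized'

# Usage with the sketch above: classify_trajectory(build_model(), np.random.rand(90, 90))
```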
Optionally, the trajectory graphs of the gestures in the training set F cover four gesture actions: up-and-down movement, left-and-right movement, palm flipping, and pressing a button with a finger.
Schematic diagrams of the four gesture actions and the trajectory graphs obtained with the first embodiment are given below as examples. Fig. 4a is a schematic diagram of the finger button-press gesture and Fig. 4b is its trajectory graph. Fig. 5a is a schematic diagram of the palm flip motion and Fig. 5b is its trajectory graph. Fig. 6a is a schematic diagram of the left-right movement gesture and Fig. 6b is its trajectory graph. Fig. 7a is a schematic diagram of the up-and-down movement gesture and Fig. 7b is its trajectory graph.
In the second embodiment of the present invention, these four gestures were used; 132 gesture trajectory graphs (33 per gesture type) were used for training and 28 (7 per gesture type) for testing, and the gesture types of all tested gestures were recognized correctly. The method provided by the second embodiment of the invention therefore achieves a high recognition rate.
In the gesture recognition method and system based on millimeter wave radar provided by the embodiments of the invention, a convolutional neural network model is trained on trajectory graphs of multiple gestures to obtain an optimized recognition model, and the trajectory graph of the gesture to be recognized is input into the optimized recognition model, so that the gesture type can be obtained quickly and accurately. The method is simple, involves a small amount of data processing, and requires little computation.
Fig. 8 is a schematic structural diagram of a gesture recognition system according to a third embodiment of the present invention.
As shown in Fig. 8, the system includes a modeling module, a training module, and a gesture recognition module.
The modeling module is used to construct the convolutional neural network model. The structure of the convolutional neural network model sequentially comprises: an input layer, a first convolution layer, a first excitation layer, a first pooling layer, a second convolution layer, a second excitation layer, a second pooling layer, a first fully connected layer, a second fully connected layer, and an output layer.
The training module is used to acquire trajectory graphs of multiple gestures as a training set F and to train the convolutional neural network model on the training set F to obtain an optimized recognition model. Optionally, the training process of the convolutional neural network model estimates the weights and biases of all layers with the Adam algorithm using a training set with known classification results; as more training data are fed in, the weights and biases become increasingly accurate.
Optionally, training the convolutional neural network model includes the following steps:
Step 1: since the input layer uses only grayscale information, each gesture trajectory graph in the training set F is set to a 90x90x1 matrix.
Step 2: the input-layer data are filtered successively by the first convolution layer, the first excitation layer, and the first pooling layer. The first convolution layer uses 32 filters with 1 input channel, a 3x3 convolution kernel, a convolution stride of 1, and 'same' padding (the input and output sizes are identical); the first excitation layer uses the ReLU function, and the output after the first convolution and excitation layers is a 90x90x32 matrix. The first pooling layer uses max pooling with a 2x2 filter and a stride of 2, so the output after the first pooling layer is 45x45x32.
Step 3: the output of the first pooling layer is filtered successively by the second convolution layer and the second pooling layer. The second convolution layer is similar to the first except that the number of input channels is 32 and the excitation function is the same; the output after the second convolution and excitation layers is a 45x45x32 matrix, and after the second pooling layer (whose parameters match the first pooling layer) it is 23x23x32.
Step 4: the output of the second pooling layer is flattened into a one-dimensional vector and passed through the first and second fully connected layers in turn, giving a 4x1 output.
Step 5: the error between the output value obtained in steps 1-4 and the target value is computed; when the error is larger than a preset expected value, the error is propagated back through the network, that is, the weights are updated according to the error between the output value and the target value.
The trajectory graphs of all gestures in the training set F are input into the convolutional neural network model and the model is trained, finally yielding an optimized recognition model with accurate parameters.
The gesture recognition module inputs the trajectory graph of the gesture to be recognized into the optimized recognition model to recognize its gesture type.
Recognizing the gesture type comprises obtaining, based on the optimized recognition model, the probability that the gesture to be recognized belongs to each gesture type in the training set F, the probabilities over all gesture types summing to 1; the gesture type whose probability exceeds 95% is taken as the type of the gesture to be recognized.
Optionally, when the convolutional neural network model finds that no gesture type has a probability higher than 95%, the model outputs "gesture not recognized".
Optionally, the trajectory graphs of the gestures in the training set F cover four gesture actions: up-and-down movement, left-and-right movement, palm flipping, and pressing a button with a finger.
In the gesture recognition method and system based on millimeter wave radar provided by the embodiments of the invention, a convolutional neural network model is trained on trajectory graphs of multiple gestures to obtain an optimized recognition model, and the trajectory graph of the gesture to be recognized is input into the optimized recognition model, so that the gesture type can be obtained quickly and accurately. The method is simple, involves a small amount of data processing, and requires little computation.
It should be understood that the above-described embodiments of the present invention merely illustrate or explain the principles of the invention and in no way limit it. Accordingly, any modification, equivalent replacement, improvement, or the like made without departing from the spirit and scope of the present invention shall fall within the scope of the invention. Furthermore, the appended claims are intended to cover all such changes and modifications that fall within the scope and boundaries of the appended claims, or equivalents thereof.

Claims (4)

1. A gesture recognition method based on millimeter wave radar, characterized by comprising the following steps:
constructing a convolutional neural network model;
acquiring trajectory graphs of multiple gestures as a training set F, and training the convolutional neural network model based on the training set F to obtain an optimized recognition model, the gesture trajectory graph being the movement track, in range-Doppler coordinates, of the moving target corresponding to the maximum peak;
inputting the trajectory graph of the gesture to be recognized into the optimized recognition model to recognize the gesture type of the gesture to be recognized;
wherein the gesture trajectory graph is acquired by:
acquiring echo data generated by the millimeter wave radar scanning a gesture;
obtaining a plurality of RD images based on the echo data;
searching for the spectral peak of each RD image and finding the maximum peak of the image, thereby obtaining a plurality of maximum peak points;
connecting the maximum peak points in sequence in the RD coordinate system to obtain the trajectory graph of the gesture;
and wherein the maximum peak points are acquired by:
searching for the spectral peak of each RD image to obtain the row coordinate i, the column coordinate j, and the value M_ij corresponding to the hand motion, M_ij being a peak point of the RD image;
taking (i, j, M_ij) for each of the peak points M_ij in each RD image as a row vector, and stacking the row vectors into a new matrix O;
finding the maximum of the third column of the matrix O and denoting the row vector containing the maximum as V_max, V_max being the maximum peak point in the RD image.
2. The millimeter wave radar-based gesture recognition method according to claim 1, wherein the structure of the convolutional neural network model sequentially comprises: an input layer, a first convolution layer, a first excitation layer, a first pooling layer, a second convolution layer, a second excitation layer, a second pooling layer, a first fully connected layer, a second fully connected layer, and an output layer.
3. The millimeter wave radar-based gesture recognition method according to claim 1, wherein the step of recognizing the gesture type of the gesture to be recognized comprises:
based on the optimized recognition model, obtaining the probability that the gesture to be recognized belongs to each gesture type in the training set F, the probabilities over all gesture types summing to 1;
determining the gesture type whose probability exceeds 95% as the type of the gesture to be recognized.
4. The millimeter wave radar-based gesture recognition method according to any one of claims 1-3, wherein the gesture comprises one or more of a back-and-forth movement, a side-to-side movement, a button press, and a palm flip.
CN201910063997.8A (priority date 2019-01-23, filing date 2019-01-23): Gesture recognition method based on millimeter wave radar. Status: Active. Granted publication: CN111476058B.

Priority Applications (1)

Application Number: CN201910063997.8A · Priority Date: 2019-01-23 · Filing Date: 2019-01-23 · Title: Gesture recognition method based on millimeter wave radar

Applications Claiming Priority (1)

Application Number: CN201910063997.8A · Priority Date: 2019-01-23 · Filing Date: 2019-01-23 · Title: Gesture recognition method based on millimeter wave radar

Publications (2)

Publication Number · Publication Date
CN111476058A · 2020-07-31
CN111476058B · 2024-05-14

Family

ID=71743314

Family Applications (1)

Application Number: CN201910063997.8A · Status: Active · Priority Date: 2019-01-23 · Filing Date: 2019-01-23 · Title: Gesture recognition method based on millimeter wave radar

Country Status (1)

Country Link
CN (1) CN111476058B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112415510B (en) * 2020-11-05 2023-08-04 深圳大学 Dual-station radar gesture recognition method, device, system and storage medium
CN112363156A (en) * 2020-11-12 2021-02-12 苏州矽典微智能科技有限公司 Air gesture recognition method and device and intelligent equipment
US11474232B2 (en) 2021-03-19 2022-10-18 KaiKuTek Inc. Range doppler angle detection method and range doppler angle detection device
CN113267773B (en) * 2021-04-14 2023-02-21 北京航空航天大学 Millimeter wave radar-based accurate detection and accurate positioning method for indoor personnel
TWI756122B (en) * 2021-04-30 2022-02-21 開酷科技股份有限公司 Distance Doppler Radar Angle Sensing Method and Device
CN113762130B (en) * 2021-09-01 2024-02-13 东南大学 Millimeter wave radar gesture detection and recognition method
CN113791411B (en) * 2021-09-07 2024-05-28 北京航空航天大学杭州创新研究院 Millimeter wave radar gesture recognition method and device based on track judgment
CN114236492B (en) * 2022-02-23 2022-06-07 南京一淳科技有限公司 Millimeter wave radar micro gesture recognition method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740823A (en) * 2016-02-01 2016-07-06 北京高科中天技术股份有限公司 Dynamic gesture trace recognition method based on depth convolution neural network
CN107024685A (en) * 2017-04-10 2017-08-08 北京航空航天大学 A kind of gesture identification method based on apart from velocity characteristic
CN108334814A (en) * 2018-01-11 2018-07-27 浙江工业大学 A kind of AR system gesture identification methods based on convolutional neural networks combination user's habituation behavioural analysis
CN109188414A (en) * 2018-09-12 2019-01-11 北京工业大学 A kind of gesture motion detection method based on millimetre-wave radar

Also Published As

Publication number Publication date
CN111476058A (en) 2020-07-31

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant