CN112232144A - Personnel overboard detection and identification method based on improved residual error neural network - Google Patents


Info

Publication number
CN112232144A
CN112232144A (application CN202011035521.2A)
Authority
CN
China
Prior art keywords
neural network
sample data
time
water
frequency characteristic
Prior art date
Legal status
Pending
Application number
CN202011035521.2A
Other languages
Chinese (zh)
Inventor
姜喆
王天星
段一琛
杨舸
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN202011035521.2A
Publication of CN112232144A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a personnel overboard (man-overboard) detection and identification method based on an improved residual neural network. On the basis of the ResNet34 residual neural network, an SE (squeeze-and-excitation) module is added to each residual block to obtain an improved residual neural network. Collected audio data of man-overboard and non-man-overboard events are then processed into a two-class feature-map data set and used to train the improved residual neural network, yielding a model with high detection and identification accuracy. Finally, audio data collected in real time are converted into a time-frequency feature map and input to the trained model, which returns a real-time recognition result. The model unifies the detection and identification stages, replacing most of the processing pipeline with a single neural network while achieving higher accuracy.

Description

Personnel overboard detection and identification method based on improved residual error neural network
Technical Field
The invention belongs to the field of signal processing, and particularly relates to a man-overboard detection and identification method.
Background
News of people accidentally falling into lakes, reservoirs and similar waters is sadly common. According to a World Health Organization report, about 372,000 people die from drowning worldwide every year, an average of 42 every hour, and the victims include rescuers as well as the people who fell in. The window for rescue after a person falls into the water is only about five minutes, and when such an event happens suddenly, relying on human observers means the victim often cannot be spotted in time, leading to many casualties. In severe weather, moreover, visibility is low and a drowning event is difficult to detect by sight alone, which motivates introducing an acoustic detection method for the underwater environment; however, the complexity and variability of the underwater environment make acoustic detection and identification of drowning events very difficult.
In conventional underwater acoustic target detection and identification, detection and identification are performed in two separate stages; the processing pipeline is complex and both stages lose some accuracy, so the final recognition performance is unsatisfactory. Existing drowning detection and identification methods therefore cannot quickly and accurately determine whether a person has fallen into the water.
Disclosure of Invention
To overcome these shortcomings of the prior art, the invention provides a personnel overboard detection and identification method based on an improved residual neural network. On the basis of the ResNet34 residual neural network, an SE (squeeze-and-excitation) module is added to each residual block to obtain an improved residual neural network. Collected audio data of man-overboard and non-man-overboard events are then processed into a two-class feature-map data set and used to train the improved residual neural network, yielding a model with high detection and identification accuracy. Finally, audio data collected in real time are converted into a time-frequency feature map and input to the trained model, which returns a real-time recognition result. The model unifies the detection and identification stages, replacing most of the processing pipeline with a single neural network while achieving higher accuracy.
The technical solution adopted by the invention comprises the following steps:
Step 1: placing hydrophones in the water and collecting audio signals of the surrounding environment; dividing the collected audio signals into 5 cases, namely a person falling into the water, a person falling into the water and struggling, small debris falling into the water, large debris falling into the water, and nothing falling into the water; taking the audio signals of the two cases involving a person (falling into the water, and falling into the water and struggling) as positive sample data, and taking the audio signals of the three remaining cases (small debris, large debris, and no falling object) as negative sample data;
Step 2: performing sliding-window slicing on each audio signal of the positive and negative sample data, then performing a short-time Fourier transform to obtain a time-frequency feature map of each audio signal; resizing each time-frequency feature map to l1 × l2 and normalizing its pixel values; labeling all processed time-frequency feature maps, where maps corresponding to positive sample data are labeled 0 and maps corresponding to negative sample data are labeled 1; the labeled maps of the positive sample data form the positive sample data set, and the labeled maps of the negative sample data form the negative sample data set;
Step 3: randomly selecting a% of the time-frequency feature maps in the positive sample data set as the positive training set and using the remainder as the positive test set, where 50 < a < 100; randomly selecting b% of the maps in the negative sample data set as the negative training set and using the remainder as the negative test set, where 50 < b < 100;
merging the positive and negative training sets and randomly shuffling their order to form the overall training set; merging the positive and negative test sets to form the overall test set;
Step 4: constructing the improved residual neural network model:
Step 4-1: constructing a residual neural network model with 5 stages on the basis of ResNet34, where stage 1 consists of 2 convolutional layers and 2 batch-normalization layers and stages 2 to 5 consist of 3, 4, 6 and 3 residual blocks respectively; adding an SE module to each residual block of the improved model, the SE module consisting of p global average pooling layers and q fully connected layers;
Step 4-2: defining the loss function:

loss = -α_t (1 - p_t)^γ log(p_t)

where p_t is the probability the improved residual neural network model assigns to the sample's true class (positive or negative), α_t ∈ (0, 1) is a weighting coefficient, and γ ∈ (0, 1) is a modulation coefficient;
Step 5: training the improved residual neural network model constructed in step 4 on the overall training set, using the loss function defined in step 4-2 as the objective function and the Adam algorithm as the optimizer, for B rounds in total; testing the recognition accuracy of the model obtained after each round on the overall test set, and saving the model with the highest accuracy over the B rounds as the optimal model;
drawing a confusion matrix with the optimal model, and calculating the precision and recall of the optimal model on the overall test set;
Step 6: using the optimal model trained in step 5 as the final detection and identification model; performing sliding-window slicing on the audio signal acquired by the hydrophone in real time, then performing a short-time Fourier transform to obtain its time-frequency feature map; and inputting the time-frequency feature map into the final detection and identification model, which outputs whether a person has fallen into the water.
Preferably, l1 = 224 and l2 = 224.
Preferably, a = 70 and b = 70.
Preferably, p = 1 and q = 2.
Preferably, B = 100.
The invention has the following beneficial effects: the proposed man-overboard detection and identification method based on an improved residual neural network unifies the detection and identification stages, replaces most of the processing pipeline with a single neural network, and achieves higher accuracy. It also substantially improves precision and recall, reducing both the waste of manpower and material caused by low precision and the drowning casualties caused by low recall.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a diagram of the residual block improved by the method of the present invention.
FIG. 3 is a final confusion matrix map of the optimal model obtained by the present invention.
Fig. 4 is a training loss curve of an embodiment of the present invention.
FIG. 5 is a training accuracy curve according to an embodiment of the present invention.
FIG. 6 is a test accuracy curve according to an embodiment of the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
As shown in FIG. 1, the present invention provides a man-overboard detection and identification method based on an improved residual neural network, which comprises the following steps:
Step 1: placing hydrophones in the water and collecting audio signals of the surrounding environment; dividing the collected audio signals into 5 cases, namely a person falling into the water, a person falling into the water and struggling, small debris falling into the water, large debris falling into the water, and nothing falling into the water; taking the audio signals of the two cases involving a person (falling into the water, and falling into the water and struggling) as positive sample data, and taking the audio signals of the three remaining cases (small debris, large debris, and no falling object) as negative sample data;
Step 2: performing sliding-window slicing on each audio signal of the positive and negative sample data, then performing a short-time Fourier transform to obtain a time-frequency feature map of each audio signal; resizing each time-frequency feature map to 224 × 224 and normalizing its pixel values; labeling all processed time-frequency feature maps, where maps corresponding to positive sample data are labeled 0 and maps corresponding to negative sample data are labeled 1; the labeled maps of the positive sample data form the positive sample data set, and the labeled maps of the negative sample data form the negative sample data set;
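The slicing and time-frequency preprocessing of this step can be sketched in Python with NumPy; the segment length, FFT size, hop sizes, and the nearest-neighbour resize below are illustrative assumptions, since the patent does not specify them:

```python
import numpy as np

def slide(signal, win, hop):
    """Sliding-window slicing of a long recording into fixed-length segments."""
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]

def stft_magnitude(x, n_fft=512, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * window for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T   # (freq bins, time frames)

def to_feature_map(x, size=224):
    """STFT -> resize to size x size -> normalize pixel values to [0, 1]."""
    spec = stft_magnitude(x)
    rows = (np.arange(size) * spec.shape[0] / size).astype(int)
    cols = (np.arange(size) * spec.shape[1] / size).astype(int)
    resized = spec[np.ix_(rows, cols)]             # nearest-neighbour resize
    lo, hi = resized.min(), resized.max()
    return (resized - lo) / (hi - lo + 1e-12)
```

Each segment returned by `slide` becomes one 224 × 224 map, labeled 0 (positive) or 1 (negative) according to its source recording.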
Step 3: randomly selecting 70% of the time-frequency feature maps in the positive sample data set as the positive training set and using the remainder as the positive test set; randomly selecting 70% of the maps in the negative sample data set as the negative training set and using the remainder as the negative test set;
merging the positive and negative training sets and randomly shuffling their order to form the overall training set; merging the positive and negative test sets to form the overall test set;
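A minimal sketch of this per-class split and merge (the 70/30 ratio follows the preferred embodiment; the fixed seed is only for reproducibility):

```python
import numpy as np

def split_and_merge(pos, neg, a=0.7, b=0.7, seed=0):
    """Split each class a/b fractions into train/test, then merge and shuffle
    the two training halves, as step 3 requires."""
    rng = np.random.default_rng(seed)

    def split(samples, frac):
        idx = rng.permutation(len(samples))
        cut = int(len(samples) * frac)
        return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]

    pos_tr, pos_te = split(pos, a)
    neg_tr, neg_te = split(neg, b)
    train = pos_tr + neg_tr
    rng.shuffle(train)          # random order for the overall training set
    return train, pos_te + neg_te
```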
Step 4: constructing the improved residual neural network model:
Step 4-1: constructing a residual neural network model with 5 stages on the basis of ResNet34, where stage 1 consists of 2 convolutional layers and 2 batch-normalization layers and stages 2 to 5 consist of 3, 4, 6 and 3 residual blocks respectively; adding an SE module to each residual block of the improved model, the SE module consisting of 1 global average pooling layer and 2 fully connected layers;
the SE module is used for adaptively recalibrating the channel-type feature response, and can learn to use global information to selectively emphasize information features and suppress less useful features, so that the obvious performance improvement is generated on the existing network model at the cost of increasing tiny computing cost;
the structure of the residual block added to the SE module is shown in FIG. 2;
Step 4-2: defining the loss function using the focal loss:

loss = -α_t (1 - p_t)^γ log(p_t)

where p_t is the probability the improved residual neural network model assigns to the sample's true class (positive or negative); α_t ∈ (0, 1) is a weighting coefficient that reduces the contribution to the total loss of the class with an excessive number of samples; and γ ∈ (0, 1) is a modulation coefficient that reduces the contribution of easily classified samples to the total loss;
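A NumPy sketch of this focal loss; the particular α_t and γ values below are illustrative, since the patent only constrains both to (0, 1):

```python
import numpy as np

def focal_loss(p, y, alpha_t=0.25, gamma=0.5):
    """loss = -alpha_t * (1 - p_t)**gamma * log(p_t), where p_t is the
    predicted probability of the true class (p for y = 1, 1 - p for y = 0)."""
    p = np.clip(p, 1e-7, 1 - 1e-7)        # numerical safety
    p_t = np.where(y == 1, p, 1 - p)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))
```

With γ = 0 and α_t = 1 the expression reduces to ordinary cross-entropy; a larger γ further shrinks the contribution of samples the model already classifies confidently.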
Step 5: training the improved residual neural network model constructed in step 4 on the overall training set, using the loss function defined in step 4-2 as the objective function and the Adam algorithm as the optimizer, for 100 rounds in total; testing the recognition accuracy of the model obtained after each round on the overall test set, and saving the model with the highest accuracy over the 100 rounds as the optimal model;
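The keep-the-best-round selection of step 5 can be sketched independently of any particular framework; `train_step` and `evaluate` below are hypothetical stand-ins for one Adam epoch over the overall training set and an accuracy measurement on the overall test set:

```python
def train_select_best(model, train_step, evaluate, epochs=100):
    """Train for `epochs` rounds and keep the model state with the highest
    test accuracy, as step 5 prescribes."""
    best_acc, best_state = -1.0, None
    for _ in range(epochs):
        model = train_step(model)          # one optimization round
        acc = evaluate(model)              # accuracy on the overall test set
        if acc > best_acc:                 # checkpoint only on improvement
            best_acc, best_state = acc, model
    return best_state, best_acc
```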
drawing a confusion matrix with the optimal model, and calculating the precision and recall of the optimal model on the overall test set, as shown in FIG. 3; the three visualization curves generated during training and testing are shown in FIGS. 4, 5 and 6.
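Precision and recall as used here can be computed from the confusion-matrix counts; note that step 2 assigns label 0 to the positive (man-overboard) class, hence `positive=0` below:

```python
import numpy as np

def precision_recall(y_true, y_pred, positive=0):
    """Precision and recall for the positive class from confusion-matrix counts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return float(precision), float(recall)
```

Low precision wastes rescue effort on false alarms; low recall means missed drownings, which is why both are reported.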
Step 6: using the optimal model trained in step 5 as the final detection and identification model; performing sliding-window slicing on the audio signal acquired by the hydrophone in real time, then performing a short-time Fourier transform to obtain its time-frequency feature map; and inputting the time-frequency feature map into the final detection and identification model, which outputs whether a person has fallen into the water.
The invention integrates the drowning detection and identification process: training on data collected in the underwater acoustic environment yields an optimal detection and identification neural network model. In practical use, the collected signal waveform is preprocessed into a time-frequency feature map and fed into the trained model, which quickly outputs an accurate recognition result.
As shown in Table 1, compared with a conventional detector followed by a support-vector-machine classifier, the improved residual neural network model achieves higher accuracy with a simpler data-processing pipeline; compared with a five-layer convolutional neural network, it has stronger feature-extraction capability and therefore higher accuracy. In the data of Table 1, the ResNet50 and SE-ResNeXt50 (32 x 4d) models score lower than ResNet34 because their complexity is too high for this data set: they overfit the training set and consequently perform poorly on the test set, which indicates that overly complex models are unsuitable here. The comparison also shows that adding an SE module to the original ResNet34 further improves recognition accuracy. The method likewise improves precision and recall substantially, reducing the waste of manpower and material caused by low precision and the drowning casualties caused by low recall, which demonstrates the effectiveness and reliability of the improved residual neural network for drowning detection and identification.
TABLE 1 recognition accuracy of six models

Claims (5)

1. A personnel overboard detection and identification method based on an improved residual neural network, characterized by comprising the following steps:
Step 1: placing hydrophones in the water and collecting audio signals of the surrounding environment; dividing the collected audio signals into 5 cases, namely a person falling into the water, a person falling into the water and struggling, small debris falling into the water, large debris falling into the water, and nothing falling into the water; taking the audio signals of the two cases involving a person (falling into the water, and falling into the water and struggling) as positive sample data, and taking the audio signals of the three remaining cases (small debris, large debris, and no falling object) as negative sample data;
Step 2: performing sliding-window slicing on each audio signal of the positive and negative sample data, then performing a short-time Fourier transform to obtain a time-frequency feature map of each audio signal; resizing each time-frequency feature map to l1 × l2 and normalizing its pixel values; labeling all processed time-frequency feature maps, where maps corresponding to positive sample data are labeled 0 and maps corresponding to negative sample data are labeled 1; the labeled maps of the positive sample data form the positive sample data set, and the labeled maps of the negative sample data form the negative sample data set;
Step 3: randomly selecting a% of the time-frequency feature maps in the positive sample data set as the positive training set and using the remainder as the positive test set, where 50 < a < 100; randomly selecting b% of the maps in the negative sample data set as the negative training set and using the remainder as the negative test set, where 50 < b < 100;
merging the positive and negative training sets and randomly shuffling their order to form the overall training set; merging the positive and negative test sets to form the overall test set;
Step 4: constructing the improved residual neural network model:
Step 4-1: constructing a residual neural network model with 5 stages on the basis of ResNet34, where stage 1 consists of 2 convolutional layers and 2 batch-normalization layers and stages 2 to 5 consist of 3, 4, 6 and 3 residual blocks respectively; adding an SE module to each residual block of the improved model, the SE module consisting of p global average pooling layers and q fully connected layers;
Step 4-2: defining the loss function:

loss = -α_t (1 - p_t)^γ log(p_t)

where p_t is the probability the improved residual neural network model assigns to the sample's true class (positive or negative), α_t ∈ (0, 1) is a weighting coefficient, and γ ∈ (0, 1) is a modulation coefficient;
Step 5: training the improved residual neural network model constructed in step 4 on the overall training set, using the loss function defined in step 4-2 as the objective function and the Adam algorithm as the optimizer, for B rounds in total; testing the recognition accuracy of the model obtained after each round on the overall test set, and saving the model with the highest accuracy over the B rounds as the optimal model;
drawing a confusion matrix with the optimal model, and calculating the precision and recall of the optimal model on the overall test set;
Step 6: using the optimal model trained in step 5 as the final detection and identification model; performing sliding-window slicing on the audio signal acquired by the hydrophone in real time, then performing a short-time Fourier transform to obtain its time-frequency feature map; and inputting the time-frequency feature map into the final detection and identification model, which outputs whether a person has fallen into the water.
2. The personnel overboard detection and identification method based on an improved residual neural network of claim 1, wherein l1 = 224 and l2 = 224.
3. The personnel overboard detection and identification method based on an improved residual neural network of claim 1, wherein a = 70 and b = 70.
4. The personnel overboard detection and identification method based on an improved residual neural network of claim 1, wherein p = 1 and q = 2.
5. The personnel overboard detection and identification method based on an improved residual neural network of claim 1, wherein B = 100.
CN202011035521.2A 2020-09-27 2020-09-27 Personnel overboard detection and identification method based on improved residual error neural network Pending CN112232144A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011035521.2A CN112232144A (en) 2020-09-27 2020-09-27 Personnel overboard detection and identification method based on improved residual error neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011035521.2A CN112232144A (en) 2020-09-27 2020-09-27 Personnel overboard detection and identification method based on improved residual error neural network

Publications (1)

Publication Number Publication Date
CN112232144A (en) 2021-01-15

Family

ID=74119359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011035521.2A Pending CN112232144A (en) 2020-09-27 2020-09-27 Personnel overboard detection and identification method based on improved residual error neural network

Country Status (1)

Country Link
CN (1) CN112232144A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112599123A (en) * 2021-03-01 2021-04-02 珠海亿智电子科技有限公司 Lightweight speech keyword recognition network, method, device and storage medium
CN114359373A (en) * 2022-01-10 2022-04-15 杭州巨岩欣成科技有限公司 Swimming pool drowning prevention target behavior identification method and device, computer equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
GB2502982A (en) * 2012-06-12 2013-12-18 Jeremy Ross Nedwell Swimming pool entry alarm and swimmer inactivity alarm
CN111325143A (en) * 2020-02-18 2020-06-23 西北工业大学 Underwater target identification method under unbalanced data set condition


Non-Patent Citations (1)

Title
JIE HU et al.: "Squeeze-and-Excitation Networks", IEEE Transactions on Pattern Analysis and Machine Intelligence *


Similar Documents

Publication Publication Date Title
Scheifele et al. Indication of a Lombard vocal response in the St. Lawrence River beluga
CN110245608B (en) Underwater target identification method based on half tensor product neural network
CN108648748B (en) Acoustic event detection method under hospital noise environment
CN106653032B (en) Based on the animal sounds detection method of multiband Energy distribution under low signal-to-noise ratio environment
CN111680706A (en) Double-channel output contour detection method based on coding and decoding structure
CN111179273A (en) Method and system for automatically segmenting leucocyte nucleoplasm based on deep learning
CN112232144A (en) Personnel overboard detection and identification method based on improved residual error neural network
CN108680245A (en) Whale globefish class Click classes are called and traditional Sonar Signal sorting technique and device
CN114155879B (en) Abnormal sound detection method for compensating abnormal perception and stability by using time-frequency fusion
CN113191178B (en) Underwater sound target identification method based on auditory perception feature deep learning
CN111986699B (en) Sound event detection method based on full convolution network
CN115188387B (en) Effective marine mammal sound automatic detection and classification method
CN115798516B (en) Migratable end-to-end acoustic signal diagnosis method and system
CN116386649A (en) Cloud-edge-collaboration-based field bird monitoring system and method
CN113758709A (en) Rolling bearing fault diagnosis method and system combining edge calculation and deep learning
Fristrup et al. Characterizing acoustic features of marine animal sounds
Xie et al. Acoustic feature extraction using perceptual wavelet packet decomposition for frog call classification
CN116884416A (en) Wild animal audio acquisition and detection system, method, storage medium and electronic equipment
CN107886049B (en) Visibility recognition early warning method based on camera probe
CN110322894B (en) Sound-based oscillogram generation and panda detection method
CN114581705A (en) Fruit ripening detection method and system based on YOLOv4 model and convolutional neural network
CN112235727B (en) Personnel flow monitoring and analyzing method and system based on MAC data
CN114882906A (en) Novel environmental noise identification method and system
CN114386572A (en) Motor multi-signal deep learning detection method
CN113571050A (en) Voice depression state identification method based on Attention and Bi-LSTM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210115)