CN112562739A - Large-scale feeding type broiler respiratory disease sound data analysis system - Google Patents


Info

Publication number
CN112562739A
CN112562739A (application CN202011309454.9A)
Authority
CN
China
Prior art keywords
sound
sound data
data
cloud
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011309454.9A
Other languages
Chinese (zh)
Inventor
杨继帅
王锁成
刘玉庆
王镜霖
张庆
孙静
孙钡蓓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Fanzai Intelligent Technology Co ltd
Original Assignee
Shandong Fanzai Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Fanzai Intelligent Technology Co ltd filed Critical Shandong Fanzai Intelligent Technology Co ltd
Priority to CN202011309454.9A priority Critical patent/CN112562739A/en
Publication of CN112562739A publication Critical patent/CN112562739A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L 25/66 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/48 Other medical applications
    • A61B 5/4803 Speech analysis specially adapted for diagnostic purposes
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/28 Constructional details of speech recognition systems
    • G10L 15/30 Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00 Speaker identification or verification techniques
    • G10L 17/26 Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L 25/30 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Pathology (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Epidemiology (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention belongs to the technical field of electronic information and specifically relates to a sound data analysis system for respiratory diseases of large-scale-fed broiler chickens, comprising: collecting sound data; analyzing the sound data, which is passed in the cloud to the reinforced neural network on the cloud server for analysis, with the analysis result uploaded to the cloud for backup; optimizing the algorithm to refine the sound data analysis result of step two; constructing a positioning model and measuring the position of the sound source area; and collecting and analyzing data over a long period, performing algorithm optimization, and transmitting the analysis result to a local server to realize early warning and alerting of disease onset. The invention enables a machine to replace manual work in monitoring the calls of the chicken flock continuously, 24 hours a day; if abnormal calls are found frequently, the breeding plant is reminded to pay attention and to treat at the early stage of disease occurrence, thereby reducing cost and improving profit.

Description

Large-scale feeding type broiler respiratory disease sound data analysis system
Technical Field
The invention belongs to the technical field of electronic information, and particularly relates to a large-scale feeding type broiler respiratory disease sound data analysis system.
Background
According to estimates by poultry pathology experts at Shandong Agricultural University, the accuracy of manually judging the association between sounds and respiratory diseases is expected to exceed 95%. At present, experienced breeders in large-scale farms can judge whether broilers are sick from the birds' calls, but manual judgment often suffers from delay or omission, so that treatment can no longer begin at an early stage, which increases the medicine and material costs of the farm and raises the mortality rate.
Disclosure of Invention
To solve the problems mentioned in the background, the invention discloses a sound data analysis system for respiratory diseases of large-scale-fed broiler chickens.
The purpose of this design is to let a machine replace manual work in monitoring the calls of the chicken flocks continuously, 24 hours a day; if abnormal calls are found frequently, the breeding plant can be reminded to pay attention and to treat at the early stage of disease occurrence, thereby reducing cost and improving profit. The technical scheme is as follows:
a system for analyzing respiratory disease sound data of mass-feeding type broiler chickens, which comprises,
the method comprises the following steps: the method comprises the steps that sound data are collected, a plurality of sound collection sensors are arranged in a coop, the sound collection sensors transmit the collected sound data to a cloud end through a network, and the sound data are converted into audio data in an OPUS format at the cloud end to be stored;
step two: analyzing the sound data, wherein the sound data is transmitted from the cloud to an enhanced neural network of a cloud server for analysis, and an analysis result is uploaded to a cloud backup;
step three: performing algorithm optimization, namely transmitting the abnormal sound data obtained by analysis in the step two to a local server from a cloud end, performing expert marking on the local server to generate a local label file, and additionally updating the local label file to a cloud end label file in the cloud end server to optimize the sound data analysis result in the step two;
step four: constructing a positioning model, measuring the direction of a sound source area, setting the relative position between a sound collecting sensor and the broiler flock, and fixing a sound receiving model to position the specific position of the broiler chicken which emits abnormal sound;
step five: and (4) collecting and analyzing data for a long time, optimizing the algorithm in the third step, and transmitting the analysis result in the second step to a local server to realize early warning and warning of the onset of disease.
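Step four's positioning of the calling bird can be illustrated with a minimal one-dimensional time-difference-of-arrival sketch. The patent does not disclose the localization math, so everything below (the two-sensor line geometry, the function name, and the speed of sound) is an assumption used only to show the idea behind a fixed sound-reception model:

```python
def locate_1d(sensor_x1, sensor_x2, t1, t2, c=343.0):
    """Estimate a sound source's position on the line between two fixed
    sensors from its arrival times t1, t2 (seconds); c is the speed of
    sound in m/s. The source is assumed to lie between the sensors."""
    dd = c * (t1 - t2)           # path-length difference d1 - d2
    ds = sensor_x2 - sensor_x1   # sensor spacing
    d1 = (ds + dd) / 2.0         # distance from sensor 1 to the source
    return sensor_x1 + d1
```

With more sensors, the same arrival-time differences constrain the source in two or three dimensions, which matches the idea of fixing the sensors' positions relative to the flock.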
Further, the neural network used for analyzing the sound data includes:
an audio decomposition algorithm: a Fourier transform is performed on the audio signal in the time domain, obtaining, within a minimum time window (1/420 second), the spectral representation of the audio signal in terms of frequency and energy-abundance indices (amplitude, a composite index of the Fourier expansion coefficients);
For audio data, the original Fourier transform equation is as follows:

G_f(ω, u) = ∫ f(t) g(t-u) e^(-jωt) dt

For the short-time Fourier transform:

X(ω, u) = ∫ z(t) g(t-u) e^(-jωt) dt

where z(t) is the source signal and g(t-u) is the window function.
Each segment of analyzed audio data produces three graphs, i.e., spectral representations: the spectral representation of the entire audio segment is presented on the FULL SPECTRUM graph; moving forward along the time axis at a specific frequency, spectrum components whose audio abundance is uniform over time are presented in full on the BACKGROUND graph; and when, at a specific time point, a singular point appears in the audio abundance function relative to the otherwise uniformly distributed values before and after that point, its spectral representation appears on the FOREGROUND graph. The technique thus generates a visual image of sound frequency and detects abnormal points for review by experts or chicken-farm staff;
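The BACKGROUND/FOREGROUND split described above can be sketched in pure Python. This is a naive illustration rather than the patent's implementation: the Hann window, the naive DFT, and the median-based test for "singular points" are all our assumptions:

```python
import math

def stft_mag(signal, win=64, hop=32):
    """Magnitude spectrogram via a naive windowed DFT (illustration only).
    Returns frames[t][k]: the abundance of frequency bin k at frame t."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        # Hann window suppresses leakage between neighbouring bins
        chunk = [signal[start + n] * (0.5 - 0.5 * math.cos(2 * math.pi * n / win))
                 for n in range(win)]
        mags = []
        for k in range(win // 2):
            re = sum(chunk[n] * math.cos(2 * math.pi * k * n / win) for n in range(win))
            im = -sum(chunk[n] * math.sin(2 * math.pi * k * n / win) for n in range(win))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames

def split_background_foreground(frames, ratio=3.0):
    """Per frequency bin, the median abundance over time is the BACKGROUND;
    frames whose abundance exceeds ratio x that median are FOREGROUND
    singular points, returned as (frame, bin) pairs."""
    background, foreground = [], []
    for k in range(len(frames[0])):
        col = sorted(f[k] for f in frames)
        med = col[len(col) // 2]
        background.append(med)
        for t, f in enumerate(frames):
            if f[k] > ratio * med and f[k] > 1e-6:
                foreground.append((t, k))
    return background, foreground
```

A steady tone ends up in the background profile, while a short burst at another frequency is flagged as a foreground singular point.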
an analysis algorithm: after the decomposition algorithm converts and presents the audio data, experts discriminate and mark the sound features contained in it (coughing, snorting, sneezing, and vocalizing sounds) according to the audio and the three images; marking produces local label files, which are synchronized to the cloud to form cloud label files used to train the reinforced neural network; the reinforced neural network thereby initially forms parameterized features with recognition capability, which are then optimized into stable recognition features by the algorithm of step three.
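The append-update of local label files into the cloud label file (step three) can be sketched as follows; the label schema here, a segment id plus a feature name, is a hypothetical format, since the patent does not specify one:

```python
def merge_labels(cloud_labels, local_labels):
    """Append expert labels from the local file that the cloud file does not
    yet contain, keyed by (segment, feature); existing entries are kept."""
    known = {(lab["segment"], lab["feature"]) for lab in cloud_labels}
    merged = list(cloud_labels)
    for lab in local_labels:
        key = (lab["segment"], lab["feature"])
        if key not in known:
            merged.append(lab)
            known.add(key)
    return merged
```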
The reinforced neural network introduces a CNN structure, which has strong feature-extraction capability and mainly adopts convolutional layers, pooling layers, activation-function layers, fully connected layers, and an output layer. The convolutional layer computes the following function:

y_(i,j)^l = w^l · x_(i,j)^l + b^l

where w^l is the weight of the l-th layer, b^l is the bias of the l-th layer, and x_(i,j)^l is the input tile at the (i, j) position of the l-th layer. The weights of the feature maps are shared (weight sharing reduces the number of parameters). One of the biggest advantages of the CNN structure is the weight-sharing mechanism: with fewer parameters it significantly reduces computational complexity and also makes training and optimization simpler. Pooling layers are an important module in the CNN architecture; their most important goal is to reduce the size of the feature map by fusing sub-regions using some function, such as averaging or max/min:

y_(i,j,k) = ( (1/|R_ij|) Σ_((m,n)∈R_ij) a_(m,n,k)^p )^(1/p)
where y_(i,j,k) is the output of the pooling operator at position (i, j) of the k-th feature map, and a_(m,n,k) is the feature value at position (m, n) in the pooling region R_ij of the k-th feature map. It is particularly noted that this formula corresponds to average pooling when p = 1 and becomes max pooling as p approaches infinity. Commonly used activation functions are the ReLU and LeakyReLU functions; ReLU is defined as follows:

a_(i,j,k) = max(z_(i,j,k), 0)
To avoid the gradient-vanishing problem, the LeakyReLU function can be chosen instead:

a_(i,j,k) = max(z_(i,j,k), 0) + λ min(z_(i,j,k), 0)

where λ lies in the range (0, 1). LeakyReLU does not force the negative part to zero; instead it allows a small non-zero gradient. The fully connected layer computes the following function:

y^l = f(W^l x^(l-1) + b^l)
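The pooling and activation formulas above can be transcribed directly. The L_p pooling expression interpolates between average pooling (p = 1) and max pooling (p → ∞), and LeakyReLU keeps a small slope λ on the negative side:

```python
def lp_pool(region, p):
    """L_p pooling over one sub-region: (mean of a^p)^(1/p)."""
    return (sum(a ** p for a in region) / len(region)) ** (1.0 / p)

def relu(z):
    """ReLU: max(z, 0)."""
    return max(z, 0.0)

def leaky_relu(z, lam=0.1):
    """LeakyReLU: max(z, 0) + lam * min(z, 0), with lam in (0, 1)."""
    return max(z, 0.0) + lam * min(z, 0.0)
```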
After the classification output is obtained, a softmax loss function can be computed between the predicted value and the true value of the output:

L = -(1/N) Σ_(i=1..N) log( e^(z_(y_i)) / Σ_j e^(z_j) )

and in this way the entire CNN network is optimized.
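The softmax loss for a single sample can be written out as below; the max-shift for numerical stability is our addition, not part of the patent text:

```python
import math

def softmax_xent(logits, label):
    """Softmax cross-entropy for one sample: -log(e^{z_y} / sum_j e^{z_j})."""
    m = max(logits)                          # shift for numerical stability
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[label] / sum(exps))
```

Raising the correct class's logit lowers the loss, which is what optimizing the whole CNN against this loss exploits.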
Further, the system provides corresponding interfaces so that other systems can access its data. A sensor interface is provided: through it, other systems can control the sensors, specifically the number of sensors, and modify sensor settings such as the maximum frequency (8192 Hz) and minimum frequency (0 Hz) of the collected sound signals; they can also query the sensors' parameter settings, making it easy to know the sensors' state. The module likewise exposes the reinforced neural network to other systems: through the interface they can set certain parameters of the network and obtain its state. For example, the confidence threshold of the reinforced neural network can be set, and different confidence values quantitatively tune the accuracy of its recognition. The module also provides an interface for acquiring and controlling cloud data: other systems can call it to obtain the sound signal data stored on the cloud server, modify the sound signal data and label data stored there, upload sound data collected by the sensors, and upload manually marked label data.
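The kind of settings object such an interface might expose can be sketched as below. The field names and validation logic are hypothetical; only the 0 Hz minimum, the 8192 Hz maximum, and the settable confidence value come from the text:

```python
from dataclasses import dataclass

@dataclass
class SensorSettings:
    """Illustrative sensor/network settings reachable through the interface."""
    min_freq_hz: int = 0        # minimum collected frequency (text: 0 Hz)
    max_freq_hz: int = 8192     # maximum collected frequency (text: 8192 Hz)
    confidence: float = 0.9     # recognition-confidence threshold for the network

    def validate(self) -> bool:
        if not (0 <= self.min_freq_hz < self.max_freq_hz <= 8192):
            raise ValueError("frequency bounds out of range")
        if not (0.0 < self.confidence <= 1.0):
            raise ValueError("confidence must be in (0, 1]")
        return True
```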
Further, a filter is installed inside the sound collection sensor.
The invention has the beneficial effects that:
1. The sound data analysis system for respiratory diseases of large-scale-fed broiler chickens uses sensors, and the sound signals they collect are efficiently filtered through filters, realizing accurate collection of the sound signals.
2. Compared with traditional manual screening, the neural network has advantages such as high recognition accuracy and low resource consumption, and can better help an administrator manage the broilers. A human expert discriminates and marks the sound features contained in the audio data according to the audio and image data, paying particular attention to distinguishing coughing, snorting, sneezing, and vocalizing sounds, thereby forming the labels. The CNN structure introduced here captures features from the labeled audio data, continuously increasing the depth and strength of the parameterized response features; this process constitutes artificial neural network training and grows the ability to extract and judge features. After training on a large amount of data, the CNN structure initially possesses the ability to identify whether audio data contains the sound features of interest and to mark them. Finally, reinforcement of the network makes recognition even more efficient.
3. The invention provides real-time monitoring 24 hours a day. When an abnormal sound occurs, a sensor collects it in time and transmits it to the neural network; after recognition by the reinforced neural network, the result is immediately reported to the administrator, so that diseased broilers can be handled and treated within a short time.
4. The invention can also store the disease data of the broilers and perform visual analysis on it, so that administrators can manage the broilers better and prevent in advance certain diseases that appear easily. The reinforced neural network can store input data in the cloud or on a local disk, and an administrator can retrieve the data at any time for analysis. Meanwhile, the scheme can visualize the data, letting an administrator intuitively understand the disease situation of the broilers, which is of great help in preventing broiler diseases.
5. The system for analyzing respiratory disease sound data of large-scale-fed broiler chickens has high recognition accuracy and efficiency. The average age of broilers is small (about 45 days), so the sound-related physique differences are smaller than among adult chickens; using the developed algorithm module, the effectiveness of early warning can reach the level of 1-2 per thousand in the broiler feeding environment. Through the reinforced neural network, recognition accuracy after training can reach expert level, and the recognition time is short and highly efficient, meeting real-time requirements. Because no manual intervention is needed, the scheme also saves time and labor.
Drawings
Fig. 1 is a schematic view of the flow structure of the present solution.
Detailed Description
In order to make the technical solution of the present invention more clear and definite for those skilled in the art, the technical solution of the present invention is described in detail below with reference to fig. 1, but the embodiment of the present invention is not limited thereto.
A system for analyzing respiratory disease sound data of large-scale-fed broiler chickens comprises the following steps:
Step one: collecting sound data. A plurality of sound collection sensors are arranged in the chicken house; the sensors transmit the collected sound data to the cloud through the network, where it is converted into OPUS-format audio data for storage.
Invalid data collected by the sensors against the background of large-scale chicken-house feeding is filtered out. When recording audio data with a sensor, a mobile phone, or a voice recorder, the effective recognition range should be tested first. The sensors collect data in real time, and data of a fixed length is analyzed at regular intervals; here the interval is initially set to 30 minutes and the fixed length to 5 minutes.
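The "fixed length analyzed at regular intervals" policy (initially every 30 minutes, 5 minutes of audio) can be written as a small helper; the function name and minute-offset representation are illustrative:

```python
def analysis_windows(total_minutes, interval_min=30, length_min=5):
    """Return the (start, end) minute offsets of each analysis window:
    one window of length_min minutes per interval_min interval."""
    return [(t, t + length_min)
            for t in range(0, total_minutes - length_min + 1, interval_min)]
```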
A filter also needs to be installed inside each sensor to filter out the invalid data it collects, so that more valid data is available for the subsequent data conversion and the trained reinforced neural network attains higher recognition accuracy and better robustness.
Step two: analyzing the sound data. The sound data is passed in the cloud to the reinforced neural network on the cloud server for analysis, and the analysis result is uploaded to the cloud for backup. The cloud server serves as the central processing unit; a computer or other embedded device can also serve as the central processing unit to run the network reinforcement algorithm.
Step three: performing algorithm optimization. The abnormal sound data obtained by the analysis of step two is transmitted from the cloud to a local server, where experts mark it to generate a local label file; the local label file is append-updated into the cloud label file on the cloud server, optimizing the sound data analysis result of step two.
Data communication needs to be established between the central processing unit (the cloud) and the local terminal (a notebook PC, mobile phone, or other terminal equipment). The communication may run over a local area network (with the reinforced neural network deployed on a local server) or over the Internet (with it deployed in the cloud). Once communication is established, the sensor data received at the local terminal can be transmitted to the reinforced neural network; the network identifies the sound data and returns the result to the local terminal equipment.
Step four: constructing a positioning model and measuring the direction of the sound source area. The relative positions of the sound collection sensors and the broiler flock are set, fixing a sound reception model so that the specific position of a broiler emitting abnormal sound can be located.
Step five: collecting and analyzing data over a long period, performing the algorithm optimization of step three, and transmitting the analysis result of step two to the local server, realizing early warning and alerting of disease onset. The local server is equipped with a visualization device; after the reinforced neural network identifies the data transmitted by the sensors, the analysis result is displayed on this device, which helps the manager monitor and manage the broilers in real time and analyze diseases.
Further, the neural network used for analyzing the sound data includes:
an audio decomposition algorithm: a Fourier transform is performed on the audio signal in the time domain, obtaining, within a minimum time window (1/420 second), the spectral representation of the audio signal in terms of frequency and energy-abundance indices (amplitude, a composite index of the Fourier expansion coefficients);
For audio data, the original Fourier transform equation is as follows:

G_f(ω, u) = ∫ f(t) g(t-u) e^(-jωt) dt

For the short-time Fourier transform:

X(ω, u) = ∫ z(t) g(t-u) e^(-jωt) dt

where z(t) is the source signal and g(t-u) is the window function.
Each segment of analyzed audio data produces three graphs, i.e., spectral representations: the spectral representation of the entire audio segment is presented on the FULL SPECTRUM graph; moving forward along the time axis at a specific frequency, spectrum components whose audio abundance is uniform over time are presented in full on the BACKGROUND graph; and when, at a specific time point, a singular point appears in the audio abundance function relative to the otherwise uniformly distributed values before and after that point, its spectral representation appears on the FOREGROUND graph. The technique thus generates a visual image of sound frequency and detects abnormal points for review by experts or chicken-farm staff;
an analysis algorithm: after the decomposition algorithm converts and presents the audio data, experts discriminate and mark the sound features contained in it (coughing, snorting, sneezing, and vocalizing sounds) according to the audio and the three images; marking produces local label files, which are synchronized to the cloud to form cloud label files used to train the reinforced neural network; the reinforced neural network thereby initially forms parameterized features with recognition capability, which are then optimized into stable recognition features by the algorithm of step three.
The reinforced neural network introduces a CNN structure, which has strong feature-extraction capability and mainly adopts convolutional layers, pooling layers, activation-function layers, fully connected layers, and an output layer. The convolutional layer computes the following function:

y_(i,j)^l = w^l · x_(i,j)^l + b^l

where w^l is the weight of the l-th layer, b^l is the bias of the l-th layer, and x_(i,j)^l is the input tile at the (i, j) position of the l-th layer. The weights of the feature maps are shared (weight sharing reduces the number of parameters). One of the biggest advantages of the CNN structure is the weight-sharing mechanism: with fewer parameters it significantly reduces computational complexity and also makes training and optimization simpler. Pooling layers are an important module in the CNN architecture; their most important goal is to reduce the size of the feature map by fusing sub-regions using some function, such as averaging or max/min:

y_(i,j,k) = ( (1/|R_ij|) Σ_((m,n)∈R_ij) a_(m,n,k)^p )^(1/p)
where y_(i,j,k) is the output of the pooling operator at position (i, j) of the k-th feature map, and a_(m,n,k) is the feature value at position (m, n) in the pooling region R_ij of the k-th feature map. It is particularly noted that this formula corresponds to average pooling when p = 1 and becomes max pooling as p approaches infinity. Commonly used activation functions are the ReLU and LeakyReLU functions; ReLU is defined as follows:

a_(i,j,k) = max(z_(i,j,k), 0)
To avoid the gradient-vanishing problem, the LeakyReLU function can be chosen instead:

a_(i,j,k) = max(z_(i,j,k), 0) + λ min(z_(i,j,k), 0)

where λ lies in the range (0, 1). LeakyReLU does not force the negative part to zero; instead it allows a small non-zero gradient. The fully connected layer computes the following function:

y^l = f(W^l x^(l-1) + b^l)
After the classification output is obtained, a softmax loss function can be computed between the predicted value and the true value of the output:

L = -(1/N) Σ_(i=1..N) log( e^(z_(y_i)) / Σ_j e^(z_j) )

and in this way the entire CNN network is optimized.
Further, the system provides corresponding interfaces so that other systems can access its data. A sensor interface is provided: through it, other systems can control the sensors, specifically the number of sensors, and modify sensor settings such as the maximum frequency (8192 Hz) and minimum frequency (0 Hz) of the collected sound signals; they can also query the sensors' parameter settings, making it easy to know the sensors' state. The module likewise exposes the reinforced neural network to other systems: through the interface they can set certain parameters of the network and obtain its state. For example, the confidence threshold of the reinforced neural network can be set, and different confidence values quantitatively tune the accuracy of its recognition. The module also provides an interface for acquiring and controlling cloud data: other systems can call it to obtain the sound signal data stored on the cloud server, modify the sound signal data and label data stored there, upload sound data collected by the sensors, and upload manually marked label data.
While the invention has been described with respect to the preferred embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (3)

1. A sound data analysis system for respiratory diseases of large-scale-fed broiler chickens, characterized in that it comprises:
Step one: collecting sound data: a plurality of sound collection sensors are arranged in the chicken house; the sensors transmit the collected sound data to the cloud through the network, where it is converted into OPUS-format audio data for storage;
Step two: analyzing the sound data: the sound data is passed in the cloud to the reinforced neural network on the cloud server for analysis, and the analysis result is uploaded to the cloud for backup;
Step three: performing algorithm optimization: the abnormal sound data obtained by the analysis of step two is transmitted from the cloud to a local server, where experts mark it to generate a local label file; the local label file is append-updated into the cloud label file on the cloud server, optimizing the sound data analysis result of step two;
Step four: constructing a positioning model and measuring the direction of the sound source area: the relative positions of the sound collection sensors and the broiler flock are set, fixing a sound reception model so that the specific position of a broiler emitting abnormal sound can be located;
Step five: collecting and analyzing data over a long period, performing the algorithm optimization of step three, and transmitting the analysis result of step two to the local server, realizing early warning and alerting of disease onset.
2. The system for analyzing the sound data of the respiratory diseases of the large-scale raised broiler chicken according to claim 1, wherein the reinforced neural network used for the sound data analysis comprises:
and (3) audio decomposition algorithm: carrying out Fourier transform on the audio signal in the time domain, and obtaining the frequency spectrum representation of the audio signal in frequency and energy abundance indexes in a minimum time window;
generating three graphs of the analyzed audio data of each segment, wherein the spectral representation of the entire audio data is presented on the "FULL speech" graph; at a specific frequency, the audio frequency moves forward along a time axis, and the frequency spectrum with uniform audio abundance is completely presented on a BACKGROUND graph; moving forward along a time axis at a specific frequency, and when a singular point appears in an audio abundance function relative to equal uniformly distributed numerical values before and after the time point at a specific time point, the frequency spectrum expression of the singular point appears on a FOREGROUND graph; therefore, the technology can generate a visual image of sound frequency and detect abnormal points for the examination of experts or chicken farm staff;
an analysis algorithm: after the decomposition algorithm has transformed and presented the audio data, an expert distinguishes and marks the sound features contained in it according to the audio data and the three graphs, the sound features comprising coughing, snorting, sneezing and vocalizing sounds; the marks form a local label file, which is synchronized to the cloud to form a cloud label file for training the reinforced neural network; the reinforced neural network preliminarily forms parameterized features with identification capability, and the algorithm of step three then optimizes them into parameterized features with stable identification capability.
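The claim does not disclose the internals of the reinforced neural network, only that expert-labelled sound features train a model whose parameterized features discriminate between sound classes. As a stand-in for that training loop, the sketch below fits a nearest-centroid classifier to hypothetical labelled feature vectors; the two-feature representation, the label names, and all numeric values are illustrative assumptions, not data from the patent:

```python
import math

# Hypothetical labelled feature vectors produced from expert-marked audio
# segments, e.g. [spectral-peak frequency in kHz, energy abundance].
LABELLED = {
    "cough":    [[2.1, 8.5], [2.3, 9.1], [1.9, 8.8]],
    "vocalize": [[4.0, 3.2], [4.2, 2.9], [3.8, 3.5]],
}

def centroid(vectors):
    """Mean vector of a list of equal-length feature vectors."""
    dims = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dims)]

# The "parameterized features with identification capability": here, one
# centroid per labelled sound class, learned from the label file.
CENTROIDS = {label: centroid(vs) for label, vs in LABELLED.items()}

def classify(features):
    """Assign a new sound segment to the nearest class centroid."""
    return min(CENTROIDS, key=lambda label: math.dist(features, CENTROIDS[label]))

prediction = classify([2.0, 9.0])
```

Step three's loop then corresponds to appending newly expert-marked segments to `LABELLED` and recomputing the learned parameters, which is what stabilizes the identification capability over time.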
3. The sound data analysis system for respiratory diseases of large-scale raised broiler chickens according to claim 1, wherein a filter is installed in the sound collection sensor.
CN202011309454.9A 2020-11-20 2020-11-20 Large-scale feeding type broiler respiratory disease sound data analysis system Pending CN112562739A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011309454.9A CN112562739A (en) 2020-11-20 2020-11-20 Large-scale feeding type broiler respiratory disease sound data analysis system

Publications (1)

Publication Number Publication Date
CN112562739A true CN112562739A (en) 2021-03-26

Family

ID=75044239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011309454.9A Pending CN112562739A (en) 2020-11-20 2020-11-20 Large-scale feeding type broiler respiratory disease sound data analysis system

Country Status (1)

Country Link
CN (1) CN112562739A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113456055A (en) * 2021-07-05 2021-10-01 自牧机器人(青岛)有限公司 Poultry respiratory tract real-time monitoring system based on artificial intelligence

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109009129A (en) * 2018-08-20 2018-12-18 南京农业大学 Sow respiratory disease method for early warning based on acoustic analysis
CN109493874A (en) * 2018-11-23 2019-03-19 东北农业大学 A kind of live pig cough sound recognition methods based on convolutional neural networks
CN110189756A (en) * 2019-06-28 2019-08-30 北京派克盛宏电子科技有限公司 It is a kind of for monitoring the method and system of live pig abnormal sound
US20200005766A1 (en) * 2019-08-15 2020-01-02 Lg Electronics Inc. Deeplearning method for voice recognition model and voice recognition device based on artificial neural network
CN110782905A (en) * 2019-11-05 2020-02-11 秒针信息技术有限公司 Positioning method, device and system


Similar Documents

Publication Publication Date Title
CN109243470B (en) Broiler cough monitoring method based on audio technology
CN107247971B (en) Intelligent analysis method and system for ultrasonic thyroid nodule risk index
CN110580916B (en) Weight acoustic measurement model creation method and weight measurement method and device
CN106847293A (en) Facility cultivation sheep stress behavior acoustical signal monitoring method
CN112164408A (en) Pig coughing sound monitoring and early warning system based on deep learning
CN108281177B (en) Internet of things intensive care system
CN114596448A (en) Meat duck health management method and management system thereof
CN112331231B (en) Broiler feed intake detection system based on audio technology
CN112544503B (en) Monitoring and early warning system and method for intelligent beehive
NO20210919A1 (en) Systems and methods for predicting growth of a population of organisms
CN109063589A (en) Instrument and equipment on-line monitoring method neural network based and system
CN112562739A (en) Large-scale feeding type broiler respiratory disease sound data analysis system
CN108345857A (en) A kind of region crowd density prediction technique and device based on deep learning
CN117994650A (en) Intelligent agricultural management system based on big data
CN113989538A (en) Depth image-based chicken flock uniformity estimation method, device, system and medium
CN109828623B (en) Production management method and device for greenhouse crop context awareness
CN117253192A (en) Intelligent system and method for silkworm breeding
CH719673A2 (en) AI-BASED REAL-TIME ACOUSTIC WILDLIFE MONITORING SYSTEM
CN109242219B (en) Prediction method and prediction device for layer feeding behavior
CN116723614B (en) AI-based adaptive fish propagation environment illumination control method
CN118173102B (en) Bird voiceprint recognition method in complex scene
CN118248337B (en) AI-based health condition monitoring system for old people
CN118097391A (en) Multi-mode fusion fish swarm ingestion intensity classification method, system, equipment and medium
CN117237820B (en) Method and device for determining pest hazard degree, electronic equipment and storage medium
CN116421152B (en) Sleep stage result determining method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210326