CN116229518A - Bird species observation method and system based on machine learning - Google Patents
- Publication number
- CN116229518A CN116229518A CN202310258842.6A CN202310258842A CN116229518A CN 116229518 A CN116229518 A CN 116229518A CN 202310258842 A CN202310258842 A CN 202310258842A CN 116229518 A CN116229518 A CN 116229518A
- Authority
- CN
- China
- Prior art keywords
- picture
- target
- target picture
- bird
- identification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to the technical field of bird species observation, and in particular to a machine learning-based bird species observation method and system. The method selects different recognition schemes according to target picture quality: birds in clear pictures are identified with a fine-grained picture recognition algorithm, and birds in fuzzy pictures are identified with a multi-modal information recognition algorithm. An established recognition model assesses the quality of each target picture and classifies it as a clear picture or a fuzzy picture, and the recognition scheme is selected accordingly, improving species recognition accuracy in monitoring scenes. Ecological expert knowledge and bird action information are incorporated into the bird recognition model, so the model can identify bird species even when the picture is fuzzy, adapting the method to pictures collected under different monitoring conditions and improving recognition efficiency.
Description
Technical Field
The invention relates to the technical field of bird species observation, in particular to a machine learning-based bird species observation method and system.
Background
Bird species observation equipment is mainly used for fine-grained species image recognition in field environments, which presents the following difficulties:
Difficulty 1: fine-grained images exhibit small inter-class differences and large intra-class differences, so accurately judging the fine-grained class is currently a major challenge.
Difficulty 2: the long-tail distribution found in nature is mapped into the network, so the training data also follows a long-tail distribution; the model over-fits, and recognition accuracy suffers.
Difficulty 3: the processing equipment at the edge must work continuously, stably, and with low power consumption.
The main paradigms currently addressing the above problems include:
(1) Fine-grained recognition with localization-classification sub-networks
Two sub-networks are used: a positioning sub-network and a classification sub-network.
The positioning sub-network locates the key parts, which yields more discriminative intermediate-level (part-level) representations and further strengthens the learning ability of the classification sub-network. These methods concatenate multiple part-level features into one overall image representation and feed it into the classification sub-network.
The classification sub-network follows and performs the final recognition. The framework of these two cooperating sub-networks forms the first paradigm: fine-grained recognition with a localization-classification sub-network. However, the need to manually annotate the parts of interest limits scalability. The trend is to first find the corresponding parts and then compare their appearances: the aim is to capture semantic parts (e.g., head and torso) shared across fine-grained categories while finding the subtle differences between the representations of those parts.
(2) End-to-end feature encoding
This paradigm learns more discriminative feature representations directly by developing powerful deep models for fine-grained recognition. Bilinear CNNs represent an image as the pooled outer product of features from two deep CNNs, thereby encoding higher-order statistics of the convolutional activations and enhancing mid-level learning ability. Owing to their high model capacity, bilinear CNNs achieve significant fine-grained recognition performance; however, the extremely high dimensionality of the bilinear features still makes them impractical, especially for large-scale applications.
In order to address the above problems, there is a need for a machine learning based bird species observation method and system.
Disclosure of Invention
The invention aims to provide a bird species observation method and system based on machine learning, which are used for solving the problems in the background technology.
In order to achieve the above object, one of the objects of the present invention is to provide a machine learning-based bird species observation method, comprising the steps of:
s1, a camera of a shooting place collects a target picture and a target video, and the target picture is transmitted to an edge processor;
s2, preprocessing the target picture by an edge processor, and detecting an identified target through a target detection model;
s3, returning the original target video of the detection target to the back-end server;
s4, the back-end server establishes an identification model, the identification model identifies the quality of the target picture, and the target picture is divided into a clear picture or a fuzzy picture according to the quality of the target picture;
s5, selecting different identification schemes according to the quality of the target picture:
carrying out species identification on birds in the clear pictures by adopting a fine-granularity picture identification algorithm;
species identification is carried out on birds in the fuzzy picture by adopting a multi-mode information identification algorithm;
s6, outputting bird species information identified by the currently acquired target picture.
As a further improvement of the present technical solution, the method for acquiring the target picture and the target video in S1 includes the following steps:
s1.1, planning a shooting area according to a bird perch rule, and installing a corresponding camera;
s1.2, establishing a shooting interval, and returning the target picture and the target video shot by the shooting point in real time according to the shooting interval.
As a further improvement of the present technical solution, the target picture preprocessing method in S2 includes the following steps:
s2.1, decoding the target picture to generate a decoded picture;
s2.2, performing frame extraction processing on the decoded picture, recording time sequence information of different frames, and generating a frame extraction picture;
s2.3, carrying out target recognition on each frame-extracted picture, and eliminating pictures without bird species.
As a further improvement of the technical scheme, the edge processor in S2 is a low-power processor, which serves as the carrier of the target detection model during detection and is used to rapidly acquire and process real-time image information.
As a further improvement of the present technical solution, the method for detecting the identified target by the target detection model in S2 includes the following steps:
s2.4, combining all environmental factors of the monitoring area, and carrying out feature recognition on all factors of the target picture;
s2.5, establishing a factor characteristic database, and comparing the factors of each target picture according to the factor characteristic database.
As a further improvement of the present technical solution, the method for identifying the quality of the target picture identified by the identification model in S4 includes the following steps:
s4.1, calculating the number of picture pixels of each target picture to obtain the spatial complexity of the target picture;
s4.2, determining the noise quantity in each target picture, and obtaining the noise complexity of the target picture;
s4.3, combining the spatial complexity and the noise complexity of the target picture to obtain the definition of the target picture;
s4.4, planning a target picture definition threshold, defining a target picture which is lower than the target picture definition threshold as a fuzzy picture, and defining a target picture which is not lower than the target picture definition threshold as a clear picture.
As a further improvement of the technical scheme, the sharpness judgment of the target picture in S4.3 adopts a sharpness calculation algorithm. The sharpness value ρ is computed from the spatial complexity and the noise complexity, ρ = ρ(K, N), and the decision function is:
F(ρ) = 0, if ρ < ρ₀ (the target picture is marked as a fuzzy picture)
F(ρ) = 1, if ρ ≥ ρ₀ (the target picture is marked as a clear picture)
where ρ (Clarity) is the sharpness value of the target picture, K (Space) is its spatial complexity, N (Noise) is its noise complexity, F(ρ) is the sharpness judgment function, and ρ₀ is the target picture sharpness threshold.
As a further improvement of the technical scheme, the S5 fine granularity picture identification algorithm includes the following steps:
s5.1, integrating various bird species in a monitoring area, and determining the characteristics of each bird;
s5.2, combining the characteristics of each bird to generate a bird characteristic database;
s5.3, comparing birds identified in each clear target picture with the bird feature database, and selecting a corresponding target bird in the bird feature database as the bird species identified in the target picture.
As a further improvement of the present technical solution, the multi-modal information recognition algorithm in S5 includes the following steps:
s5.4, determining the shooting time sequence information of the blurred picture and the recorded geographical position information;
s5.5, establishing expert knowledge of an on-state student and a bird action information database;
s5.6, pre-separating out bird types of birds identified in the blurred picture by combining shooting time sequence information of the blurred picture and recorded geographical position information;
s5.7, comparing expert knowledge of an on-state student with a bird motion information database, and precisely comparing birds identified in the fuzzy picture to obtain the type of the bird identified in the fuzzy picture.
The invention also provides an observation system for the machine learning-based bird species observation method, comprising a plurality of cameras, an edge processor, a back-end server, a picture quality classification module, an identification scheme distribution module, and an identification result output module. The edge processor receives the target picture and target video collected by the cameras, preprocesses the target picture, and detects the identified target through a target detection model. The back-end server, connected to the output of the edge processor, receives the returned original target video of the detected target and establishes a recognition model that identifies the quality of the target picture. The picture quality classification module, connected to the output of the back-end server, contains a classification unit that divides target pictures into clear pictures or fuzzy pictures. Its output is connected to the input of the identification scheme distribution module, which applies the fine-grained picture recognition algorithm to clear pictures and the multi-modal information recognition algorithm to fuzzy pictures. The identification result output module outputs the bird species information identified in the currently acquired target picture.
Compared with the prior art, the invention has the beneficial effects that:
1. According to the bird species observation method and system based on machine learning, the quality of the target picture is assessed by the established recognition model, which classifies each target picture as a clear picture or a fuzzy picture; a recognition scheme is then selected according to that quality, improving species recognition accuracy in monitoring scenes. Ecological expert knowledge and bird action information are incorporated into the bird recognition model, so the model can identify bird species even when the picture is fuzzy, adapting to pictures collected under different monitoring conditions and improving recognition efficiency.
Drawings
FIG. 1 is an overall flow chart of embodiment 1 of the present invention;
fig. 2 is a flowchart of a method for acquiring a target picture and a target video according to embodiment 1 of the present invention;
fig. 3 is a flowchart of a target picture preprocessing method according to embodiment 1 of the present invention;
FIG. 4 is a flowchart of a detection method for detecting an identified object by the object detection model in embodiment 1 of the present invention;
FIG. 5 is a flowchart of a method for identifying quality of a target picture by using an identification model according to embodiment 1 of the present invention;
FIG. 6 is a flowchart of a fine granularity picture recognition algorithm according to embodiment 1 of the present invention;
FIG. 7 is a flowchart of a multi-modal information recognition algorithm according to embodiment 1 of the present invention;
fig. 8 is a schematic diagram of the overall system structure of embodiment 1 of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Referring to fig. 1-8, an objective of the present embodiment is to provide a machine learning-based bird species observation method, which includes the following steps:
s1, a camera of a shooting place collects a target picture and a target video, and the target picture is transmitted to an edge processor;
s2, preprocessing the target picture by an edge processor, and detecting an identified target through a target detection model;
s3, returning the original target video of the detection target to the back-end server;
s4, the back-end server establishes an identification model, the identification model identifies the quality of the target picture, and the target picture is divided into a clear picture or a fuzzy picture according to the quality of the target picture;
s5, selecting different identification schemes according to the quality of the target picture:
carrying out species identification on birds in the clear pictures by adopting a fine-granularity picture identification algorithm;
species identification is carried out on birds in the fuzzy picture by adopting a multi-mode information identification algorithm;
s6, outputting bird species information identified by the currently acquired target picture.
In particular use, a camera for observing bird species is first installed at a shooting place. The camera collects a target picture and a target video and transmits the target picture to an edge processor, which decodes it, extracts frames, and performs recognition; the identified target is detected by the target detection model, and the original target video of the detected target is returned to the back-end server. Because the cameras are fixed on certain towers, the pictures they take of a bird fall roughly into two situations: some birds are very close to the camera, so a clear picture can be taken, while others are very far away, so the picture is fuzzy. The back-end server therefore establishes a recognition model that assesses the quality of each target picture and classifies it as a clear picture or a fuzzy picture, and a recognition scheme is selected accordingly:
birds in clear pictures are identified with the fine-grained picture recognition algorithm: feature information for each bird species is stored in a database and compared with the bird features recognized in the clear target picture, determining which species in the database the currently recognized bird belongs to and yielding the species information for the clear picture;
birds in fuzzy pictures are identified with the multi-modal information recognition algorithm: continuous-frame information is combined with geographic position and shooting information, an ecological expert knowledge and bird action information database is established, and this multi-modal information is used to identify the species of the birds in the fuzzy picture, yielding the species information for the fuzzy picture;
and then outputting bird species information identified by the currently acquired target picture.
In this way, the quality of the target picture is assessed by the established recognition model, the picture is classified as clear or fuzzy, and a recognition scheme is selected accordingly, so that birds of different clarity are handled by different recognition models and species recognition accuracy in monitoring scenes is improved. For fuzzy pictures, ecological expert knowledge and bird action information are incorporated into the bird recognition model, so species can still be identified when the picture is fuzzy, adapting to pictures collected under different monitoring conditions and improving recognition efficiency.
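As a rough illustration of this quality-based dispatch, the branch between the two recognition schemes can be sketched as follows. `compute_clarity`, `fine_grained_identify`, and `multimodal_identify` are hypothetical stand-ins for the patent's models, not implementations of them, and the 0.6 threshold is an arbitrary assumption.

```python
def compute_clarity(picture):
    # Stand-in for the recognition model of S4: the picture dict is assumed
    # to carry a precomputed clarity score in [0, 1].
    return picture["clarity"]

def fine_grained_identify(picture):
    # Stand-in for the fine-grained picture recognition algorithm (clear pictures).
    return ("fine_grained", picture["species_hint"])

def multimodal_identify(picture):
    # Stand-in for the multi-modal information recognition algorithm (fuzzy pictures).
    return ("multimodal", picture["species_hint"])

def identify_bird(picture, clarity_threshold=0.6):
    """S4-S6: classify picture quality, then dispatch the matching scheme."""
    rho = compute_clarity(picture)
    if rho < clarity_threshold:            # fuzzy picture
        return multimodal_identify(picture)
    return fine_grained_identify(picture)  # clear picture
```

The same dispatch structure would apply regardless of how the clarity score and the two recognizers are actually implemented.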
Further, the method for acquiring the target picture and the target video in the S1 includes the following steps:
s1.1, planning a shooting area according to a bird perch rule, and installing a corresponding camera;
s1.2, establishing a shooting interval, and returning the target picture and the target video shot by the shooting point in real time according to the shooting interval.
In particular use, because the habitat of birds in a given season is relatively fixed while their movement range is quite wide, planning a shooting area first requires determining the birds' habitats, such as nesting places and common predation areas. Shooting areas are then planned accordingly, the corresponding cameras are installed, shooting intervals are established, and the target pictures and target videos taken at each shooting point are returned in real time according to those intervals.
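The fixed shooting interval of S1.2 amounts to a simple capture schedule per camera; the sketch below uses a hypothetical 30-second interval, since the patent leaves the interval unspecified.

```python
def capture_schedule(start_s, interval_s, count):
    """Return the capture timestamps (in seconds) for one shooting point (S1.2).

    Each camera shoots at start_s, start_s + interval_s, ..., returning the
    target picture and video in real time after each shot.
    """
    return [start_s + i * interval_s for i in range(count)]

# e.g. a camera starting at t=0 with an assumed 30 s interval:
schedule = capture_schedule(0, 30, 4)
```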
Still further, the target picture preprocessing method in S2 includes the following steps:
s2.1, decoding the target picture to generate a decoded picture;
s2.2, performing frame extraction processing on the decoded picture, recording time sequence information of different frames, and generating a frame extraction picture;
s2.3, carrying out target recognition on each frame-extracted picture, and eliminating pictures without bird species.
In particular use, the edge processor first decodes the target picture to generate a decoded picture, then performs frame extraction on it and records the timing information of the different frames to generate frame-extracted pictures, which later serve as a reference for fuzzy-picture target recognition. Because bird movement in the shot footage is random and flight tracks change during shooting, a bird may leave the shooting area during a continuous sequence, so some target pictures contain no birds at all. Target recognition is therefore run on each frame-extracted picture and pictures without bird species are discarded, which improves the efficiency of later bird species recognition.
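The decode → frame-extraction → empty-frame rejection pipeline (S2.1-S2.3) might be sketched as below; the `detector` callable and the frame step of 5 are illustrative assumptions, as the patent specifies neither.

```python
def preprocess(decoded_frames, detector, frame_step=5):
    """S2.2-S2.3: keep every frame_step-th frame together with its timing
    index, then drop frames in which the detector finds no bird."""
    kept = []
    for idx, frame in enumerate(decoded_frames):
        if idx % frame_step != 0:
            continue                    # frame extraction (S2.2)
        if detector(frame):             # target recognition (S2.3)
            kept.append((idx, frame))   # timing info recorded with the frame
    return kept
```

In a real deployment `decoded_frames` would come from the edge processor's video decoder and `detector` would be the target detection model.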
Specifically, the edge processor in S2 is a low-power processor that serves as the carrier of the target detection model and rapidly acquires and processes real-time image information. Video data can thus be preprocessed at the camera end and then transmitted over a low-bandwidth network, which effectively reduces transmission bandwidth, saves electric power, and improves data collection efficiency. The low-power edge video processor first performs basic video processing tasks such as image scaling, format conversion, and encoding/decoding, and further performs video analysis and intelligent video recognition.
In addition, the detection method for detecting the identified target by the target detection model in S2 includes the following steps:
s2.4, combining all environmental factors of the monitoring area, and carrying out feature recognition on all factors of the target picture;
s2.5, establishing a factor characteristic database, and comparing the factors of each target picture according to the factor characteristic database.
In particular use, a shot target picture contains environmental factors such as birds, trees, rocks, and water sources. To distinguish the different picture types, feature recognition is first carried out on each factor of the target picture; for example, a tree's features include its particular color and height. A factor feature database is then established, the factors of each target picture are compared against it, and the environmental factors of each target are identified accordingly, providing a multi-modal reference for the later bird species recognition of fuzzy pictures.
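A minimal sketch of the factor feature database comparison (S2.4-S2.5): each extracted factor is labeled by its nearest database entry. The two-component feature vectors (say, color and height) and the squared-distance metric are assumptions, since the patent does not specify the features.

```python
def classify_factors(picture_factors, factor_db):
    """Label each extracted factor of a target picture by its nearest
    entry in the factor feature database (S2.5)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(factor_db, key=lambda name: sq_dist(vec, factor_db[name]))
            for vec in picture_factors]
```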
In addition, the recognition method for recognizing the quality of the target picture by the recognition model in S4 includes the following steps:
s4.1, calculating the number of picture pixels of each target picture to obtain the spatial complexity of the target picture;
s4.2, determining the noise quantity in each target picture, and obtaining the noise complexity of the target picture;
s4.3, combining the spatial complexity and the noise complexity of the target picture to obtain the definition of the target picture;
s4.4, planning a target picture definition threshold, defining a target picture which is lower than the target picture definition threshold as a fuzzy picture, and defining a target picture which is not lower than the target picture definition threshold as a clear picture.
In particular use, the number of pixels in each target picture is first calculated to obtain its spatial complexity. Noise analysis is then performed to determine the amount of noise in each picture and obtain its noise complexity. Combining the spatial and noise complexity gives the sharpness of the target picture. A target picture sharpness threshold is planned: a picture below the threshold is defined as a fuzzy picture, and a picture not below it as a clear picture. Each target picture is thus quality-classified so that the corresponding recognition scheme can be determined later from the classification result.
Further, the sharpness judgment of the target picture in S4.3 adopts a sharpness calculation algorithm. The sharpness value ρ is computed from the spatial complexity and the noise complexity, ρ = ρ(K, N), and the decision function is:
F(ρ) = 0, if ρ < ρ₀ (the target picture is marked as a fuzzy picture)
F(ρ) = 1, if ρ ≥ ρ₀ (the target picture is marked as a clear picture)
where ρ (Clarity) is the sharpness value of the target picture, K (Space) is its spatial complexity, N (Noise) is its noise complexity, F(ρ) is the sharpness judgment function, and ρ₀ is the target picture sharpness threshold.
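Since the patent reproduces ρ(K, N) only as an image, the sketch below assumes a simple ratio for the clarity score; only the threshold function F(ρ) follows the text exactly.

```python
def clarity(space_complexity, noise_complexity):
    """Illustrative rho(K, N): higher spatial detail and lower noise give a
    higher sharpness value. The patent's actual formula is not reproduced
    in the text, so this ratio is an assumption."""
    return space_complexity / (1.0 + noise_complexity)

def sharpness_flag(rho, rho_threshold):
    """F(rho) from S4.3/S4.4: 0 marks a fuzzy picture, 1 a clear picture."""
    return 0 if rho < rho_threshold else 1
```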
Still further, the S5 fine granularity picture recognition algorithm includes the steps of:
s5.1, integrating various bird species in a monitoring area, and determining the characteristics of each bird;
s5.2, combining the characteristics of each bird to generate a bird characteristic database;
s5.3, comparing birds identified in each clear target picture with the bird feature database, and selecting a corresponding target bird in the bird feature database as the bird species identified in the target picture.
In specific use, the bird species present in the monitoring area are first integrated and the characteristics of each bird are determined (for example, a woodpecker is characterized by a sharp, long beak). A bird feature database is then generated from the characteristics of each bird. The bird identified in each clear target picture is compared against the bird feature database, and the matching target bird in the database is selected as the bird species identified in the target picture, thereby judging the bird species of the target picture.
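The comparison in S5.1-S5.3 can be sketched as a nearest-match lookup against the feature database. The feature encoding (a plain numeric vector) and the example species and values below are illustrative assumptions, not content from the document:

```python
# Hedged sketch of fine-grained recognition (S5.1-S5.3): compare the feature
# vector extracted from a clear picture against a bird feature database and
# pick the closest entry. Species names and feature values are made up.
import math

# S5.1/S5.2: integrate species in the monitoring area into a feature database.
# Each assumed vector is (beak_length_ratio, wing_span_ratio, dominant_hue).
feature_db = {
    "woodpecker": (0.9, 0.4, 0.1),
    "egret":      (0.7, 0.8, 0.0),
    "sparrow":    (0.2, 0.3, 0.3),
}

def identify(features: tuple) -> str:
    """S5.3: return the species whose database features are nearest."""
    return min(feature_db, key=lambda s: math.dist(feature_db[s], features))

print(identify((0.85, 0.45, 0.15)))  # closest database entry: woodpecker
```

A real system would extract such features with a fine-grained CNN rather than hand-coded ratios; the nearest-neighbour comparison is only the database-matching step the text describes.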
Specifically, the multi-modal information recognition algorithm in S5 includes the following steps:
s5.4, determining the shooting time sequence information of the blurred picture and the recorded geographical position information;
s5.5, establishing an ecological expert knowledge base and a bird action information database;
s5.6, pre-screening the candidate bird species for the bird identified in the blurred picture by combining the shooting time sequence information of the blurred picture with the recorded geographical position information;
s5.7, comparing the bird identified in the blurred picture against the ecological expert knowledge base and the bird action information database to precisely determine the species of the bird identified in the blurred picture.
When the specific characteristics of the bird in a blurred picture cannot be identified, the shooting time sequence information of the blurred picture and the recorded geographical position information are first determined, that is, the shooting time of the blurred picture and the geographical environment of the bird in it. The candidate bird species are pre-screened according to the biological habits of each bird. The bird identified in the blurred picture is then compared against the ecological expert knowledge base and the bird action information database to precisely determine its species. By adopting multi-modal information identification, the bird species in a blurred picture can be identified accurately, improving identification accuracy for blurred pictures and reducing identification errors.
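The multi-modal path in S5.4-S5.7 can be sketched as a two-stage lookup: pre-screen candidates by capture time and location, then match the survivors against the action database. All species names, habitat tags, and month ranges below are illustrative assumptions:

```python
# Hedged sketch of multi-modal identification (S5.4-S5.7) for blurred
# pictures. The knowledge base content is invented for illustration.

# S5.5: assumed ecological knowledge base: habitat, months of presence in the
# monitoring area, and a coarse action signature per species.
knowledge_db = {
    "night_heron": {"habitat": "wetland", "months": range(4, 11), "action": "wading"},
    "egret":       {"habitat": "wetland", "months": range(3, 12), "action": "wading"},
    "owl":         {"habitat": "forest",  "months": range(1, 13), "action": "perching"},
}

def prescreen(month: int, habitat: str) -> list:
    """S5.6: keep species plausible for the capture time and location."""
    return [s for s, k in knowledge_db.items()
            if habitat == k["habitat"] and month in k["months"]]

def identify_blurred(month: int, habitat: str, observed_action: str):
    """S5.7: among pre-screened species, match the observed action."""
    candidates = prescreen(month, habitat)
    matches = [s for s in candidates if knowledge_db[s]["action"] == observed_action]
    return matches[0] if matches else None

print(prescreen(5, "wetland"))                     # species plausible for a May wetland shot
print(identify_blurred(5, "forest", "perching"))   # pre-screen leaves only the owl
```

The point of the two stages matches the text: timing and location shrink the candidate set before the finer (and more expensive) action comparison runs.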
The second objective of the present embodiment is to provide an observation system for the machine-learning-based bird species observation method. The system includes a plurality of cameras, an edge processor, a back-end server, a picture quality classification module, an identification scheme distribution module, and an identification result output module. The edge processor is configured to receive the target pictures and target videos collected by the cameras, preprocess the target pictures, and detect identified targets through the target detection model. The input end of the back-end server is connected with the output end of the edge processor; the back-end server is configured to receive the original target video returned for each detected target, establish a recognition model, and identify the quality of each target picture through the recognition model. The input end of the picture quality classification module is connected with the output end of the back-end server; this module includes a classification unit that classifies each target picture as a clear picture or a blurred picture. The output end of the picture quality classification module is connected with the input end of the identification scheme distribution module, which, according to the classification result, identifies the bird species in clear pictures with the fine-grained picture recognition algorithm and the bird species in blurred pictures with the multi-modal information recognition algorithm. The identification result output module is connected with the output end of the identification scheme distribution module and outputs the bird species information identified from the currently acquired target picture.
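The dispatch performed by the identification scheme distribution module can be sketched as a simple quality-based router. The recognizers are stand-in stubs and the quality threshold is an assumption; only the routing logic mirrors the system description:

```python
# Hedged sketch of the observation system's routing: clear pictures go to the
# fine-grained recognizer, blurred ones to the multi-modal recognizer.
from dataclasses import dataclass

@dataclass
class TargetPicture:
    sharpness: float        # rho produced by the recognition model
    features: str           # placeholder for extracted visual features
    context: str            # placeholder for time/location context

RHO_THRESHOLD = 0.5         # assumed sharpness threshold

def fine_grained(pic: TargetPicture) -> str:
    """Stub for the fine-grained picture recognition algorithm."""
    return f"fine-grained match on {pic.features}"

def multi_modal(pic: TargetPicture) -> str:
    """Stub for the multi-modal information recognition algorithm."""
    return f"multi-modal match using {pic.context}"

def identify(pic: TargetPicture) -> str:
    """Identification scheme distribution: route by picture quality."""
    if pic.sharpness >= RHO_THRESHOLD:      # clear picture
        return fine_grained(pic)
    return multi_modal(pic)                 # blurred picture

print(identify(TargetPicture(0.9, "plumage+beak", "dawn/wetland")))
print(identify(TargetPicture(0.2, "indistinct", "dawn/wetland")))
```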
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the above-described embodiments, and that the above-described embodiments and descriptions are only preferred embodiments of the present invention, and are not intended to limit the invention, and that various changes and modifications may be made therein without departing from the spirit and scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.
Claims (10)
1. A machine learning based bird species observation method comprising the steps of:
s1, a camera of a shooting place collects a target picture and a target video, and the target picture is transmitted to an edge processor;
s2, preprocessing the target picture by an edge processor, and detecting an identified target through a target detection model;
s3, returning the original target video of the detection target to the back-end server;
s4, the back-end server establishes an identification model, the identification model identifies the quality of the target picture, and the target picture is divided into a clear picture or a fuzzy picture according to the quality of the target picture;
s5, selecting different identification schemes according to the quality of the target picture:
carrying out species identification on birds in the clear pictures by adopting a fine-granularity picture identification algorithm;
species identification is carried out on birds in the fuzzy picture by adopting a multi-mode information identification algorithm;
s6, outputting bird species information identified by the currently acquired target picture.
2. The machine learning based bird species observation method of claim 1, wherein: the acquisition method of the target picture and the target video in the S1 comprises the following steps:
s1.1, planning a shooting area according to a bird perch rule, and installing a corresponding camera;
s1.2, establishing a shooting interval, and returning the target picture and the target video shot by the shooting point in real time according to the shooting interval.
3. The machine learning based bird species observation method of claim 1, wherein: the target picture preprocessing method in the S2 comprises the following steps:
s2.1, decoding the target picture to generate a decoded picture;
s2.2, performing frame extraction processing on the decoded picture, recording time sequence information of different frames, and generating a frame extraction picture;
s2.3, carrying out target recognition on each frame-extracted picture, and eliminating pictures without bird species.
4. A machine learning based bird species observation method as claimed in claim 3, wherein: the edge processor in S2 is a low-power processor serving as the carrier of the target detection model during detection, used to rapidly acquire and process real-time image information.
5. The machine learning based bird species observation method of claim 4, wherein: the detection method for detecting the identified target by the target detection model in the S2 comprises the following steps:
s2.4, combining all environmental factors of the monitoring area, and carrying out feature recognition on all factors of the target picture;
s2.5, establishing a factor characteristic database, and comparing the factors of each target picture according to the factor characteristic database.
6. The machine learning based bird species observation method of claim 1, wherein: the identification method for identifying the quality of the target picture by the identification model in the S4 comprises the following steps:
s4.1, calculating the number of picture pixels of each target picture to obtain the spatial complexity of the target picture;
s4.2, determining the noise quantity in each target picture, and obtaining the noise complexity of the target picture;
s4.3, combining the spatial complexity and the noise complexity of the target picture to obtain the definition of the target picture;
s4.4, setting a target picture sharpness threshold, defining a target picture below the sharpness threshold as a blurred picture, and defining a target picture not below the sharpness threshold as a clear picture.
7. The machine learning based bird species observation method of claim 6, wherein: the sharpness judgment of the target picture in S4.3 adopts a sharpness calculation algorithm whose judging rule is F(ρ) = 0 when ρ < ρ₀ and F(ρ) = 1 when ρ ≥ ρ₀, wherein ρ(Clarity) is the sharpness value of the target picture, K(Space) is the spatial complexity of the target picture, N(Noise) is the noise complexity of the target picture, F(ρ) is the sharpness judging function, ρ is the sharpness value of the current target picture, and ρ₀ is the target picture sharpness threshold; when the sharpness value ρ of the current target picture is lower than the sharpness threshold ρ₀, the judging function outputs F(ρ) = 0 and the target picture is marked as a blurred picture; when ρ is not lower than ρ₀, the judging function outputs F(ρ) = 1 and the target picture is marked as a clear picture.
8. The machine learning based bird species observation method of claim 1, wherein: the S5 fine granularity picture identification algorithm comprises the following steps:
s5.1, integrating various bird species in a monitoring area, and determining the characteristics of each bird;
s5.2, combining the characteristics of each bird to generate a bird characteristic database;
s5.3, comparing birds identified in each clear target picture with the bird feature database, and selecting a corresponding target bird in the bird feature database as the bird species identified in the target picture.
9. The machine learning based bird species observation method of claim 8, wherein: the multi-mode information identification algorithm in S5 comprises the following steps:
s5.4, determining the shooting time sequence information of the blurred picture and the recorded geographical position information;
s5.5, establishing an ecological expert knowledge base and a bird action information database;
s5.6, pre-screening the candidate bird species for the bird identified in the blurred picture by combining the shooting time sequence information of the blurred picture with the recorded geographical position information;
s5.7, comparing the bird identified in the blurred picture against the ecological expert knowledge base and the bird action information database to precisely determine the species of the bird identified in the blurred picture.
10. An observation system for use in the machine learning based bird species observation method of any one of claims 1-9, characterized in that: the system comprises a plurality of cameras, an edge processor, a back-end server, a picture quality classification module, an identification scheme distribution module and an identification result output module; the edge processor is used for receiving the target pictures and target videos collected by the cameras, preprocessing the target pictures, and detecting identified targets through the target detection model; the input end of the back-end server is connected with the output end of the edge processor, and the back-end server is used for receiving the original target video of each detected target, establishing a recognition model and identifying the quality of each target picture through the recognition model; the input end of the picture quality classification module is connected with the output end of the back-end server, the picture quality classification module comprises a classification unit, and the classification unit classifies each target picture as a clear picture or a blurred picture; the output end of the picture quality classification module is connected with the input end of the identification scheme distribution module, and the identification scheme distribution module, according to the classification result of the target picture, identifies bird species in clear pictures by adopting the fine-grained picture recognition algorithm and identifies bird species in blurred pictures by adopting the multi-modal information recognition algorithm; and the identification result output module is connected with the output end of the identification scheme distribution module and outputs the bird species information identified from the currently acquired target picture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310258842.6A CN116229518B (en) | 2023-03-17 | 2023-03-17 | Bird species observation method and system based on machine learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310258842.6A CN116229518B (en) | 2023-03-17 | 2023-03-17 | Bird species observation method and system based on machine learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116229518A true CN116229518A (en) | 2023-06-06 |
CN116229518B CN116229518B (en) | 2024-01-16 |
Family
ID=86569461
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310258842.6A Active CN116229518B (en) | 2023-03-17 | 2023-03-17 | Bird species observation method and system based on machine learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116229518B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117809099A (en) * | 2023-12-29 | 2024-04-02 | 百鸟数据科技(北京)有限责任公司 | Method and system for predicting bird category by means of key part prediction network |
CN118015551A (en) * | 2024-04-09 | 2024-05-10 | 山东世融信息科技有限公司 | Floating island type monitoring system applied to field ecological wetland |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108416774A (en) * | 2018-03-08 | 2018-08-17 | 中山大学 | A kind of fabric types recognition methods based on fine granularity neural network |
CN110969107A (en) * | 2019-11-25 | 2020-04-07 | 上海交通大学 | Bird population identification analysis method and system based on network model |
CN111985477A (en) * | 2020-08-27 | 2020-11-24 | 平安科技(深圳)有限公司 | Monocular camera-based animal body online claims checking method and device and storage medium |
CN113076861A (en) * | 2021-03-30 | 2021-07-06 | 南京大学环境规划设计研究院集团股份公司 | Bird fine-granularity identification method based on second-order features |
CN113205085A (en) * | 2021-07-05 | 2021-08-03 | 武汉华信数据***有限公司 | Image identification method and device |
WO2021184894A1 (en) * | 2020-03-20 | 2021-09-23 | 深圳市优必选科技股份有限公司 | Deblurred face recognition method and system and inspection robot |
CN113688751A (en) * | 2021-08-30 | 2021-11-23 | 上海城投水务(集团)有限公司制水分公司 | Method and device for analyzing alum blossom characteristics by using image recognition technology |
WO2022078216A1 (en) * | 2020-10-14 | 2022-04-21 | 华为云计算技术有限公司 | Target recognition method and device |
CN114387499A (en) * | 2022-01-19 | 2022-04-22 | 国家海洋环境监测中心 | Island coastal wetland waterfowl identification method, distribution query system and medium |
US11398089B1 (en) * | 2021-02-17 | 2022-07-26 | Adobe Inc. | Image processing techniques to quickly find a desired object among other objects from a captured video scene |
CN114998934A (en) * | 2022-06-27 | 2022-09-02 | 山东省人工智能研究院 | Clothes-changing pedestrian re-identification and retrieval method based on multi-mode intelligent perception and fusion |
CN115035313A (en) * | 2022-06-15 | 2022-09-09 | 云南这里信息技术有限公司 | Black-neck crane identification method, device, equipment and storage medium |
CN115761802A (en) * | 2022-11-21 | 2023-03-07 | 广东鉴面智能科技有限公司 | Dynamic bird identification method and system |
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108416774A (en) * | 2018-03-08 | 2018-08-17 | 中山大学 | A kind of fabric types recognition methods based on fine granularity neural network |
CN110969107A (en) * | 2019-11-25 | 2020-04-07 | 上海交通大学 | Bird population identification analysis method and system based on network model |
WO2021184894A1 (en) * | 2020-03-20 | 2021-09-23 | 深圳市优必选科技股份有限公司 | Deblurred face recognition method and system and inspection robot |
CN111985477A (en) * | 2020-08-27 | 2020-11-24 | 平安科技(深圳)有限公司 | Monocular camera-based animal body online claims checking method and device and storage medium |
WO2022078216A1 (en) * | 2020-10-14 | 2022-04-21 | 华为云计算技术有限公司 | Target recognition method and device |
US11398089B1 (en) * | 2021-02-17 | 2022-07-26 | Adobe Inc. | Image processing techniques to quickly find a desired object among other objects from a captured video scene |
CN113076861A (en) * | 2021-03-30 | 2021-07-06 | 南京大学环境规划设计研究院集团股份公司 | Bird fine-granularity identification method based on second-order features |
CN113205085A (en) * | 2021-07-05 | 2021-08-03 | 武汉华信数据***有限公司 | Image identification method and device |
CN113688751A (en) * | 2021-08-30 | 2021-11-23 | 上海城投水务(集团)有限公司制水分公司 | Method and device for analyzing alum blossom characteristics by using image recognition technology |
WO2023029117A1 (en) * | 2021-08-30 | 2023-03-09 | 上海城市水资源开发利用国家工程中心有限公司 | Method and apparatus for analyzing alum floc feature by using image recognition technology |
CN114387499A (en) * | 2022-01-19 | 2022-04-22 | 国家海洋环境监测中心 | Island coastal wetland waterfowl identification method, distribution query system and medium |
CN115035313A (en) * | 2022-06-15 | 2022-09-09 | 云南这里信息技术有限公司 | Black-neck crane identification method, device, equipment and storage medium |
CN114998934A (en) * | 2022-06-27 | 2022-09-02 | 山东省人工智能研究院 | Clothes-changing pedestrian re-identification and retrieval method based on multi-mode intelligent perception and fusion |
CN115761802A (en) * | 2022-11-21 | 2023-03-07 | 广东鉴面智能科技有限公司 | Dynamic bird identification method and system |
Non-Patent Citations (5)
Title |
---|
ZHUANG, PQ (ZHUANG, PEIQIN); WANG, YL (WANG, YALI); QIAO, Y (QIAO, YU): "WildFish++: A Comprehensive Fish Benchmark for Multimedia Research", IEEE TRANSACTIONS ON MULTIMEDIA, vol. 23, pages 3603 - 3617 *
ZHOU XIAOJIAN et al.: "Application of Multi-modal Information Fusion Technology in Animal Recognition Models", China High-Tech, no. 03 *
SUN WEI; WANG XIAOWEI; YOU SHIJUN; SHI HAOKUN; HU YANHUI: "Research on the Application of Pattern Recognition in Feature Extraction of Medical Ultrasound Digital Images", China Medical Equipment, no. 02 *
PENG MINGJIE: "Dragonfly Recognition Algorithm Based on Multi-modal Input Convolutional Neural Networks", Electronics World, no. 02 *
LI GUORUI; HE XIAOHAI; WU XIAOHONG; QING LINBO; TENG QIZHI: "Fine-grained Bird Recognition Based on Cross-layer Feature Fusion of Semantic Information", Computer Applications and Software, no. 04 *
Also Published As
Publication number | Publication date |
---|---|
CN116229518B (en) | 2024-01-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116229518B (en) | Bird species observation method and system based on machine learning | |
Yang et al. | Computer vision models in intelligent aquaculture with emphasis on fish detection and behavior analysis: a review | |
Parham et al. | Animal population censusing at scale with citizen science and photographic identification | |
CN112446342B (en) | Key frame recognition model training method, recognition method and device | |
CN115100512A (en) | Monitoring, identifying and catching method and system for marine economic species and storage medium | |
CN112347995B (en) | Unsupervised pedestrian re-identification method based on fusion of pixel and feature transfer | |
CN114266977B (en) | Multi-AUV underwater target identification method based on super-resolution selectable network | |
CN112488071B (en) | Method, device, electronic equipment and storage medium for extracting pedestrian features | |
CN110796074A (en) | Pedestrian re-identification method based on space-time data fusion | |
CN115188066A (en) | Moving target detection system and method based on cooperative attention and multi-scale fusion | |
Zhou et al. | Cross-weather image alignment via latent generative model with intensity consistency | |
CN109117771A (en) | Incident of violence detection system and method in a kind of image based on anchor node | |
CN116977937A (en) | Pedestrian re-identification method and system | |
Laradji et al. | Affinity lcfcn: Learning to segment fish with weak supervision | |
CN113536946A (en) | Self-supervision pedestrian re-identification method based on camera relation | |
CN115359550A (en) | Gait emotion recognition method and device based on Transformer, electronic device and storage medium | |
WO2019003217A1 (en) | System and method for use on object classification | |
Murthi et al. | A semi-automated system for smart harvesting of tea leaves | |
Li et al. | A holistic marine video dataset | |
Zhang et al. | Multi-Moving Camera Pedestrian Tracking with a New Dataset and Global Link Model | |
CN110738692A (en) | spark cluster-based intelligent video identification method | |
CN109120932B (en) | Video significance prediction method of HEVC compressed domain double SVM model | |
VEERAPPAN et al. | Fish counting through underwater fish detection using deep learning techniques | |
CN117935260A (en) | Labeling method, labeling device, electronic equipment and storage medium | |
Heuvel | Diffuse more objects with fewer labels |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||