CN110796632A - Pig counting device - Google Patents

Pig counting device

Info

Publication number
CN110796632A
Authority
CN
China
Prior art keywords
pig
area
crowded
visible light
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910693494.9A
Other languages
Chinese (zh)
Other versions
CN110796632B (en)
Inventor
徐兵
李志轩
张弘强
荣畅畅
王楷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Yutonghe Digital Technology Co ltd
Original Assignee
Chongqing Little Rich Kang Kang Agricultural Science And Technology Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Little Rich Kang Kang Agricultural Science And Technology Service Co Ltd
Priority to CN201910693494.9A priority Critical patent/CN110796632B/en
Publication of CN110796632A publication Critical patent/CN110796632A/en
Application granted granted Critical
Publication of CN110796632B publication Critical patent/CN110796632B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30242Counting objects in image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present invention relates to a pig counting apparatus comprising: an image acquisition unit, which obtains an infrared image and a visible light image of the same area taken at the same time; an infrared image crowded area determination unit, which determines the pig crowded areas in the infrared image, a pig crowded area being an area where several pigs are crowded together; a visible light image crowded area determination unit, which determines the pig crowded areas in the visible light image; a common crowded area determination unit, which determines the common crowded areas of the visible light image and the infrared image; a fusion unit, which fuses the visible light image portion and the infrared image portion corresponding to a common crowded area to obtain a fused image; an identification unit, which performs convolutional neural network recognition on the fused image and determines the number of pigs in the common crowded area; and a total counting unit, which counts the pig herd according to the recognition result of the identification unit.

Description

Pig counting device
Technical Field
The invention relates to a pig counting device.
Background
Counting live pigs has many practical applications. Ear tags can be used to identify and count pigs, but tagging can injure the animal, and tags easily fall off or are swapped. Alternatively, when the number of pigs is small, they can be counted by recognizing pig faces in an image.
However, pig farms now house ever larger herds and local crowding occurs; in such situations the accuracy of current image recognition needs to be improved.
Disclosure of Invention
The present invention has been made keeping in mind the above problems occurring in the prior art, and an object of the present invention is to provide a pig counting method and apparatus which alleviate or overcome the above disadvantages of the prior art, and at least provide a useful alternative.
According to an aspect of the present invention, there is provided a pig counting device comprising: an image acquisition unit, which obtains an infrared image and a visible light image of the same area taken at the same time; an infrared image crowded area determination unit, which determines the pig crowded areas in the infrared image, a pig crowded area being an area where several pigs are crowded together; a visible light image crowded area determination unit, which determines the pig crowded areas in the visible light image; a common crowded area determination unit, which determines the common crowded areas of the visible light image and the infrared image, a common crowded area being an area identified as a pig crowded area in both the visible light image and the infrared image; a fusion unit, which fuses the visible light image portion and the infrared image portion corresponding to a common crowded area to obtain a fused image; an identification unit, which performs convolutional neural network recognition on the fused image and determines the number of pigs in the common crowded area; and a total counting unit, which counts the pig herd according to the recognition result of the identification unit.
According to one embodiment, the fusion unit performs NSCT transformation on the visible light image portion and the infrared image portion to be fused to obtain corresponding low-frequency subband coefficients and high-frequency subband coefficients, then performs low-frequency subband coefficient fusion and high-frequency subband coefficient fusion according to a fusion rule, and then performs NSCT inverse transformation to obtain a fused image,
wherein the low frequency coefficient fusion is performed as follows:
first, a feature Af common to low-frequency subbands of an infrared image portion and a visible light image portion is obtainedcomm
Afcomm=min(Afinf,Afnature)
Wherein AfinfFeatures of low-frequency subbands representing portions of the infrared image, AfnatureFeatures representing low frequency subbands of a visible light image portion;
then, the characteristic features of the infrared image part are obtained:
Afspcial=Afinf-Afcomm
next, low frequency subbands of the fused image are generated
σ(Afinf) Representing the regional variance, σ (Af), of the infrared imagenature) Representing the regional variance of a natural image.
According to one embodiment, the identification unit comprises a common crowded area identification unit and a visible light image pig crowded area identification unit; the common crowded area identification unit identifies the common crowded areas using Ic-CNN, SANet or CSRNet and counts the number of designated pig contours, a designated pig contour being a contour determined from all or part of the pig's trunk between the forelimbs and the hindlimbs; the visible light image pig crowded area identification unit identifies and counts the pig crowded areas in the visible light image other than the common crowded areas: it first determines, for each such pig crowded area in the visible light image, the corresponding region in the infrared image, and then identifies and counts the pigs in each corresponding infrared region as the number of pigs in the respective pig crowded area of the visible light image.
According to one embodiment, the total counting unit counts the area of the visible light image outside the pig crowded areas based on a pig face recognition algorithm, and obtains the total count using this result together with the recognition result of the visible light image pig crowded area identification unit and the recognition result of the common crowded area identification unit.
According to one embodiment, the visible light image crowded area determination unit determines the pig crowded areas in the visible light image according to the accuracy of the pig face recognition algorithm.
According to one embodiment, the fusion unit first performs NSCT transform on the visible light image portion and the infrared image portion to be fused to obtain corresponding low-frequency subband coefficients and high-frequency subband coefficients, then performs low-frequency subband coefficient fusion and high-frequency subband coefficient fusion according to a fusion rule, and then performs NSCT inverse transform to obtain a fused image, where the low-frequency coefficient fusion is performed as follows:
first, a feature Af common to low-frequency subbands of an infrared image portion and a visible light image portion is obtainedcomm
Afcomm=min(Afinf,Afnature)
Wherein AfinfFeatures of low-frequency subbands representing portions of the infrared image, AfnatureFeatures representing low frequency subbands of a visible light image portion;
then, the characteristic features of the infrared image part are obtained:
Afspcial=Afinf-Afcomm
next, the low frequency subbands of the fused image are generated:
Figure BDA0002148616800000041
σ(Afinf) Representing the regional variance, σ (Af), of the infrared imagenature) Representing the regional variance of natural images, said β being the ratio of the confidence of said pig face recognition algorithm to the confidence of the convolutional neural network recognition algorithm employed by said common congested area recognition unit.
According to one embodiment, the identification unit comprises a common crowded area identification unit and an infrared image pig crowded area identification unit; the common crowded area identification unit identifies the common crowded areas using Ic-CNN, SANet or CSRNet and counts the number of designated pig contours, a designated pig contour being a contour determined from all or part of the pig's trunk between the forelimbs and the hindlimbs; the infrared image pig crowded area identification unit identifies and counts the pig crowded areas in the infrared image other than the common crowded areas: it first determines, for each such pig crowded area in the infrared image, the corresponding region in the visible light image, and then identifies and counts the pigs in each corresponding visible light region as the number of pigs in the respective pig crowded area of the infrared image.
According to one embodiment, the total counting unit counts the area of the infrared image outside the pig crowded areas based on a pig contour recognition algorithm, and obtains the total count using this result together with the recognition result of the infrared image pig crowded area identification unit and the recognition result of the common crowded area identification unit.
According to one embodiment, the infrared image crowded area determination unit determines the pig crowded areas in the infrared image according to the accuracy of the pig contour recognition algorithm.
According to one embodiment, the fusion unit performs NSCT transformation on the visible light image portion and the infrared image portion to be fused to obtain corresponding low-frequency subband coefficients and high-frequency subband coefficients, then performs low-frequency subband coefficient fusion and high-frequency subband coefficient fusion according to a fusion rule, and then performs NSCT inverse transformation to obtain a fused image,
wherein the low frequency coefficient fusion is performed as follows:
first, a feature Af common to low-frequency subbands of an infrared image portion and a visible light image portion is obtainedcomm
Afcomm=min(Afinf,Afnature)
Wherein AfinfFeatures of low-frequency subbands representing portions of the infrared image, AfnatureRepresenting visible light image portionsThe characteristics of the low frequency sub-band of (a);
then, the characteristic features of the infrared image part are obtained:
Afspcial=Afinf-Afcomm
next, the low frequency subbands of the fused image are generated:
Figure BDA0002148616800000051
σ(Afinf) Representing the regional variance, σ (Af), of the infrared imagenature) Representing the regional variance of natural images, said β being the ratio of the confidence of said pig contour recognition algorithm to the confidence of the convolutional neural network recognition algorithm employed by said common congested area recognition unit.
Drawings
The drawings are exemplary only, and are not intended as limitations on the scope of the invention.
Fig. 1 shows a block diagram of a pig counting apparatus according to an embodiment of the present invention.
Fig. 2 shows a schematic block diagram of an identification unit according to an embodiment.
Detailed Description
The following detailed description of the embodiments of the present invention is provided in conjunction with the accompanying drawings, and the descriptions are intended to be illustrative, and not limiting.
Fig. 1 shows a block diagram of a pig counting apparatus according to an embodiment of the present invention. As shown in fig. 1, the pig counting apparatus comprises: an image acquisition unit 10, which obtains an infrared image and a visible light image of the same area taken at the same time (the images may be received from an external source or captured by an infrared camera and a visible light camera); an infrared image crowded area determination unit 20, which determines the pig crowded areas in the infrared image, a pig crowded area being an area where several pigs are crowded together, for example the feeding-trough area while the pigs scramble for feed; a visible light image crowded area determination unit 30, which determines the pig crowded areas in the visible light image; a common crowded area determination unit 40, which determines the common crowded areas of the visible light image and the infrared image, a common crowded area being an area identified as a pig crowded area in both images; a fusion unit 50, which fuses the visible light image portion and the infrared image portion corresponding to a common crowded area to obtain a fused image; an identification unit 60, which performs convolutional neural network (CNN) recognition on the fused image and determines the number of pigs in the common crowded area; and a total counting unit 70, which counts the pig herd according to the recognition results of the identification unit 60.
When determining the pig crowded areas, the infrared image crowded area determination unit 20 may decide according to the accuracy of the infrared image recognition algorithm; that is, only areas crowded enough to reduce the recognition accuracy are classified as pig crowded areas, so the minimum number of pigs that constitutes a crowded area may differ from algorithm to algorithm.
Likewise, when determining the pig crowded areas, the visible light image crowded area determination unit 30 may decide according to the accuracy of the visible light image recognition algorithm; that is, only areas crowded enough to reduce the recognition accuracy are classified as pig crowded areas, so the minimum number of pigs that constitutes a crowded area in the visible light image may differ from that in the infrared image.
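As an illustration only (not a limitation of the embodiments above), once the two determination units have produced binary crowded-area masks, the common crowded area reduces to their intersection, assuming the infrared and visible light images are registered to the same pixel grid; the function name below is purely illustrative.

    import numpy as np

    def common_crowded_mask(ir_mask: np.ndarray, vis_mask: np.ndarray) -> np.ndarray:
        """Pixels flagged as a pig crowded area in BOTH the infrared mask and the
        visible light mask (a sketch of the common crowded area determination
        unit 40, assuming the two images are already registered)."""
        if ir_mask.shape != vis_mask.shape:
            raise ValueError("masks must be registered to the same size")
        return np.logical_and(ir_mask, vis_mask)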
The fusion unit 50 may perform the fusion by any method known now or in the future. According to one embodiment, NSCT (nonsubsampled contourlet transform) is first applied to the visible light image and the infrared image separately to obtain the corresponding low-frequency subband coefficients and high-frequency subband coefficients; the coefficients are then fused according to a fusion rule, and the inverse NSCT finally yields the fused image. An infrared image of pigs shows a clear contrast between the pig bodies and the pigsty background, while the different parts of a pig body also differ in temperature, the ears and snout differing from the body most strongly. In the visible light image the grey levels of the different parts of a pig body are close to one another, but the contrast with the background is small. Conventional fusion schemes typically extract features for pig face recognition and posture recognition. The inventors found, however, that enhancing texture and fine feature points contributes little to counting the total number of pigs while slowing the computation. An algorithm is therefore provided that mainly uses the grey-scale-homogeneous parts of the infrared image to better separate the corresponding parts of the visible light image from the background. The high-frequency coefficients mainly carry texture and detail and can be fused by any of the methods available in the field. The low-frequency coefficients can be fused as follows:
first, a feature Af common to low-frequency subbands of an infrared image and a visible light image is obtainedcomm
Afcomm=min(Afinf,Afnature)
Wherein AfinfFeatures, Af, representing low-frequency subbands of the infrared imagenatureRepresenting the characteristics of the low frequency subbands of the visible image.
Then, the characteristic features of the infrared image are obtained:
Afspcial=Afinf-Afcomm
next, low frequency subbands of the fused image are generated
Figure BDA0002148616800000081
σ(Afinf) Representing the regional variance, σ (Af), of the infrared imagenature) Representing the regional variance of a natural image.
With this method the subsequent recognition is faster and the pig count is more accurate.
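The following sketch restates the low-frequency fusion rule in code. The first two steps (Af_comm and Af_spcial) follow the text directly; because the expression for Af_fuse itself is reproduced only as an image in the original publication, the final recombination shown here (the visible light base scaled by β plus the variance-weighted infrared-specific feature) is an assumption, not the patented formula. β = 1 corresponds to the basic rule; the confidence-ratio β is introduced further below.

    import numpy as np

    def fuse_low_freq(af_inf: np.ndarray, af_nat: np.ndarray, beta: float = 1.0) -> np.ndarray:
        """Sketch of the low-frequency NSCT subband fusion.

        af_inf : low-frequency subband of the infrared image portion
        af_nat : low-frequency subband of the visible light image portion
        beta   : confidence ratio described later in the text (1.0 = basic rule)

        Af_comm and Af_spcial follow the published text; the recombination into
        Af_fuse is an ASSUMED reconstruction, since the original formula is given
        only as an image.
        """
        af_comm = np.minimum(af_inf, af_nat)      # Af_comm = min(Af_inf, Af_nature)
        af_spcial = af_inf - af_comm              # Af_spcial = Af_inf - Af_comm
        sigma_inf = float(np.var(af_inf))         # regional variance of the infrared subband
        sigma_nat = float(np.var(af_nat))         # regional variance of the visible subband
        w = sigma_inf / (sigma_inf + sigma_nat + 1e-12)
        # assumed form: visible light base plus variance-weighted infrared-specific feature
        return beta * af_nat + w * af_spcial

The intent, as described above, is to keep the visible light content while injecting only the infrared-specific information that separates the pigs from the background.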
Fig. 2 shows a schematic block diagram of the identification unit according to an embodiment. As shown in fig. 2, according to one embodiment, the identification unit 60 includes a common crowded area identification unit 601, a visible light image pig crowded area identification unit 602, and an infrared image pig crowded area identification unit 603.
The common crowded area identification unit 601 may employ Ic-CNN, SANet, CSRNet or similar networks to produce a density map of the common crowded area and count the pig herd. In crowd counting, Ic-CNN refines a low-resolution density map into a high-resolution one, while the other methods extract multi-scale head features; in the present invention the same networks are applied to pig herd counting instead. Ic-CNN, SANet and CSRNet are all examples of the convolutional neural network recognition used in the invention: they perform end-to-end recognition and count the number of designated pig contours, a designated pig contour being a contour determined from all or part of the pig's trunk between the forelimbs and the hindlimbs.
According to one embodiment, a single-channel image of the same size as the original image is generated with all pixels set to 0; the point of each designated pig contour is then set to 1, and the image is smoothed by Gaussian filtering to form the density map. It should be noted that, unlike in human crowd density estimation, the head of a pig tends to be cooler, and parts of the head (for example the ears) are suppressed by the fusion described above, so the designated pig contour may be taken from all or part of the region between the forelimbs and the hindlimbs. In one embodiment, the portion from a first distance behind the forelimbs to a second distance in front of the hindlimbs may be selected. The first distance and the second distance may be chosen according to the breed, growth stage and season of the pigs, so as to select the portion that contrasts most with the pigsty background. The pig contour used during recognition is determined from this portion.
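A minimal sketch of this density map construction follows, assuming one annotated point per designated pig contour; the value of sigma and the helper name are illustrative, as the text does not fix them.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def make_density_map(height: int, width: int, points, sigma: float = 4.0) -> np.ndarray:
        """Single-channel map of the original image size: all zeros, a 1 at each
        designated-pig-contour point, then Gaussian filtering. Summing the map
        estimates the number of pigs."""
        density = np.zeros((height, width), dtype=np.float32)
        for row, col in points:                       # points: iterable of (row, col) annotations
            density[int(row), int(col)] = 1.0
        return gaussian_filter(density, sigma=sigma)

    # e.g. three annotated pigs -> the map sums to approximately 3
    dm = make_density_map(240, 320, [(50, 60), (120, 200), (200, 100)])
    print(round(float(dm.sum())))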
The visible light image pig crowded region identification unit 602 identifies and counts the pig crowded regions in the visible light image other than the common crowded regions. According to one embodiment, it first determines, for each such pig crowded region in the visible light image, the corresponding region in the infrared image, and then identifies and counts the pigs in each corresponding infrared region as the number of pigs in the respective pig crowded region of the visible light image.
Similarly, the infrared image pig crowded region identification unit 603 identifies and counts the pig crowded regions in the infrared image other than the common crowded regions. According to one embodiment, it first determines, for each such pig crowded region in the infrared image, the corresponding region in the visible light image, and then identifies and counts the pigs in each corresponding visible light region as the number of pigs in the respective pig crowded region of the infrared image.
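Both units therefore count a crowded region of one image by looking at the co-located, uncrowded region of the other image. A minimal sketch of that step follows, assuming the two images share pixel coordinates (i.e. are registered) and with count_pigs_in_region standing in for whichever per-animal recognizer is used (pig contours for infrared, pig faces for visible light); neither name is defined by the patent.

    import numpy as np

    def count_via_other_modality(other_image: np.ndarray, bbox, count_pigs_in_region) -> int:
        """Count pigs for a region that is crowded in one image by cropping the
        co-located region from the other (registered) image, where the same pigs
        are assumed not to be crowded, and counting there."""
        top, left, bottom, right = bbox               # region in shared pixel coordinates
        patch = other_image[top:bottom, left:right]
        return count_pigs_in_region(patch)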
According to one embodiment, the total counting unit 70 may recognize the area of the visible light image outside the pig crowded areas and obtain the total count using this result together with the recognition result of the visible light image pig crowded area identification unit 602 and the recognition result of the common crowded area identification unit 601. According to one embodiment, when the total counting unit 70 recognizes the area outside the pig crowded areas in the visible light image, a pig face recognition method may be used for counting; counting on the basis of recognized pig faces further improves the accuracy.
In this case, when the low-frequency subband of the fused image is generated, a coefficient β is applied to Af_nature, where β is the ratio of the confidence of the pig face recognition algorithm to the confidence of the convolutional neural network recognition algorithm employed by the common crowded area identification unit 601 (the modified expression is reproduced only as an equation image in the original publication).
In this way the confidence of the downstream algorithm is taken into account during fusion, providing a degree of feedback from the later stage, so that recognition of the common crowded area becomes more accurate.
According to another embodiment, the total counting unit 70 may recognize the area of the infrared image outside the pig crowded areas and obtain the total count (by adding the parts up) using this result together with the recognition result of the infrared image pig crowded area identification unit 603 and the recognition result of the common crowded area identification unit 601. When the total counting unit 70 recognizes the area outside the pig crowded areas in the infrared image, counting based on a pig contour recognition algorithm may be used.
In this case, when the low-frequency subband of the fused image is generated, a coefficient β is applied to Af_nature, where β is the ratio of the confidence of the pig contour recognition algorithm to the confidence of the convolutional neural network recognition algorithm employed by the common crowded area identification unit 601 (the modified expression is likewise reproduced only as an equation image in the original publication).
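Under the same assumptions as the fusion sketch given earlier, either variant of β would simply be passed to fuse_low_freq; the confidence values below are illustrative placeholders, not quantities defined by the patent.

    import numpy as np

    # illustrative confidences; in practice they would come from the pig face
    # (or pig contour) recognizer and from the crowd-counting CNN respectively
    conf_pig_face, conf_crowd_cnn = 0.90, 0.80
    beta = conf_pig_face / conf_crowd_cnn          # coefficient applied to Af_nature

    af_inf = np.random.rand(64, 64)                # stand-in low-frequency subbands
    af_nat = np.random.rand(64, 64)
    fused_low = fuse_low_freq(af_inf, af_nat, beta=beta)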
As follows from the above, the infrared image pig crowded region identification unit 603 and the visible light image pig crowded region identification unit 602 therefore need not both be provided.
According to one embodiment, the total counting unit 70 may recognize the area of the infrared image outside the pig crowded areas and combine this count with the results of the infrared image pig crowded area identification unit 603 and of the common crowded area identification unit 601 to obtain a first result; it may likewise recognize the area of the visible light image outside the pig crowded areas and combine this count with the results of the visible light image pig crowded area identification unit 602 and of the common crowded area identification unit 601 to obtain a second result; the first result and the second result are then averaged, or weighted-averaged, to give the total count.
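A one-line sketch of this final combination, assuming the two totals are already available as numbers; equal weights give the plain average, and rounding to an integer head count is an assumption of the sketch rather than something the text specifies.

    def combine_totals(first_result: float, second_result: float,
                       w_first: float = 0.5, w_second: float = 0.5) -> int:
        """Average (or weighted-average) the infrared-based and visible-light-based
        totals into a single head count."""
        return round((w_first * first_result + w_second * second_result) / (w_first + w_second))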
With the method of the invention, the common crowded areas, which are hard to recognize, are first separated out and recognized with an end-to-end CNN method of the kind used for crowd counting, which reduces the overall workload and speeds up recognition. Further, for a pig crowded area that appears only in the infrared image or only in the visible light image, recognition exploits the fact that the same area is not crowded in the other image, which likewise improves accuracy and efficiency. The uncrowded portions of the visible light and infrared images may be recognized with whatever algorithm, known now or later, offers the best accuracy.
The pig counting device may be implemented by a computer (possibly combined with cameras and the like) comprising storage means and computing means (a CPU or the like). The memory of the computer stores computer software which, when executed (including after being compiled), causes the computer to implement the pig counting device of the invention.
An aspect of the invention also includes the computer software and a medium storing the computer software.
It should be noted that the described embodiments are only some embodiments of the invention, not all embodiments. Based on the idea of the invention, any other embodiments within the scope of the claims of the invention belong to the protection scope of the invention.

Claims (10)

1. A pig counting device, comprising:
the image acquisition unit is used for acquiring an infrared image and a visible light image of the same area shot at the same time;
the infrared image crowded area determining unit is used for determining a pig crowded area in the infrared image, wherein the pig crowded area is an area where a plurality of pigs are crowded together;
the visible light image crowded area determining unit is used for determining a pig crowded area in the visible light image;
a common crowded area determination unit that determines a common crowded area of the visible light image and the infrared image, the common crowded area being a pig crowded area that is identified as a pig crowded area in both the visible light image and the infrared image;
the fusion unit is used for fusing the visible light image part and the infrared image part corresponding to the common crowded area part to obtain a fusion image;
the identification unit is used for carrying out convolutional neural network identification on the fusion image and determining the number of pigs in a common crowded area; and
a total counting unit that counts the pig herd according to the identification result of the identification unit.
2. The pig counting device according to claim 1, wherein the fusion unit performs NSCT transformation on the visible light image portion and the infrared image portion to be fused to obtain corresponding low-frequency subband coefficients and high-frequency subband coefficients, performs low-frequency subband coefficient fusion and high-frequency subband coefficient fusion according to a fusion rule, and performs NSCT inverse transformation to obtain a fused image,
wherein the low frequency coefficient fusion is performed as follows:
first, a feature Af common to low-frequency subbands of an infrared image portion and a visible light image portion is obtainedcomm
Afcomm=min(Afinf,Afnature)
Wherein AfinfFeatures, Af, representing low-frequency subbands of the infrared image portionnatureFeatures representing low frequency subbands of the visible light image portion;
then, the characteristic features of the infrared image part are obtained:
Afspcial=Afinf-Afcomm
next, a low-frequency subband Af of the fused image is generatedfuse,
Figure FDA0002148616790000021
σ(Afinf) Representing the regional variance, σ (Af), of the infrared imagenature) Representing the regional variance of a natural image.
3. The pig counting device according to claim 1, wherein the identification unit comprises a common crowded area identification unit and a visible light image pig crowded area identification unit;
the common crowded area identification unit identifies the common crowded area using Ic-CNN, SANet or CSRNet and counts the number of designated pig contours, a designated pig contour being a contour determined from all or part of the pig's trunk between the forelimbs and the hindlimbs;
the visible light image pig crowded area identification unit identifies and counts the pig crowded areas in the visible light image other than the common crowded area; it first determines, for each such pig crowded area in the visible light image, the corresponding region in the infrared image, and identifies and counts the pigs in each corresponding infrared region as the number of pigs in the respective pig crowded area of the visible light image.
4. The pig counting device according to claim 3, wherein the total counting unit counts an area other than the pig crowded area in the visible light image based on a pig face recognition algorithm, and obtains a total counting result using the recognition result of the visible light image pig crowded area recognition unit and the recognition result of the common crowded area recognition unit.
5. The pig counting device according to claim 4, wherein the visible light image crowded area determination unit determines the pig crowded area in the visible light image according to the accuracy of a pig face recognition algorithm.
6. The pig counting device according to claim 4,
the fusion unit firstly carries out NSCT transformation on the visible light image part and the infrared image part to be fused respectively to obtain corresponding low-frequency subband coefficients and high-frequency subband coefficients, then carries out low-frequency subband coefficient fusion and high-frequency subband coefficient fusion according to a fusion rule, and then obtains a fused image through NSCT inverse transformation,
wherein the low frequency coefficient fusion is performed as follows:
first, a feature Af common to low-frequency subbands of an infrared image portion and a visible light image portion is obtainedcomm
Afcomm=min(Afinf,Afnature)
Wherein AfinfFeatures of low-frequency subbands representing portions of the infrared image, AfnatureFeatures representing low frequency subbands of a visible light image portion;
then, the characteristic features of the infrared image part are obtained:
Afspcial=Afinf-Afcomm
next, the low frequency subbands of the fused image are generated:
Figure FDA0002148616790000041
σ(Afinf) Representing the regional variance, σ (Af), of the infrared imagenature) Representing the regional variance of natural images, said β being the ratio of the confidence of said pig face recognition algorithm to the confidence of the convolutional neural network recognition algorithm employed by said common congested area recognition unit.
7. The pig counting device according to claim 1, wherein the identification unit comprises a common crowded area identification unit and an infrared image pig crowded area identification unit;
the common crowded area identification unit identifies the common crowded areas using an Ic-CNN, SANet or CSRNet algorithm and counts the number of designated pig contours, a designated pig contour being a contour determined from all or part of the pig's trunk between the forelimbs and the hindlimbs;
the infrared image pig crowded area identification unit identifies and counts the pig crowded areas in the infrared image other than the common crowded area; it first determines, for each such pig crowded area in the infrared image, the corresponding region in the visible light image, and identifies and counts the pigs in each corresponding visible light region as the number of pigs in the respective pig crowded area of the infrared image.
8. The pig counting device according to claim 7, wherein the total counting unit counts an area outside the pig crowded area in the infrared image based on a pig contour recognition algorithm, and obtains a total recognition result by using the recognition result of the infrared image pig crowded area recognition unit and the recognition result of the common crowded area recognition unit.
9. The pig counting device according to claim 8, wherein the infrared image crowded area determination unit determines the pig crowded area in the infrared image according to the accuracy of a pig contour recognition algorithm.
10. The pig counting device according to claim 8, wherein the fusion unit performs NSCT transformation on the visible light image portion and the infrared image portion to be fused to obtain corresponding low-frequency subband coefficients and high-frequency subband coefficients, performs low-frequency subband coefficient fusion and high-frequency subband coefficient fusion according to a fusion rule, and performs NSCT inverse transformation to obtain a fused image,
wherein the low frequency coefficient fusion is performed as follows:
first, a feature Af common to low-frequency subbands of an infrared image portion and a visible light image portion is obtainedcomm
Afcomm=min(Afinf,Afnature)
Wherein AfinfFeatures of low-frequency subbands representing portions of the infrared image, AfnatureFeatures representing low frequency subbands of a visible light image portion;
then, the characteristic features of the infrared image part are obtained:
Afspcial=Afinf-Afcomm
next, the low frequency subbands of the fused image are generated:
Figure FDA0002148616790000051
σ(Afinf) Representing the regional variance, σ (Af), of the infrared imagenature) Representing the regional variance of natural images, said β being the ratio of the confidence of said pig contour recognition algorithm to the confidence of the convolutional neural network recognition algorithm employed by said common congested area recognition unit.
CN201910693494.9A 2019-07-30 2019-07-30 Pig counting device Active CN110796632B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910693494.9A CN110796632B (en) 2019-07-30 2019-07-30 Pig counting device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910693494.9A CN110796632B (en) 2019-07-30 2019-07-30 Pig counting device

Publications (2)

Publication Number Publication Date
CN110796632A true CN110796632A (en) 2020-02-14
CN110796632B CN110796632B (en) 2023-08-11

Family

ID=69426937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910693494.9A Active CN110796632B (en) 2019-07-30 2019-07-30 Pig counting device

Country Status (1)

Country Link
CN (1) CN110796632B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070791A (en) * 2020-09-21 2020-12-11 深圳喜为智慧科技有限公司 Method and system for improving accuracy and efficiency of animal husbandry individual points
CN112200003A (en) * 2020-09-14 2021-01-08 浙江大华技术股份有限公司 Method and device for determining feed feeding amount of pig farm

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4652480B1 (en) * 2010-07-02 2011-03-16 オオクマ電子株式会社 Device for counting the number of drugs in a medicine package
US20110196661A1 (en) * 2009-05-01 2011-08-11 Spicola Tool, Llc Remote Contactless Stereoscopic Mass Estimation System
US20120048207A1 (en) * 2010-08-31 2012-03-01 Technologies Holding Corp. Automated System for Applying Disinfectant to the Teats of Dairy Livestock
CN102637297A (en) * 2012-03-21 2012-08-15 武汉大学 Visible light and infrared image fusion method based on Curvelet transformation
CN103076073A (en) * 2011-10-26 2013-05-01 因诺威泰克有限公司 Device for determining weight and number of chick and sorting device comprising the same
CN105719263A (en) * 2016-01-22 2016-06-29 昆明理工大学 Visible light and infrared image fusion algorithm based on NSCT domain bottom layer visual features
CN106780419A (en) * 2016-11-28 2017-05-31 深圳汇通智能化科技有限公司 A kind of missing child automatic recognition system based on big data
CN107230196A (en) * 2017-04-17 2017-10-03 江南大学 Infrared and visible light image fusion method based on non-down sampling contourlet and target confidence level
TWI611374B (en) * 2017-05-04 2018-01-11 Chunghwa Telecom Co Ltd Gender and age identification method for vertical image flow counting
CN107977924A (en) * 2016-10-21 2018-05-01 杭州海康威视数字技术股份有限公司 A kind of image processing method based on dual sensor imaging, system
CN108171162A (en) * 2017-12-27 2018-06-15 重庆交通开投科技发展有限公司 Crowded degree detection method, apparatus and system
CN108207653A (en) * 2018-01-31 2018-06-29 佛山市神风航空科技有限公司 A kind of grazing system based on unmanned plane
US20180300884A1 (en) * 2016-01-08 2018-10-18 Flir Systems Trading Belgium Bvba Thermal-image based object detection and heat map generation systems and methods
CN109461151A (en) * 2018-11-05 2019-03-12 上海睿畜电子科技有限公司 A kind of method, apparatus and system that livestock number is checked

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110196661A1 (en) * 2009-05-01 2011-08-11 Spicola Tool, Llc Remote Contactless Stereoscopic Mass Estimation System
JP4652480B1 (en) * 2010-07-02 2011-03-16 オオクマ電子株式会社 Device for counting the number of drugs in a medicine package
US20120048207A1 (en) * 2010-08-31 2012-03-01 Technologies Holding Corp. Automated System for Applying Disinfectant to the Teats of Dairy Livestock
CN103076073A (en) * 2011-10-26 2013-05-01 因诺威泰克有限公司 Device for determining weight and number of chick and sorting device comprising the same
CN102637297A (en) * 2012-03-21 2012-08-15 武汉大学 Visible light and infrared image fusion method based on Curvelet transformation
US20180300884A1 (en) * 2016-01-08 2018-10-18 Flir Systems Trading Belgium Bvba Thermal-image based object detection and heat map generation systems and methods
CN105719263A (en) * 2016-01-22 2016-06-29 昆明理工大学 Visible light and infrared image fusion algorithm based on NSCT domain bottom layer visual features
CN107977924A (en) * 2016-10-21 2018-05-01 杭州海康威视数字技术股份有限公司 A kind of image processing method based on dual sensor imaging, system
CN106780419A (en) * 2016-11-28 2017-05-31 深圳汇通智能化科技有限公司 A kind of missing child automatic recognition system based on big data
CN107230196A (en) * 2017-04-17 2017-10-03 江南大学 Infrared and visible light image fusion method based on non-down sampling contourlet and target confidence level
TWI611374B (en) * 2017-05-04 2018-01-11 Chunghwa Telecom Co Ltd Gender and age identification method for vertical image flow counting
CN108171162A (en) * 2017-12-27 2018-06-15 重庆交通开投科技发展有限公司 Crowded degree detection method, apparatus and system
CN108207653A (en) * 2018-01-31 2018-06-29 佛山市神风航空科技有限公司 A kind of grazing system based on unmanned plane
CN109461151A (en) * 2018-11-05 2019-03-12 上海睿畜电子科技有限公司 A kind of method, apparatus and system that livestock number is checked

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wu Cuiying; Zhou Tao; Lu Huiling; Wang Yuanyuan: "Feature-level image fusion and its application in medical images", Video Engineering (电视技术), no. 12, pages 135-147 *
Chen Yong: "Face recognition technology builds a safer society", China Security (中国安防), no. 04, pages 62-64 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200003A (en) * 2020-09-14 2021-01-08 浙江大华技术股份有限公司 Method and device for determining feed feeding amount of pig farm
CN112200003B (en) * 2020-09-14 2024-02-20 浙江大华技术股份有限公司 Method and device for determining feed feeding amount in pig farm
CN112070791A (en) * 2020-09-21 2020-12-11 深圳喜为智慧科技有限公司 Method and system for improving accuracy and efficiency of animal husbandry individual points

Also Published As

Publication number Publication date
CN110796632B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN109815919B (en) Crowd counting method, network, system and electronic equipment
WO2022033150A1 (en) Image recognition method, apparatus, electronic device, and storage medium
JP7026456B2 (en) Image processing device, learning device, focus control device, exposure control device, image processing method, learning method, and program
CN110276411B (en) Image classification method, device, equipment, storage medium and medical electronic equipment
US9189867B2 (en) Adaptive image processing apparatus and method based in image pyramid
CN112446270A (en) Training method of pedestrian re-identification network, and pedestrian re-identification method and device
Fendri et al. Fusion of thermal infrared and visible spectra for robust moving object detection
CN112446380A (en) Image processing method and device
CN112036455B (en) Image identification method, intelligent terminal and storage medium
CN105260750B (en) A kind of milk cow recognition methods and system
CN111401215B (en) Multi-class target detection method and system
CN111008935B (en) Face image enhancement method, device, system and storage medium
CN110796632A (en) Pig counting device
CN113657163B (en) Behavior recognition method, electronic device and storage medium
CN110991443A (en) Key point detection method, image processing method, key point detection device, image processing device, electronic equipment and storage medium
CN112613471B (en) Face living body detection method, device and computer readable storage medium
CN113781421A (en) Underwater-based target identification method, device and system
CN111626251A (en) Video classification method, video classification device and electronic equipment
JP7300027B2 (en) Image processing device, image processing method, learning device, learning method, and program
CN112101195A (en) Crowd density estimation method and device, computer equipment and storage medium
CN110969642B (en) Video filtering method and device, electronic equipment and storage medium
CN110472632B (en) Character segmentation method and device based on character features and computer storage medium
CN110705564B (en) Image recognition method and device
Kumar et al. Enhancement of satellite and underwater image utilizing luminance model by color correction method
JP6851246B2 (en) Object detector

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230707

Address after: 401121 27F, building B, Pisces, No. 55, middle section of Huangshan Avenue, Yubei District, Chongqing

Applicant after: Chongqing Yutonghe Digital Technology Co.,Ltd.

Address before: 400050 floor 24, Yulong Building, No. 26-4, Yangjiaping Main Street, Jiulongpo District, Chongqing

Applicant before: CHONGQING XIAOFUNONGKANG AGRICULTURAL TECHNOLOGY SERVICES Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant