CN110263753B - Object statistical method and device - Google Patents

Object statistical method and device

Info

Publication number
CN110263753B
CN110263753B (application CN201910572394.0A)
Authority
CN
China
Prior art keywords
counted
image
area surrounding
rectangular area
ellipse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910572394.0A
Other languages
Chinese (zh)
Other versions
CN110263753A (en)
Inventor
陈奕名
苏睿
张为明
Current Assignee
Jingdong Technology Information Technology Co Ltd
Original Assignee
Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority to CN201910572394.0A
Publication of CN110263753A
Priority to PCT/CN2020/083513 (published as WO2020258977A1)
Application granted
Publication of CN110263753B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an object statistics method and device. The method comprises the following steps: acquiring an image captured by a fixed camera in a monitored area; determining a rectangular area surrounding each object to be counted in the image; converting the rectangular area surrounding each object to be counted into a preset shape that fits the body type of the object to be counted; and identifying and counting all objects to be counted in the image according to all the preset shapes obtained by conversion. The invention can accurately identify and count the detection objects in images captured by a fixed camera.

Description

Object statistical method and device
Technical Field
The invention relates to the technical field of image processing, in particular to an object statistical method and device.
Background
Liveness detection is a method used in some authentication scenarios to determine the true physiological characteristics of a subject. In face recognition applications, liveness detection can verify whether the user operating the system is a real, live person by combining actions such as blinking, opening the mouth, shaking the head and nodding with techniques such as facial key-point localization and face tracking. It can effectively resist common attack means such as photos, face swapping, masks, occlusion and screen re-capture.
In contrast to face recognition, live-animal identification represents the exploration of artificial intelligence techniques in new scenarios and environments. Biological liveness identification has broad application prospects: it can help a farm track each breeding object, such as pigs, cattle and sheep, enabling daily information management and whole-process tracing.
Disclosure of Invention
In view of the above, the present invention provides an object counting method and device, which can accurately identify and count the number of detection objects in an image captured by a fixed camera.
In order to achieve the purpose, the invention provides the following technical scheme:
an object statistics method, comprising:
acquiring an image shot by a fixed camera in a monitoring area;
determining a rectangular area surrounding each object to be counted in the image;
converting a rectangular area surrounding each object to be counted into a preset shape fitting the body type of the object to be counted;
and identifying and counting all objects to be counted in the image according to all the preset shapes obtained by conversion.
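The four steps above can be wired together as in the following hypothetical sketch. The `detect_rects`, `to_shape` and `suppress` callables are illustrative assumptions standing in for the R2CNN detector, the shape conversion and the non-maximum suppression described later, not part of the disclosure:

```python
def count_objects(image, detect_rects, to_shape, suppress):
    """Glue for the four claimed steps: detect a rectangular region per
    object, convert each region to the preset shape, suppress duplicate
    detections, and return the remaining count."""
    rects = detect_rects(image)             # determine rectangular areas
    shapes = [to_shape(r) for r in rects]   # convert to the preset shape
    return len(suppress(shapes))            # identify and count
```

For example, with trivial stand-ins that detect three regions and suppress one duplicate, the function returns 2.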
Preferably, detecting the image with a pre-trained R2CNN detection model to determine a rectangular region surrounding each object to be counted in the image comprises:
determining a horizontal rectangular frame surrounding each object to be counted by using a region proposal network (RPN) algorithm;
generating image features of each horizontal rectangular frame by using a region-of-interest pooling (ROI Pooling) algorithm, performing regression analysis on the image features, and adjusting the horizontal rectangular frame into an inclined rectangular frame according to the regression analysis result, wherein the regression analysis result comprises translation and rotation angle information corresponding to the horizontal rectangular frame.
Preferably, the preset shape fitting the body type of the object to be counted is an ellipse;
converting the rectangular area surrounding each object to be counted into a preset shape fitting the body type of the object to be counted comprises:
setting the central point of a rectangular area surrounding the object to be counted as the central point of an ellipse;
setting the inclination angle of a rectangular area surrounding the object to be counted as the inclination angle of an ellipse;
setting the width and the height of the rectangular area surrounding the object to be counted as the major axis and the minor axis of the ellipse, respectively;
and generating an ellipse according to the set central point, major axis, minor axis and inclination angle, and replacing the rectangular area surrounding the object to be counted with the ellipse.
Preferably, identifying and counting all objects to be counted in the image according to all the converted preset shapes comprises:
performing non-maximum suppression on each preset shape obtained by conversion to obtain an identification result of the objects to be counted;
and counting the number of all objects to be counted in the identification result.
An object statistics apparatus comprising:
an acquisition unit, configured to acquire an image captured by a fixed camera in a monitored area;
the determining unit is used for determining a rectangular area surrounding each object to be counted in the image;
the conversion unit is used for converting the rectangular area surrounding each object to be counted into a preset shape fitting the body type of the object to be counted;
and the recognition and statistics unit is used for recognizing and counting all objects to be counted in the image according to all the preset shapes obtained by conversion of the conversion unit.
Preferably, the determining unit detects the image with a pre-trained R2CNN detection model to determine a rectangular region surrounding each object to be counted in the image, including:
determining a horizontal rectangular frame surrounding each object to be counted by using a region proposal network (RPN) algorithm;
generating image features of each horizontal rectangular frame by using a region-of-interest pooling (ROI Pooling) algorithm, performing regression analysis on the image features, and adjusting the horizontal rectangular frame into an inclined rectangular frame according to the regression analysis result, wherein the regression analysis result comprises translation and rotation angle information corresponding to the horizontal rectangular frame.
Preferably, the preset shape fitting the body type of the object to be counted is an ellipse;
the conversion unit converts the rectangular area surrounding each object to be counted into a preset shape fitting the body type of the object to be counted, and comprises:
setting the central point of a rectangular area surrounding the object to be counted as the central point of an ellipse;
setting the inclination angle of a rectangular area surrounding the object to be counted as the inclination angle of an ellipse;
setting the width and the height of the rectangular area surrounding the object to be counted as the major axis and the minor axis of the ellipse, respectively;
and generating an ellipse according to the set central point, major axis, minor axis and inclination angle, and replacing the rectangular area surrounding the object to be counted with the ellipse.
Preferably, the identifying and counting unit identifies and counts all objects to be counted in the image according to all the preset shapes obtained by conversion, including:
performing non-maximum suppression on each preset shape obtained by conversion to obtain an identification result of the objects to be counted;
and counting the number of all objects to be counted in the identification result.
An electronic device, comprising: the system comprises at least one processor and a memory connected with the at least one processor through a bus; the memory stores one or more computer programs executable by the at least one processor; wherein the at least one processor, when executing the one or more computer programs, performs the steps of the object statistics method described above.
A computer-readable storage medium storing one or more computer programs which, when executed by a processor, implement the object statistics method described above.
According to the technical scheme, after the rectangular area surrounding each object to be counted in the image captured by the fixed camera is determined, the rectangular area is converted in shape so that it better fits the body type of the object to be counted. Each object to be counted can then be distinguished more accurately, in particular two or more partially overlapping objects, so the accuracy of subsequent identification and counting of the objects to be counted can be effectively improved.
Drawings
FIG. 1 is a flow chart of an object statistics method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an identification result of an object to be counted according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of another recognition result of objects to be counted according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an object statistics apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention are described in detail below with reference to the accompanying drawings according to embodiments.
In the intelligent breeding scene, the number of breeding objects is counted by using machine vision, so that the expenditure of human resources can be reduced to the maximum extent. The technical scheme provided by the invention can be used for counting the number of the breeding objects in the intelligent breeding scene so as to reduce the labor cost.
Referring to fig. 1, fig. 1 is a flowchart of an object statistics method according to an embodiment of the present invention, and as shown in fig. 1, the method mainly includes the following steps:
and 101, acquiring an image shot by a fixed camera in a monitored area.
In an intelligent breeding scene, in order to count the number of breeding objects, a fixed camera can be deployed in each monitored area and used to periodically capture images of the monitored area to which it belongs.
In the embodiment of the invention, all breeding objects (objects to be counted) in the monitored scene of a fixed camera are identified and counted by acquiring the images periodically captured by that camera and analyzing them. This reduces the human resource expenditure of manual counting by workers and thereby lowers labor costs.
Step 102: determining a rectangular area surrounding each object to be counted in the image.
In practical applications, various image-processing methods can be used to determine the rectangular region surrounding each object to be counted in the image. The embodiment of the present invention uses the R2CNN technique. Specifically, a number of training samples of the object to be counted are used in advance to train an R2CNN detection model, which is then used in the object counting process of the present invention. In this step, the image is input into the R2CNN detection model, which performs image detection on the input image and outputs a rectangular region surrounding each object to be counted in the image.
In the embodiment of the invention, the pre-trained R2CNN detection model performs image detection on the image and determines a rectangular area surrounding each object to be counted. The specific implementation process is as follows:
determining a horizontal rectangular frame surrounding each object to be counted in the image by using a region proposal network (RPN) algorithm;
generating image features of each horizontal rectangular frame by using a region-of-interest pooling (ROI Pooling) algorithm, performing regression analysis on the image features, and adjusting the horizontal rectangular frame into an inclined rectangular frame according to the regression analysis result, wherein the regression analysis result comprises translation and rotation angle information corresponding to the horizontal rectangular frame.
Determining the horizontal rectangular frames with the RPN algorithm mainly involves extracting image features at different scales by convolution, including low-level edge and texture features as well as high-level semantic features; by fusing these two kinds of features, complete information surrounding each object to be counted is generated, together with a rectangular frame parallel to the image boundary (called a horizontal rectangular frame).
Most existing liveness detection methods have no explicit directionality and only produce detection results in the horizontal or vertical direction, whereas images for counting breeding objects are usually captured from a top-down viewing angle. Liveness detection in an actual intelligent-breeding production scene therefore differs from an ordinary target-detection task: besides framing the breeding-object information, it adds liveness detection oriented to arbitrary directions.
Therefore, in order to sufficiently identify the information of the objects to be counted, image features of each horizontal rectangular frame may be generated by a region-of-interest pooling (ROI Pooling) algorithm, and regression analysis is then performed on these features. The resulting regression analysis result contains the translation and rotation angle information corresponding to the horizontal rectangular frame; this information indicates the direction adjustment required for the frame, and is the basis for adjusting the horizontal rectangular frame into an inclined rectangular frame with directionality.
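This adjustment step can be sketched minimally as follows. It assumes the regression output is a centre translation (dx, dy) plus a rotation angle in radians; the exact R2CNN parameterisation may differ, and the function names are illustrative only:

```python
import math

def adjust_frame(cx, cy, w, h, dx, dy, angle):
    """Turn a horizontal frame (cx, cy, w, h) into an inclined frame
    (cx, cy, w, h, angle) using the regressed translation and rotation."""
    return (cx + dx, cy + dy, w, h, angle)

def frame_corners(cx, cy, w, h, angle):
    """Corner points of an inclined frame, e.g. for drawing it."""
    c, s = math.cos(angle), math.sin(angle)
    pts = []
    for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1)):
        px, py = sx * w / 2.0, sy * h / 2.0
        # Rotate each half-extent offset and shift by the frame centre.
        pts.append((cx + px * c - py * s, cy + px * s + py * c))
    return pts
```

With a zero angle, `frame_corners` simply returns the axis-aligned corners, which is a quick sanity check of the rotation convention.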
Step 103: converting the rectangular area surrounding each object to be counted into a preset shape fitting the body type of the object to be counted.
In practical application, a shape that fits the body type of the object to be counted can be preset according to its body-type characteristics. After the rectangular region surrounding each object to be counted has been determined, that region can be converted into the preset shape, so that the region framed around each object fits the area the object actually occupies.
In the intelligent breeding scene, different breeding objects have different body types, and the fixed camera generally captures images from a top-down angle, so the part mainly photographed is the trunk of the breeding object. Therefore, when any breeding object is taken as the object to be counted, the shape can be set according to the body type of its trunk; for example, the shape fitting the body type of the object to be counted may be a circle, an ellipse, a rhombus and the like.
Pigs, cattle and sheep are the most common breeding objects in an intelligent breeding scene, and the shapes of their trunks are close to an ellipse. Therefore, when the object to be counted is a pig, a cow or a sheep, the shape fitting its body type can be preset as an ellipse.
When the preset shape fitting the body type of the object to be counted is an ellipse, a rectangular area surrounding each object to be counted can be converted into an elliptical area in the following manner:
setting the central point of a rectangular area surrounding the object to be counted as the central point of an ellipse;
setting the inclination angle of a rectangular area surrounding the object to be counted as the inclination angle of an ellipse;
setting the width and the height of the rectangular area surrounding the object to be counted as the major axis and the minor axis of the ellipse, respectively;
and generating an ellipse according to the set central point, major axis, minor axis and inclination angle, and replacing the rectangular area surrounding the object to be counted with the ellipse.
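The four settings above can be sketched as a small helper. This is an illustration under assumptions: width maps to the major axis and height to the minor axis as the text states, axes are stored as full lengths, and the point-membership test is an extra convenience not described in the patent:

```python
import math

def rect_to_ellipse(cx, cy, w, h, angle):
    """Replace an inclined rectangle by an ellipse with the same centre
    and inclination; width and height become the major and minor axes."""
    return {"cx": cx, "cy": cy, "major": w, "minor": h, "angle": angle}

def ellipse_contains(e, x, y):
    """Whether point (x, y) falls inside the ellipse: rotate the point
    into the ellipse's frame, then test the canonical ellipse equation."""
    dx, dy = x - e["cx"], y - e["cy"]
    c, s = math.cos(e["angle"]), math.sin(e["angle"])
    u = (dx * c + dy * s) / (e["major"] / 2.0)
    v = (-dx * s + dy * c) / (e["minor"] / 2.0)
    return u * u + v * v <= 1.0
```

Because the ellipse is inscribed in the rectangle, it never exceeds the original rectangular area; the corner regions of the rectangle, which rarely contain the animal's trunk, are the parts that get trimmed away.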
If the preset shape fitting the body type of the object to be counted is some other shape, the conversion must be carried out according to that actual shape; during conversion, however, the central point and the inclination must be kept consistent, and the converted shape must not exceed the size of the original rectangular area.
Step 104: identifying and counting all objects to be counted in the image according to all the preset shapes obtained by conversion.
In this step, in order to accurately identify all objects to be counted in the image, non-maximum suppression may be performed on each preset shape obtained by conversion to obtain the identification result of the objects to be counted, and the number of all objects to be counted in the identification result is then obtained by counting.
In practical application, when non-maximum suppression is performed on the regions surrounding the objects to be counted, for two objects with an overlapping region, the larger the overlap, the higher the possibility that one of them is suppressed, and the smaller the overlap, the lower that possibility. Converting the rectangular region surrounding an object to be counted into the preset shape that fits its body type before performing non-maximum suppression therefore reduces the chance that an actually present object is suppressed, effectively improving the accuracy of identification and counting of the objects in the image.
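A sketch of greedy non-maximum suppression over elliptical regions follows. The patent does not specify how ellipse overlap is computed, so the IoU here is a coarse grid-rasterised approximation, an assumption purely for illustration:

```python
import math

def _inside(e, x, y):
    """Point-in-rotated-ellipse test (axes stored as full lengths)."""
    dx, dy = x - e["cx"], y - e["cy"]
    c, s = math.cos(e["angle"]), math.sin(e["angle"])
    u = (dx * c + dy * s) / (e["major"] / 2.0)
    v = (-dx * s + dy * c) / (e["minor"] / 2.0)
    return u * u + v * v <= 1.0

def ellipse_iou(e1, e2, step=0.25):
    """Approximate IoU by scanning a grid over the joint bounding box
    and counting cells covered by each ellipse."""
    r1, r2 = e1["major"] / 2.0, e2["major"] / 2.0
    x0, x1 = min(e1["cx"] - r1, e2["cx"] - r2), max(e1["cx"] + r1, e2["cx"] + r2)
    y0, y1 = min(e1["cy"] - r1, e2["cy"] - r2), max(e1["cy"] + r1, e2["cy"] + r2)
    inter = union = 0
    y = y0
    while y <= y1:
        x = x0
        while x <= x1:
            a, b = _inside(e1, x, y), _inside(e2, x, y)
            inter += a and b
            union += a or b
            x += step
        y += step
    return inter / union if union else 0.0

def nms(ellipses, scores, thr=0.5):
    """Greedy NMS: keep the highest-scoring ellipse, drop any remaining
    ellipse that overlaps a kept one by more than `thr`."""
    order = sorted(range(len(ellipses)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(ellipse_iou(ellipses[i], ellipses[j]) < thr for j in keep):
            keep.append(i)
    return keep
```

Because an ellipse covers less area than its bounding rectangle, two adjacent animals produce a smaller overlap ratio than their rectangles would, which is exactly why the shape conversion reduces wrongly suppressed detections.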
Fig. 2 and fig. 3 are schematic diagrams of the recognition results obtained by applying the method shown in fig. 1 to breeding objects (pigs) in an intelligent breeding scene, using images captured at different time periods. As can be seen from fig. 2 and fig. 3, the monitored area of the fixed camera is a pigsty, and all pigs in the pigsty can be identified by the method of the present invention. It should be noted that the present invention only needs to identify and count the pigs in the monitored area captured by the fixed camera, not pigs outside it; therefore, pigs outside the pigsty need to be used as negative samples when training the R2CNN detection model, so that pigs outside the pigsty are not identified or counted even if the image captured by the fixed camera includes them.
The object statistical method according to the embodiment of the present invention is described in detail above, and the present invention further provides an object statistical apparatus, which is described in detail below with reference to fig. 4.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an object statistics apparatus according to an embodiment of the present invention. As shown in fig. 4, the apparatus includes an obtaining unit 401, a determining unit 402, a converting unit 403, and an identifying and statistics unit 404, wherein:
an obtaining unit 401, configured to obtain an image captured by a fixed camera in a monitored area;
a determining unit 402, configured to determine a rectangular region surrounding each object to be counted in the image;
a conversion unit 403, configured to convert a rectangular region surrounding each object to be counted into a preset shape conforming to the body type of the object to be counted;
and an identifying and counting unit 404, configured to identify and count all objects to be counted in the image according to all preset shapes obtained through conversion by the conversion unit 403.
In the apparatus shown in fig. 4,
the determining unit 402 detects the image by using a pre-trained R2CNN detection model, and determines a rectangular region surrounding each object to be counted in the image, including:
determining a horizontal rectangular frame surrounding each object to be counted by using a region proposal network (RPN) algorithm;
generating image features of each horizontal rectangular frame by using a region-of-interest pooling (ROI Pooling) algorithm, performing regression analysis on the image features, and adjusting the horizontal rectangular frame into an inclined rectangular frame according to the regression analysis result, wherein the regression analysis result comprises translation and rotation angle information corresponding to the horizontal rectangular frame.
In the apparatus shown in fig. 4,
the preset shape fitting the body type of the object to be counted is an ellipse;
the conversion unit 403 converts the rectangular area surrounding each object to be counted into a preset shape fitting the body type of the object to be counted, and includes:
setting the central point of a rectangular area surrounding the object to be counted as the central point of an ellipse;
setting the inclination angle of a rectangular area surrounding the object to be counted as the inclination angle of an ellipse;
setting the width and the height of the rectangular area surrounding the object to be counted as the major axis and the minor axis of the ellipse, respectively;
and generating an ellipse according to the set central point, major axis, minor axis and inclination angle, and replacing the rectangular area surrounding the object to be counted with the ellipse.
In the apparatus shown in fig. 4,
the identifying and counting unit 404 identifies and counts all objects to be counted in the image according to all the preset shapes obtained by conversion by the converting unit 403, including:
performing non-maximum suppression on each preset shape obtained by conversion to obtain an identification result of the objects to be counted;
and counting the number of all objects to be counted in the identification result.
An embodiment of the present invention further provides an electronic device, as shown in fig. 5, including: at least one processor 501, and a memory 502 connected to the at least one processor 501 through a bus; the memory 502 stores one or more computer programs that are executable by the at least one processor 501; wherein the at least one processor 501, when executing the one or more computer programs, implements the steps of the object statistics method described above in fig. 1.
Embodiments of the present invention also provide a computer-readable storage medium, which stores one or more computer programs that, when executed by a processor, implement the object statistics method shown in fig. 1.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. An object statistics method, applied to an intelligent breeding scene, comprising the following steps:
acquiring an image captured by a fixed camera in a monitored area at a top-down angle;
determining a rectangular area surrounding each object to be counted in the image; the objects to be counted are breeding objects in an intelligent breeding scene;
converting a rectangular area surrounding each object to be counted into a preset shape fitting the body type of the object to be counted, the preset shape being set according to the body type of the trunk part of the object to be counted;
identifying and counting all objects to be counted in the image according to all the preset shapes obtained by conversion;
wherein,
the preset shape fitting the body type of the object to be counted is an ellipse;
converting the rectangular area surrounding each object to be counted into a preset shape fitting the body type of the object to be counted comprises:
setting the central point of a rectangular area surrounding the object to be counted as the central point of an ellipse;
setting the inclination angle of a rectangular area surrounding the object to be counted as the inclination angle of an ellipse;
setting the width and the height of the rectangular area surrounding the object to be counted as the major axis and the minor axis of the ellipse, respectively;
and generating an ellipse according to the set central point, major axis, minor axis and inclination angle, and replacing the rectangular area surrounding the object to be counted with the ellipse.
2. The method of claim 1,
detecting the image with a pre-trained R2CNN detection model to determine a rectangular area surrounding each object to be counted in the image comprises:
determining a horizontal rectangular frame surrounding each object to be counted by using a region proposal network (RPN) algorithm;
generating image features of each horizontal rectangular frame by using a region-of-interest pooling (ROI Pooling) algorithm, performing regression analysis on the image features, and adjusting the horizontal rectangular frame into an inclined rectangular frame according to the regression analysis result, wherein the regression analysis result comprises translation and rotation angle information corresponding to the horizontal rectangular frame.
3. The method of claim 1,
identifying and counting all objects to be counted in the image according to all the preset shapes obtained by conversion comprises:
performing non-maximum suppression on each preset shape obtained by conversion to obtain an identification result of the objects to be counted;
and counting the number of all objects to be counted in the identification result.
4. An object statistics apparatus, characterized in that the apparatus is applied to an intelligent breeding scene and comprises:
the acquisition unit is used for acquiring an image captured by a fixed camera of a monitored area at a top-down angle;
the determining unit is used for determining a rectangular area surrounding each object to be counted in the image;
the conversion unit is used for converting the rectangular area surrounding each object to be counted into a preset shape fitting the body type of the object to be counted; the objects to be counted are breeding objects in an intelligent breeding scene; the preset shape is set according to the body type of the trunk part of the object to be counted;
the recognition and statistics unit is used for recognizing and counting all objects to be counted in the image according to all the preset shapes obtained by conversion of the conversion unit;
wherein,
the preset shape fitting the body type of the object to be counted is an ellipse;
the conversion unit converts the rectangular area surrounding each object to be counted into a preset shape fitting the body type of the object to be counted, and comprises:
setting the central point of a rectangular area surrounding the object to be counted as the central point of an ellipse;
setting the inclination angle of a rectangular area surrounding the object to be counted as the inclination angle of an ellipse;
setting the width and the height of the rectangular area surrounding the object to be counted as the major axis and the minor axis of the ellipse, respectively;
and generating an ellipse according to the set central point, major axis, minor axis and inclination angle, and replacing the rectangular area surrounding the object to be counted with the ellipse.
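The conversion recited above is a direct parameter mapping: the rotated rectangle's center, tilt angle, width and height become the ellipse's center, tilt angle, major axis and minor axis. A minimal sketch, with hypothetical type and function names not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class RotatedRect:
    cx: float      # center x
    cy: float      # center y
    w: float       # width
    h: float       # height
    angle: float   # inclination angle in degrees

@dataclass
class Ellipse:
    cx: float
    cy: float
    major: float   # major axis length
    minor: float   # minor axis length
    angle: float   # inclination angle in degrees

def rect_to_ellipse(r: RotatedRect) -> Ellipse:
    # Center and tilt carry over unchanged; width -> major axis,
    # height -> minor axis, as recited in the claim.
    return Ellipse(r.cx, r.cy, major=r.w, minor=r.h, angle=r.angle)
```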
5. The apparatus of claim 4,
the determining unit detects the image by using a pre-trained R2CNN detection model to determine the rectangular area surrounding each object to be counted in the image, including:
determining a horizontal rectangular frame surrounding each object to be counted by using a region proposal network (RPN) algorithm;
generating image features of each horizontal rectangular frame by using a region-of-interest pooling (ROI Pooling) algorithm, performing regression analysis on the image features, and adjusting each horizontal rectangular frame into an inclined rectangular frame according to the regression analysis result; wherein the regression analysis result comprises translation and rotation angle information corresponding to the horizontal rectangular frame.
6. The apparatus of claim 4,
the identification and statistics unit identifies and counts all objects to be counted in the image according to all the preset shapes obtained by the conversion, including:
performing non-maximum suppression on each preset shape obtained by the conversion to obtain an identification result of the objects to be counted;
and counting the number of all objects to be counted in the identification result.
7. An electronic device, comprising: the system comprises at least one processor and a memory connected with the at least one processor through a bus; the memory stores one or more computer programs executable by the at least one processor; characterized in that the at least one processor, when executing the one or more computer programs, implements the method steps of any of claims 1-3.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more computer programs which, when executed by a processor, implement the method of any one of claims 1-3.
CN201910572394.0A 2019-06-28 2019-06-28 Object statistical method and device Active CN110263753B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910572394.0A CN110263753B (en) 2019-06-28 2019-06-28 Object statistical method and device
PCT/CN2020/083513 WO2020258977A1 (en) 2019-06-28 2020-04-07 Object counting method and device

Publications (2)

Publication Number Publication Date
CN110263753A CN110263753A (en) 2019-09-20
CN110263753B true CN110263753B (en) 2020-12-22

Family

ID=67922783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910572394.0A Active CN110263753B (en) 2019-06-28 2019-06-28 Object statistical method and device

Country Status (2)

Country Link
CN (1) CN110263753B (en)
WO (1) WO2020258977A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263753B (en) * 2019-06-28 2020-12-22 北京海益同展信息科技有限公司 Object statistical method and device
CN115937791B (en) * 2023-01-10 2023-05-16 华南农业大学 Poultry counting method and device suitable for multiple cultivation modes

Citations (4)

Publication number Priority date Publication date Assignee Title
CN109299688A (en) * 2018-09-19 2019-02-01 厦门大学 Ship Detection based on deformable fast convolution neural network
CN109670501A (en) * 2018-12-10 2019-04-23 中国科学院自动化研究所 Object identification and crawl position detection method based on depth convolutional neural networks
CN109685870A (en) * 2018-11-21 2019-04-26 北京慧流科技有限公司 Information labeling method and device, tagging equipment and storage medium
CN109816041A (en) * 2019-01-31 2019-05-28 南京旷云科技有限公司 Commodity detect camera, commodity detection method and device

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US9858496B2 (en) * 2016-01-20 2018-01-02 Microsoft Technology Licensing, Llc Object detection and classification in images
US10460470B2 (en) * 2017-07-06 2019-10-29 Futurewei Technologies, Inc. Recognition and reconstruction of objects with partial appearance
CN108073930A (en) * 2017-11-17 2018-05-25 维库(厦门)信息技术有限公司 A kind of target detection and tracking based on multiple irregular ROI
CN108960230B (en) * 2018-05-31 2021-04-27 中国科学院自动化研究所 Lightweight target identification method and device based on rotating rectangular frame
CN109242826B (en) * 2018-08-07 2022-02-22 高龑 Mobile equipment end stick-shaped object root counting method and system based on target detection
CN109583425B (en) * 2018-12-21 2023-05-02 西安电子科技大学 Remote sensing image ship integrated recognition method based on deep learning
CN110263753B (en) * 2019-06-28 2020-12-22 北京海益同展信息科技有限公司 Object statistical method and device


Also Published As

Publication number Publication date
WO2020258977A1 (en) 2020-12-30
CN110263753A (en) 2019-09-20

Similar Documents

Publication Publication Date Title
Fernandes et al. A novel automated system to acquire biometric and morphological measurements and predict body weight of pigs via 3D computer vision
US10058076B2 (en) Method of monitoring infectious disease, system using the same, and recording medium for performing the same
CN110287907B (en) Object detection method and device
CN107330403B (en) Yak counting method based on video data
CN111914685B (en) Sow oestrus detection method and device, electronic equipment and storage medium
CN112257564B (en) Aquatic product quantity statistical method, terminal equipment and storage medium
CN110263753B (en) Object statistical method and device
CN112101124B (en) Sitting posture detection method and device
CN110532899B (en) Sow antenatal behavior classification method and system based on thermal imaging
Kaixuan et al. Target detection method for moving cows based on background subtraction
CN112232978A (en) Aquatic product length and weight detection method, terminal equipment and storage medium
CN112528823B (en) Method and system for analyzing batcharybus movement behavior based on key frame detection and semantic component segmentation
WO2022041484A1 (en) Human body fall detection method, apparatus and device, and storage medium
Noe et al. Automatic detection and tracking of mounting behavior in cattle using a deep learning-based instance segmentation model
CN112036206A (en) Monitoring method, device and system based on identification code type pig ear tag
WO2023041904A1 (en) Systems and methods for the automated monitoring of animal physiological conditions and for the prediction of animal phenotypes and health outcomes
CN114581948A (en) Animal face identification method
CN116912880A (en) Bird recognition quality assessment method and system based on bird key point detection
Witte et al. Evaluation of deep learning instance segmentation models for pig precision livestock farming
JP6893812B2 (en) Object detector
JP6851246B2 (en) Object detector
CN112135102B (en) Pig monitoring method, device and system based on pig ear tags of different shapes
CN112560750A (en) Video-based ground cleanliness recognition algorithm
Ghadiri Implementation of an automated image processing system for observing the activities of honey bees
Orandi A Computer Vision System for Early Detection of Sick Birds in a Poultry Farm Using Convolution Neural Network on Shape and Edge Information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, 100176

Patentee before: BEIJING HAIYI TONGZHAN INFORMATION TECHNOLOGY Co.,Ltd.
