Disclosure of Invention
To solve, or at least partially solve, the above technical problems, the present application provides a container detection method, a feeding detection method, corresponding devices, and a feeding system.
In a first aspect, the present application provides a container detection method, comprising:
acquiring a first image to be detected of a target container;
identifying target object information in the first image to be detected according to a pre-trained container detection model;
and generating a label corresponding to the target container according to the target object information.
Optionally, the identifying the target object information in the first image to be detected according to the pre-trained container detection model includes:
identifying the remaining amount of the target object in the target container in the first image to be detected according to the container detection model;
and determining the target object information according to the remaining amount.
Optionally, the identifying the target object information in the first image to be detected according to the pre-trained container detection model further includes:
identifying the occlusion rate of the target container in the first image to be detected according to the container detection model;
and determining the target object information according to the remaining amount and the occlusion rate.
Optionally, the method further includes:
generating a dispensing instruction according to the label, wherein the dispensing instruction is used for controlling a dispensing device to dispense the target object corresponding to the target object information into the target container;
and sending the dispensing instruction to the dispensing device.
Optionally, the generating a dispensing instruction according to the label includes:
acquiring a second image to be detected at the target container;
identifying the number of objects of a preset object in the second image to be detected according to a pre-trained object detection model;
and generating the dispensing instruction according to the label and the number of objects.
Optionally, the method further includes:
acquiring a target container sample image;
acquiring first labeling information corresponding to the target container sample image, wherein the first labeling information includes a first labeling box framing the target container in the target container sample image and the target object information in the target container;
and training a first neural network on the target container sample image and the first labeling information, so that the target object information in the target container is determined according to the image features of the target container sample image, to obtain the container detection model.
Optionally, the target object information includes: a remaining amount of the target object and/or an occlusion rate of the target container.
Optionally, the method further includes:
acquiring a preset object sample image;
acquiring second labeling information corresponding to the preset object sample image, wherein the second labeling information includes a third labeling box framing the preset object in the preset object sample image and the category of the preset object;
and training a second neural network on the preset object sample image and the second labeling information, so that the number of preset objects is determined according to the number of third labeling boxes matching a preset category, to obtain the object detection model.
In a second aspect, the present application provides a feeding detection method comprising:
acquiring a first image to be detected of the trough;
identifying feed information in the first image to be detected according to a pre-trained trough detection model;
and generating a label corresponding to the trough according to the feed information.
Optionally, the identifying the feed information in the first image to be detected according to the pre-trained trough detection model includes:
identifying the remaining amount of the feed in the trough in the first image to be detected according to the trough detection model;
and determining the feed information according to the remaining amount.
Optionally, the identifying the feed information in the first image to be detected according to the pre-trained trough detection model further includes:
identifying the occlusion rate of the trough in the first image to be detected according to the trough detection model;
and determining the feed information according to the remaining amount and the occlusion rate.
Optionally, the method further includes:
generating a feeding instruction according to the label, wherein the feeding instruction is used for controlling a feeding device to feed the trough with the feed corresponding to the feed information;
and sending the feeding instruction to the feeding device.
Optionally, the generating a feeding instruction according to the label includes:
acquiring a second image to be detected at the trough;
identifying the number of animals in the second image to be detected according to a pre-trained animal detection model;
and generating the feeding instruction according to the label and the number of animals.
Optionally, the identifying the number of animals in the second image to be detected according to a pre-trained animal detection model includes:
identifying the head orientation of the animal in the second image to be detected according to the animal detection model;
and determining the number of animals whose heads face the trough.
In a third aspect, the present application provides a container detection device, comprising:
the acquisition module is used for acquiring a first image to be detected of the target container;
the identification module is used for identifying the target object information in the first image to be detected according to a pre-trained container detection model;
and the generating module is used for generating a label corresponding to the target container according to the target object information.
In a fourth aspect, the present application provides a feeding detection device, comprising:
the acquisition module is used for acquiring a first image to be detected of the trough;
the identification module is used for identifying the feed information in the first image to be detected according to a pre-trained trough detection model;
and the generating module is used for generating a label corresponding to the trough according to the feed information.
In a fifth aspect, the present application provides a feeding system, comprising: a shooting device, a feeding detection device and a feeding device;
the shooting device is used for shooting the trough to obtain a first image to be detected;
the feeding detection device is used for acquiring the first image to be detected; identifying feed information in the first image to be detected according to a pre-trained trough detection model; generating a label corresponding to the trough according to the feed information; generating a feeding instruction according to the label; and sending the feeding instruction to the feeding device;
and the feeding device is used for feeding the feed corresponding to the remaining amount into the trough.
Optionally, the shooting device is further configured to shoot a second image to be detected at the trough;
the feeding detection device is further configured to identify the number of animals in the second image to be detected according to a pre-trained animal detection model, and generate the feeding instruction according to the label and the number of animals.
In a sixth aspect, the present application provides an electronic device, comprising: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the above method steps when executing the computer program.
In a seventh aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the above-mentioned method steps.
Compared with the prior art, the technical solutions provided by the embodiments of the present application have the following advantages: image recognition is performed on the container, the target object information in the container is determined, a label corresponding to the target container is generated according to the target object information, and whether to dispense the target object into the container can then be determined based on the label. In this way, the target object in the container does not need to be monitored and replenished manually, which improves the efficiency and accuracy of monitoring and replenishing the target object in the container, avoids waste of the target object, and reduces labor cost.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The method is based on computer vision technology: it analyzes the feeding state of the animals by monitoring the remaining feed in the trough, so that feed can be dispensed scientifically and reasonably.
First, a container detection method according to an embodiment of the present application will be described.
Fig. 1 is a flowchart of a container detection method according to an embodiment of the present disclosure. As shown in fig. 1, the method comprises the steps of:
step S11, acquiring a first image to be detected of the target container;
step S12, identifying the target object information in the first image to be detected according to the pre-trained container detection model;
in step S13, a label corresponding to the target container is generated based on the target object information.
In this embodiment, image recognition is performed on the container, the information of the target object in the container is determined, and a label corresponding to the target container is generated according to that information, so that whether to dispense the target object into the container may then be determined based on the label. In this way, the target object in the container does not need to be monitored and replenished manually, which improves the efficiency and accuracy of monitoring and replenishing the target object in the container, avoids waste of the target object, and reduces labor cost.
Wherein step S12 includes: identifying the remaining amount of the target object in the target container in the first image to be detected according to the container detection model; and determining the target object information according to the remaining amount.
For example, as shown in fig. 2, when the first image to be detected is captured from directly above the target container, the container detection model can identify the ratio of the area occupied by the target object 21 within the target container 20 in the first image to be detected; as shown in fig. 3, when the first image to be detected is captured from the side of the target container, the container detection model can identify the ratio of the height of the target object 21 within the target container 20 in the first image to be detected. From the area ratio or the height ratio, the remaining amount of the target object can be determined.
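As an illustrative sketch (not part of the application; the function name and pixel-area inputs below are hypothetical), the remaining amount derived from the area ratio described above could be computed as:

```python
# Hypothetical helper: estimate the remaining amount of the target object
# as the fraction of the container's visible area it covers. The pixel
# areas are assumed to come from the detection model's boxes or masks.
def remaining_amount_from_area(object_area_px: float, container_area_px: float) -> float:
    """Return the remaining amount as a ratio in [0.0, 1.0]."""
    if container_area_px <= 0:
        raise ValueError("container area must be positive")
    return min(object_area_px / container_area_px, 1.0)

# Example: feed covers 12,000 of the trough's 30,000 pixels.
print(remaining_amount_from_area(12_000, 30_000))  # 0.4
```

The same helper applies to the height ratio in the side-view case, with heights substituted for areas.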
In another alternative embodiment, step S12 includes: identifying the occlusion rate of the target container in the first image to be detected according to the container detection model; and determining the target object information according to the remaining amount and the occlusion rate.
For example, when the target container is a trough from which an animal feeds and the target object is feed, the feed may not be accurately recognized from the captured first image to be detected, because the animal may occlude the trough while feeding. Therefore, in order to determine the state of the feed in the trough more accurately, the occlusion rate of the trough by the animal needs to be considered in addition to the remaining amount recognized from the image.
Fig. 4 is a flowchart of a container detection method according to another embodiment of the present disclosure. In another alternative embodiment, as shown in fig. 4, the method further comprises:
step S14, generating a dispensing instruction according to the label, wherein the dispensing instruction is used for controlling a dispensing device to dispense the target object corresponding to the target object information into the target container;
step S15, sending the dispensing instruction to the dispensing device.
In this embodiment, whether to dispense the target object into the container is determined based on the label, so the target object does not need to be added to the container manually, and target objects can be added to target containers automatically under a uniform standard. This not only improves the efficiency and accuracy of adding the target object to the container, but also avoids waste of the target object and reduces labor cost.
In another alternative embodiment, the step S14 of generating the dispensing instruction according to the label includes:
acquiring a second image to be detected at the target container;
identifying the number of objects of a preset object in a second image to be detected according to a pre-trained object detection model;
and generating the dispensing instruction according to the label and the number of objects.
In this embodiment, the amount of the target object added to the target container also depends on the number of preset objects. For example, when the target container is a trough for feeding animals, the target object is feed, and the preset object is an animal, the more animals around the trough, the more feed needs to be added. Therefore, in order to determine more accurately the amount of the target object dispensed into the target container, the number of preset objects at the target container is further identified based on the image.
In another optional embodiment, the method further comprises a training process of the container detection model, which is as follows:
step A1, acquiring a target container sample image;
step A2, acquiring first labeling information corresponding to the target container sample image, wherein the first labeling information includes a first labeling box framing the target container in the target container sample image and target object information of the target object in the target container;
step A3, training a first neural network on the target container sample image and the first labeling information, so that the target object information of the target object in the target container is determined according to the image features of the target container sample image, to obtain the container detection model.
In another alternative embodiment, the target object information includes: the remaining amount of the target object and/or the occlusion rate of the target container.
Wherein the first neural network may be one of the following target detection algorithms: YOLOv1, YOLOv2, YOLOv3, R-CNN, Fast R-CNN, SPP-net, Faster R-CNN, R-FCN, SSD and the like, or a target detection algorithm using a lightweight network such as MobileNet as the backbone, such as MobileNet-YOLOv1, MobileNet-YOLOv2 or MobileNet-YOLOv3.
The following describes the training process of the container detection model in detail, taking the first neural network as MobileNet-YOLOv3 as an example.
(1) Inputting the target container sample image and the marked first marking information into a MobileNet-YOLOv3 network;
(2) the MobileNet convolution network of MobileNet-YOLOv3 divides the picture into 13 × 13, 26 × 26 and 52 × 52 grids at different convolution layers;
(3) each grid cell has 3 prior boxes of different sizes, which are responsible for predicting objects of different shapes and sizes, and each prior box predicts one bounding box, i.e. each grid cell predicts 3 bounding boxes.
The prior box sizes also differ across the 13 × 13, 26 × 26 and 52 × 52 scales, which are used to predict large, medium and small targets respectively;
(4) calculating the center point coordinates, width and height, confidence, category (more, less, none, abnormal) and other information of each bounding box;
(5) calculating a loss function from the information in step (4), and optimizing the network by continuous back-propagation until it converges, to obtain the container detection model.
In practical application, the first image to be detected is input into the container detection model, and the model outputs the center point coordinates, width and height, confidence and category information of the container, wherein the category information includes the proportion or remaining amount of the target object in the target container. When the confidence is greater than 0.5, the detection result is considered valid.
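The confidence filtering described above can be sketched as follows; the detection tuples mirror the (center, size, confidence, category) outputs listed in the text, and the sample data is illustrative, not real model output.

```python
# Keep only detections whose confidence exceeds the 0.5 threshold
# stated in the text. Each detection is (cx, cy, w, h, confidence, category).
CONF_THRESHOLD = 0.5

def valid_detections(detections):
    """Filter out low-confidence detections."""
    return [d for d in detections if d[4] > CONF_THRESHOLD]

raw = [
    (0.50, 0.40, 0.30, 0.20, 0.92, "more"),
    (0.10, 0.80, 0.05, 0.05, 0.31, "none"),  # discarded: confidence <= 0.5
]
print(valid_detections(raw))
```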
In this embodiment, by training the container detection model, the model can be applied to detect and recognize container images and determine the remaining amount of the target object in the container, so that the target object in the container does not need to be monitored manually, which improves the efficiency and accuracy of monitoring the target object in the container and reduces labor cost.
In another optional embodiment, the method further comprises a training process of the object detection model, which is as follows:
step B1, acquiring a preset object sample image;
step B2, acquiring second labeling information corresponding to the preset object sample image, wherein the second labeling information comprises a third labeling frame framing the preset object in the preset object sample image and the category of the preset object;
and step B3, training a second neural network on the preset object sample image and the second labeling information, so that the number of preset objects is determined according to the image features of the preset object sample image, to obtain the object detection model.
Wherein the second neural network may be one of the following target detection algorithms: YOLOv1, YOLOv2, YOLOv3, R-CNN, Fast R-CNN, SPP-net, Faster R-CNN, R-FCN, SSD and the like, or a target detection algorithm using a lightweight network such as MobileNet as the backbone, such as MobileNet-YOLOv1, MobileNet-YOLOv2 or MobileNet-YOLOv3, in which the Darknet backbone of YOLO is replaced by MobileNet to improve network speed while maintaining accuracy.
The following describes the training process of the object detection model in detail by taking the second neural network as MobileNet-YOLOv2 as an example.
(1) Inputting the preset object sample image and the second annotation information into a MobileNet-YOLOv2 network;
(2) the MobileNet convolution network of MobileNet-YOLOv2 divides the picture into a 13 × 13 grid, and each grid cell is responsible for predicting objects whose centers fall into that cell;
(3) each grid cell has 5 prior boxes of different sizes, which are responsible for predicting objects of different shapes and sizes, and each prior box predicts one bounding box, i.e. each grid cell predicts 5 bounding boxes;
(4) calculating the center point coordinates, width and height, confidence, category (whether it matches a preset category, or the category name, etc.) and other information of each bounding box;
(5) calculating a loss function from the information in step (4), and optimizing the network by continuous back-propagation until it converges, to obtain the object detection model.
In practical application, the second image to be detected is input into the object detection model, and the model outputs the bounding box information, confidence and number of the objects. When the confidence is greater than 0.5, the detection result is considered valid.
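As a hedged sketch of the counting step described above (the tuple layout and sample data are assumptions, not real model output), the number of preset objects is the number of valid detections whose category matches the preset category:

```python
# Count detections that pass the 0.5 confidence threshold and match the
# preset category. Each detection is (box, confidence, category).
def count_objects(detections, preset_category, threshold=0.5):
    return sum(
        1 for box, conf, cat in detections
        if conf > threshold and cat == preset_category
    )

detections = [
    ((10, 10, 40, 40), 0.9, "pig"),
    ((60, 15, 95, 45), 0.8, "pig"),
    ((5, 5, 8, 8), 0.4, "pig"),  # below threshold, not counted
]
print(count_objects(detections, "pig"))  # 2
```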
In this embodiment, by training the object detection model, the model can be applied to quickly and accurately identify the number of preset objects in an image, so that the preset objects do not need to be monitored manually, which improves the efficiency and accuracy of monitoring the preset objects and reduces labor cost.
The container detection method can be applied to the field of animal feeding, in particular to the feeding of poultry and livestock.
Fig. 5 is a flowchart of a feeding detection method provided in the present application. As shown in fig. 5, the method comprises the following steps:
step S21, acquiring a first image to be detected of the trough;
step S22, identifying feed information in the first image to be detected according to a pre-trained trough detection model;
and step S23, generating a label corresponding to the trough according to the feed information.
In this embodiment, image recognition is performed on the trough, the feed information in the trough is determined, and a label corresponding to the trough is generated according to the feed information, so that whether to dispense feed into the trough can subsequently be determined based on the label. In this way, the feed in the trough does not need to be monitored manually; instead, the remaining feed and the feeding state of the animals at the trough are monitored in real time, which improves the efficiency and accuracy of animal feeding, avoids feed waste, and reduces labor cost.
Wherein, step S22 includes:
identifying the remaining amount of the feed in the trough in the first image to be detected according to the trough detection model;
and determining the feed information according to the remaining amount.
For example, when the first image to be detected is photographed from directly above the trough, the trough detection model can identify the ratio of the area occupied by the feed within the trough in the first image to be detected. From this area ratio, the remaining amount of feed can be determined, so that a label corresponding to the trough can be generated from the remaining amount. If the feed occupies more than 1/2 of the trough's area, the detection result output by the trough detection model is "more food"; when the feed occupies more than 1/3 but less than 1/2 of the trough's area, the detection result output by the trough detection model is "less food".
In an alternative embodiment, the actual remaining amount of feed may not be accurately identified from the captured first image to be detected, since a feeding animal may occlude the trough. Therefore, in order to determine the state of the feed in the trough more accurately, the occlusion rate of the trough by the animal needs to be considered in addition to the remaining amount recognized from the image.
Step S22 further includes: identifying the occlusion rate of the trough in the first image to be detected according to the trough detection model; and determining the feed information according to the remaining amount and the occlusion rate.
For example, when the feed occupies less than 1/3 of the trough's area and the animal occludes less than 1/3 of the trough, the detection result output by the trough detection model is "no food"; if the animal occludes more than 1/3 of the trough, or the trough cannot be detected at all, the detection result output by the trough detection model is "abnormal".
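The threshold rules above can be sketched as follows; the thresholds (1/2, 1/3) and the four category names follow the examples in the text, while the function name and parameters are illustrative assumptions.

```python
# Classify the trough state from the feed area ratio and the occlusion
# ratio, per the rules in the text. "abnormal" also covers the case
# where the trough is not detected at all.
def trough_state(feed_ratio, occlusion_ratio, trough_detected=True):
    if not trough_detected or occlusion_ratio > 1/3:
        return "abnormal"
    if feed_ratio > 1/2:
        return "more food"
    if feed_ratio > 1/3:
        return "less food"
    return "no food"

print(trough_state(0.6, 0.1))  # more food
print(trough_state(0.4, 0.1))  # less food
print(trough_state(0.2, 0.1))  # no food
print(trough_state(0.2, 0.5))  # abnormal
```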
Fig. 6 is a flow chart of a feeding method provided in another embodiment of the present application. As shown in fig. 6, the method further includes:
step S24, generating a feeding instruction according to the label, wherein the feeding instruction is used for controlling feeding equipment to feed the trough with feed corresponding to the feed information;
step S25, sending the feeding instruction to the feeding device.
In this embodiment, whether to dispense feed into the trough is determined based on the label, so feed can be dispensed in a timely manner without manual intervention, which not only improves the efficiency and accuracy of feeding, but also avoids feed waste and reduces labor cost.
Fig. 7 is a flow chart of a feeding method provided in another embodiment of the present application. As shown in fig. 7, step S24 includes:
step S31, acquiring a second image to be detected at the trough;
step S32, identifying the number of animals in the second image to be detected according to the pre-trained animal detection model;
and step S33, generating the feeding instruction according to the label and the number of animals.
In this embodiment, the amount of feed added to the trough is also dependent on the number of animals. The more animals around the trough, the greater the amount of feed added. Thus, to more accurately determine the amount of feed delivered to the trough, the number of animals at the trough is further identified based on the image.
In an alternative embodiment, step S32 includes:
identifying the head orientation of the animal in the second image to be detected according to the animal detection model;
and determining the number of animals whose heads face the trough.
In this example, the number of animals is further determined according to the head orientation of the animals. Although there may be multiple animals around the trough, an animal whose head is not facing the trough may not need to be fed. Thus, only the animals whose heads face the trough, i.e. only the animals that need to be fed, are counted. After the second image to be detected is input into the animal detection model, the bounding box information of each animal's head and tail (or shoulder and tail) is output, and the number of animals whose heads face the trough is obtained based on this bounding box information. This further improves feeding accuracy, allows feed to be added to the animals in time, and avoids feed waste.
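An illustrative sketch of the counting step above (the geometry and names are assumptions, not from the application): each animal is represented by the centers of its head and tail boxes as output by the animal detection model, and an animal "faces" the trough when its head center lies inside the trough region, modeled here as an axis-aligned rectangle.

```python
# Count animals whose head center falls inside the trough rectangle
# (x1, y1, x2, y2). Each animal is ((head_x, head_y), (tail_x, tail_y)).
def count_feeding_animals(animals, trough_rect):
    x1, y1, x2, y2 = trough_rect

    def head_at_trough(head):
        hx, hy = head
        return x1 <= hx <= x2 and y1 <= hy <= y2

    return sum(1 for head, tail in animals if head_at_trough(head))

trough = (0, 0, 100, 20)  # trough occupies the top strip of the image
animals = [
    ((50, 10), (50, 60)),  # head at the trough -> counted
    ((50, 80), (50, 30)),  # head away from the trough -> not counted
]
print(count_feeding_animals(animals, trough))  # 1
```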
The above method will be described in detail below, taking pig feeding as an example.
Whether to dispense feed is determined based on the remaining feed in the trough and the number of pigs N, where w is the feed amount dispensed per pig, as shown in Table 1 below.
TABLE 1
Trough detection result | Animal detection result | Feed dispensed
more food or less food | any | none
no food | 0 pigs | none
no food | N ≥ 1 pigs | wN
abnormal | N > 1 pigs | wN
abnormal | 1 pig | none (the single pig is likely lying across the trough, occluding it)
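The dispensing rules above can be sketched as follows; w (feed per pig) and the category strings follow the text, while the function itself is an illustrative assumption.

```python
# Return the amount of feed to dispense per Table 1 (0 means do not dispense).
def feed_amount(trough_state, pig_count, w):
    if trough_state in ("more food", "less food"):
        return 0
    if trough_state == "no food":
        return w * pig_count if pig_count >= 1 else 0
    if trough_state == "abnormal":
        # a single pig lying across the trough explains the abnormality,
        # so dispense only when more than one pig is present
        return w * pig_count if pig_count > 1 else 0
    return 0

print(feed_amount("no food", 3, 0.5))    # 1.5
print(feed_amount("abnormal", 1, 0.5))   # 0
print(feed_amount("more food", 5, 0.5))  # 0
```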
In this embodiment, feed is dispensed accurately based on the remaining feed in the trough and the number of pigs around the trough, which not only achieves automatic and timely dispensing and improves the quality and efficiency of pig feeding, but also avoids feed waste and saves labor and financial costs.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application.
Fig. 8 is a block diagram of a container detection device provided in an embodiment of the present application, which may be implemented as part of or all of an electronic device through software, hardware, or a combination of the two. As shown in fig. 8, the container detection device includes:
an obtaining module 41, configured to obtain a first to-be-detected image of a target container;
the identification module 42 is configured to identify target object information of a target object in the first image to be detected according to a pre-trained container detection model;
and a generating module 43, configured to generate a label corresponding to the target container according to the target object information.
Fig. 9 is a block diagram of a feeding detection device provided in an embodiment of the present application, which may be implemented as part of or all of an electronic device through software, hardware, or a combination of the two. As shown in fig. 9, the feeding detection device includes:
an obtaining module 51, configured to obtain a first image to be detected of a trough;
the identification module 52 is configured to identify feed information in the first image to be detected according to a pre-trained trough detection model;
and the generating module 53 is configured to generate a label corresponding to the trough according to the feed information.
Fig. 10 is a block diagram of a feeding system provided in an embodiment of the present application. As shown in fig. 10, the system includes: a shooting device 61, a feeding detection device 62 and a feeding device 63;
the shooting device 61 is used for shooting the trough to obtain a first image to be detected;
the feeding detection device 62 is used for acquiring the first image to be detected; identifying feed information in the first image to be detected according to a pre-trained trough detection model; generating a label corresponding to the trough according to the feed information; generating a feeding instruction according to the label; and sending the feeding instruction to the feeding device 63;
and the feeding device 63 is used for feeding the feed corresponding to the remaining amount into the trough.
In another embodiment, the shooting device 61 is further configured to shoot a second image to be detected at the trough; the feeding detection device 62 is further configured to identify the number of animals in the second image to be detected according to a pre-trained animal detection model, and generate the feeding instruction according to the label and the number of animals.
Wherein, the shooting device 61 can be arranged above the trough. The feeding detection device 62 may be located locally or at a cloud server. The shooting device 61, the feeding detection device 62 and the feeding device 63 can communicate with each other in a wired or wireless manner.
An embodiment of the present application further provides an electronic device, as shown in fig. 11, the electronic device may include: the system comprises a processor 1501, a communication interface 1502, a memory 1503 and a communication bus 1504, wherein the processor 1501, the communication interface 1502 and the memory 1503 complete communication with each other through the communication bus 1504.
A memory 1503 for storing a computer program;
the processor 1501, when executing the computer program stored in the memory 1503, implements the steps of the method embodiments described below.
The communication bus mentioned for the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the above method embodiments.
It should be noted that, for the above-mentioned apparatus, electronic device and computer-readable storage medium embodiments, since they are basically similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
It is further noted that, herein, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.