GB2614051A - Image processing - Google Patents

Image processing

Info

Publication number
GB2614051A
Authority
GB
United Kingdom
Prior art keywords
image data
cameras
data sets
plural
data set
Prior art date
Legal status
Pending
Application number
GB2118316.5A
Inventor
Mark Peacock Andrew
McMahon Anthony
Christopher Lee Brian
Current Assignee
Peacock Tech Ltd
Original Assignee
Peacock Tech Ltd
Application filed by Peacock Tech Ltd filed Critical Peacock Tech Ltd
Priority to GB2118316.5A
Publication of GB2614051A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K 29/00 Other apparatus for animal husbandry
    • A01K 29/005 Monitoring or measuring activity, e.g. detecting heat or mating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K 1/00 Housing animals; Equipment therefor
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K 3/00 Pasturing equipment, e.g. tethering devices; Grids for preventing cattle from straying; Electrified wire fencing

Landscapes

  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Environmental Sciences (AREA)
  • Signal Processing (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Animal Husbandry (AREA)
  • Biophysics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)

Abstract

Forming an object image data set comprises: receiving 54 plural image data sets from plural cameras installed in a livestock habitation; spatially relating 56 a first image data set and a second image data set from the plural sets to account for overlap between the image data sets; and forming 58 an object image data set from the image data contained in the spatially related image data sets, the object image data set comprising at least part of the image data common to the two data sets. The object image data set comprises the area common to the first and second image data sets and at least part of each of the first and second image data sets outside of the overlapping area. The object image data set represents a scene in the livestock habitation. The plural image data sets each represent a view of a different part of the livestock habitation. The cameras may be installed at elevated locations in the livestock habitation in an array such that each of the cameras views the habitation from above.

Description

Intellectual Property Office Application No. GB2118316.5 RTM Date: 18 February 2022
The following terms are registered trade marks and should be read as such wherever they occur in this document: NVIDIA, Jetson
Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo
Title of Invention: Image processing
Field of the Invention
The present invention relates to processing of image data sets representing a livestock habitation and electrical apparatus operative to process image data sets representing a livestock habitation.
Background Art
It is known to monitor livestock by way of one or more cameras. In an approach to livestock monitoring, cameras are installed at spaced apart and elevated locations and are oriented such that each camera acquires images from above of a respective different part of a livestock habitation, such as a barn. Desiring uninterrupted coverage of a large area of a livestock habitation, the present inventors installed an array of cameras such that adjacent cameras acquired images of different but overlapping parts of the livestock habitation. Further to this, the present inventors wished to track an individual animal, such as a cow, as it moved from the field of view of one camera to the field of view of the adjacent camera. Tracking was by way of electronic data processing of images acquired by the adjacent cameras. Effective tracking of an animal by electronic data processing of acquired images depended on most if not the whole of the animal being within the fields of view of the adjacent cameras at the same time. This resulted in a significant extent of overlap of fields of view of adjacent cameras with consequential increase in the number of cameras needed to provide uninterrupted coverage of a large area of the livestock habitation. By way of example, of the order of one hundred and fifty cameras were needed in one livestock monitoring application.
The present invention has been devised in light of the inventors' appreciation of the above problem. It is therefore an object for the present invention to provide an improved method of electronic data processing of images of a livestock habitation. It is a further object for the present invention to provide electrical apparatus operative to process images of a livestock habitation.
Statement of Invention
According to a first aspect of the present invention there is provided a process of forming by electronic methods an object image data set from image data sets representing a livestock habitation, the process comprising: receiving plural image data sets, each of the plural image data sets comprising image data representing a view of a different part of a livestock habitation, and first and second image data sets of the plural image data sets having image data in common whereby the image data comprised in the first and second image data sets represent first and second views respectively of different but overlapping parts of the livestock habitation; spatially relating the first and second image data sets to each other to account for an extent of overlap of the first and second image data sets; and forming an object image data set from the image data comprised in the spatially related first and second image data sets, the object image data set comprising at least part of the image data common to the first and second image data sets, first image data from the first image data set, and second image data from the second image data set, each of the first and second image data conterminous with the at least part of the image data in common and extending beyond the at least part of the image data in common partway across image data comprised in a respective one of the first and second image data sets whereby the object image data set constituted by the first image data, the at least part of the image data in common, and the second image data represents a scene in the livestock habitation.
The process of forming an object image data set from image data sets representing a livestock habitation is accomplished by electronic methods, as described further below. The process comprises receiving plural image data sets, such as from plural cameras installed in the livestock habitation. Each of the plural image data sets comprises image data representing a view of a different part of the livestock habitation. The image data may be information in digital format. The image data may comprise an array of pixels. Furthermore, the image data may comprise a vector and more usually at least one coordinate linked, such as by way of a data structure, to each pixel in the array. First and second image data sets of the plural image data sets have image data in common whereby the image data comprised in the first and second image data sets represent first and second views respectively of different but overlapping parts of the livestock habitation. According to the present process, a moderate extent of overlap is involved. This is because it may be difficult if not impossible to install an array of cameras in a livestock habitation such that the fields of view of all adjacent pairs of cameras are conterminous. Therefore the cameras may be installed such that the fields of view of adjacent pairs of cameras overlap to a moderate extent which is sufficient to provide for uninterrupted coverage of the monitored area of the livestock habitation despite imprecision in installation of the cameras. Nevertheless, the extent of overlap may be markedly less than is needed for an animal to be within the fields of view of adjacent cameras at the same time.
The process also comprises spatially relating the first and second image data sets with each other to account for an extent of overlap of the first and second image data sets. The step of spatially relating the first and second image data sets with each other may allow for identification of the image data in common. For example, where each pixel in an image data set has its location specified by x and y coordinates, plural pixels with the same spatially related x and y coordinates may be common to the first and second image data sets with such plural pixels being the image data in common. Approaches to spatially relating the first and second image data sets with each other are described below.
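By way of illustration only, the following sketch shows one way a coordinate based spatial relation could be realised in software. The camera offsets, frame sizes, and names (OFFSETS_PX, common_region) are assumptions for the example rather than details taken from this disclosure; the offsets play the role of the image data set map described below.

```python
import numpy as np

# Hypothetical sketch: each camera's image data set is an array of pixels,
# and each camera is assigned an (x, y) offset in a shared coordinate frame
# of the habitation. The offset values here are illustrative only.
OFFSETS_PX = {"cam_a": (0, 0), "cam_b": (950, 0)}  # cam_b shifted right

def global_extent(cam_id, image):
    """Return the (x0, y0, x1, y1) extent of an image in global pixels."""
    x0, y0 = OFFSETS_PX[cam_id]
    h, w = image.shape[:2]
    return (x0, y0, x0 + w, y0 + h)

def common_region(ext_a, ext_b):
    """Intersection of two extents: the image data in common, if any."""
    x0, y0 = max(ext_a[0], ext_b[0]), max(ext_a[1], ext_b[1])
    x1, y1 = min(ext_a[2], ext_b[2]), min(ext_a[3], ext_b[3])
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

img_a = np.zeros((720, 1000, 3), dtype=np.uint8)  # placeholder frames
img_b = np.zeros((720, 1000, 3), dtype=np.uint8)
print(common_region(global_extent("cam_a", img_a),
                    global_extent("cam_b", img_b)))
# -> (950, 0, 1000, 720): a 50 pixel wide strip common to both data sets
```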
The process further comprises forming an object image data set from the image data comprised in the spatially related first and second image data sets. The object image data set comprises at least part of the image data common to the first and second image data sets, first image data from the first image data set, and second image data from the second image data set. Each of the first and second image data is conterminous with the at least part of the image data in common and extends beyond the at least part of the image data in common partway across image data comprised in a respective one of the first and second image data sets. Each of the first and second image data may be substantially coextensive with the at least part of the image data in common where the image data is conterminous. The object image data set thus constituted by the first image data, the at least part of the image data in common, and the second image data represents a scene in the livestock habitation.
The object image data set may, in effect, extend continuously across the first image data set, the overlap, and the second image data set. As mentioned above, of the order of one hundred and fifty cameras were needed in one livestock monitoring application without the present process. With the present process, the present inventors found that of the order of ninety cameras were needed in the same livestock monitoring application.
As described above, the object image data set comprises at least part of the image data common to the first and second image data sets. Where the object image data set comprises less than all of the common image data, the common image data comprised in the object image data set may be constituted by a part of the common image data extending in a direction orthogonal to a direction of separation of the first and second image data sets. Furthermore, all of the image data in this part of the common image data which extends in the direction of separation of the first and second image data sets may be comprised in the object image data set.
The object image data set may be formed by assembly together of image data from the first and second image data sets. Alternatively, the object image data set may be formed by linking together by way of a data structure image data from the first and second image data sets. In this alternative, the object image data set may thus not be formed by assembling together of image data.
The scene represented by the object image data set may contain an animal of interest in whole, with one part of the animal contained in the first image data set and the other part of the animal contained in the second image data set. Having the animal contained wholly in the object image data set may provide for ease of monitoring of the animal, such as by way of tracking the animal by electronic data processing of the object image data set and perhaps also by electronic data processing of further image data sets. Formation of an object image data set may provide for effective animal recognition, which may be required for tracking, although, depending on circumstances, animal recognition may be to a lower degree of confidence than can be achieved when the animal is wholly within the field of view of one camera. As the animal of interest moves further, from the field of view of a first camera to the field of view of a second adjacent camera such that the animal is wholly within the field of view of the second camera, the second image data set may also be used for animal recognition and tracking, with recognition perhaps being to a higher degree of confidence so as to permit distinguishing one animal from another animal of the same form, such as one cow from another cow.
In a specific example and according to an application of the process, cows are being tracked. Each of two adjacent cameras used for tracking has a width of field of view of 10.5 m and there is an overlap in the fields of view of 1 m. A large cow is assumed to have a length of 2.5 m. The cow has moved from wholly within a field of view of a first one of the two adjacent cameras so that it is present in the overlap and each of the two fields of view outside the overlap. The object image set is to represent a length of 3 m to provide a margin of 0.5 m over the assumed cow length of 2.5 m. In this example, the object image set therefore comprises the 1 m overlap of the two fields of view, 1 m of the field of view of one of the two cameras on one side of the overlap, and 1 m of the field of view of the other of the two cameras on the other side of the overlap.
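The arithmetic of this example can be checked directly. The short snippet below restates the figures given above; nothing in it goes beyond the numbers in the text.

```python
fov_width = 10.5  # m, width of each camera's field of view
overlap = 1.0     # m, overlap of the two fields of view
cow_length = 2.5  # m, assumed length of a large cow
window = 3.0      # m, length the object image set is to represent

# The object image set takes the full overlap plus equal strips either side.
strip_each_side = (window - overlap) / 2
assert strip_each_side == 1.0      # 1 m from each camera's view
assert window == cow_length + 0.5  # 0.5 m margin over the cow length
```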
The present process may find application in monitoring of static objects, i.e. objects which do not move bodily around the livestock habitation, as well as in monitoring of moving animals. A livestock habitation may contain objects which are deserving of monitoring. For example, the livestock habitation may contain a trough or a gate and it may be desirable to monitor the trough to determine if it is empty and the gate to determine if it is locked or unlocked. Effective electronic data processing of image data to determine a condition of a static object may require the whole object to be present within one image data set. It might be feasible to install an array of cameras to ensure each of several static objects of interest falls within a field of view of one camera. However, installation of an array of cameras to achieve uninterrupted coverage of a livestock habitation may be difficult enough to achieve without catering for coverage of static objects of interest each within a single field of view. The present process may provide for formation of an object image data set which represents a scene containing a static object whereby the static object may be monitored.
The process may further comprise acquiring the plural image data sets each by way of a respective camera installed in the livestock habitation. The plural cameras may be installed such that each of the plural cameras acquires a view of a respective different part of the livestock habitation, albeit overlapping parts in respect of adjacent cameras. The views acquired by the plural cameras may together cover substantially all of the livestock habitation. The plural cameras may be installed in an array. Alternatively or in addition, the plural cameras may be installed at elevated locations such that each camera acquires images from above of a respective different part of the livestock habitation.
As described above, the first and second image data sets are spatially related to each other to address an extent of overlap of the first and second image data sets.
The first and second image data sets may be spatially related to each other by a spatial scheme.
According to a first general approach, the plural image data sets may be stored or a subset of the plural image data sets may be stored having regard to object monitoring or animal tracking requirements. The stored image data sets may be accessed during the step of forming an object image data set. No composite image may be formed according to the first general approach described below.
The spatial scheme may comprise at least one quantity linked to each of the first and second image data sets, such as by way of a data structure, or may be comprised in each of the first and second image data sets. More specifically, there may be a quantity linked to or comprised in the image data set and for each of the image data comprised in the image data set. In a first form, the spatial scheme may be vector based. For example, there may be a vector associated with each of the first and second image data sets, such as by way of a data structure, and more specifically there may be a vector associated with each of the image data, for example associated with each pixel, comprised in the image data set. In a second form, the spatial scheme may be coordinate based. For example, there may be x and y coordinates associated with each of the first and second image data sets and more specifically there may be x and y coordinates associated with each of the image data, for example with each pixel, comprised in the image data set. As described further below, the second form may be the more appropriate of the two forms where each of the image data within an image data set has its location specified by way of coordinates.
According to a second general approach, the spatial scheme may comprise forming a composite image from the plural image data sets or a subset thereof. The image data sets may thus be spatially related to one another in the composite image. This approach may be appropriate where there are plural objects to be monitored at the same time, such as several animals to be tracked and several static objects to be monitored. The composite image may be formed in dependence on at least one quantity linked to or comprised in each of the image data sets, as described above, such as at least one coordinate for each of the image data comprised in an image data set. Alternatively or in addition, a correlation algorithm may be applied to adjacent image data sets to address overlap between the adjacent image sets whereby the image data sets are properly located in relation to each other in the composite image.
The step of spatially relating the first and second image data sets to each other may be carried out in dependence on an image data set map which represents the relative locations of the plural image data sets. The image data set map may be formed when cameras are installed in the livestock habitation, the relative location of the cameras is known, and the specification of the cameras is known. The image data map may relate the first and second image data sets to each other and more specifically may relate the image data comprised in the first image data set and the image data comprised in the second image data set to one another. For example, the image data map may comprise at least one map coordinate for each image data set.
Image data, or pixels, in an image data set acquired by a camera often have their relative positions designated by x and y coordinates. As described above, a spatial scheme for spatially relating the first and second image data sets may be coordinate based. A coordinate based spatial scheme, such as an image data map comprising at least one map coordinate for each image data set, may be appropriate in view of its ease of application to an image data set having relative positions designated by x and y coordinates. More specifically, the coordinate based spatial scheme may be applied to at least one of the x and y coordinates of each of the image data in an image data set to modify at least one of the x and y coordinates of each of the image data.
The process may further comprise installing each of the plural cameras in the livestock habitation. Where the livestock habitation is roofed, for example where the livestock habitation is a barn, the plural cameras may be mounted on the roof of the livestock habitation. Each of the plural cameras may therefore acquire images from above of a respective different part of the livestock habitation.
A height, h, of a camera above a plane of interest in the livestock habitation may be known. The plane of interest may be above ground, such as at or near the top of a monitored object or a tracked animal. The plane of interest may be a distance, ho, above the ground. Normally, the angle of view, θ, of the camera is also known. The width of the field of view in the plane of interest may therefore be given by 2 × h × tan(θ/2). Image data, or pixels, in an image data set acquired by the camera can thus be related to a location within the plane of interest.
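As a minimal numerical sketch of this relation, the values below are assumed for illustration (they are consistent with the 10.5 m field of view used in the earlier example) and are not specified by the disclosure.

```python
import math

theta_deg = 90.0  # assumed angle of view of the camera
h = 5.25          # m, assumed camera height above the plane of interest

# Width of the field of view in the plane of interest: 2 * h * tan(theta / 2)
width = 2 * h * math.tan(math.radians(theta_deg) / 2)
print(f"{width:.2f} m")  # -> 10.50 m
```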
When acquiring images of a livestock habitation it may be desirable on the one hand to locate a camera at a more elevated location to provide a greater field of view. A greater field of view may reduce the number of cameras needed. On the other hand, it may be desirable to locate the camera sufficiently close to the object or animal of interest to acquire images of sufficient detail and more specifically of sufficient detail to allow for object recognition. There may therefore be a compromise between extent of field of view and sufficient detail.
As mentioned above, the plane of interest may be at or near the top of a monitored object or a tracked animal. A compromise between providing a greater field of view and acquiring images of sufficient detail may therefore be struck in respect of the plane of interest and not in respect of the ground. If the height of the object or animal is ho and the height of the camera above the ground is hcam, the compromise may be struck in respect of the distance hcam − ho between the plane in which the camera is installed and the plane of interest. For example, if cattle are being monitored, an appropriate value for ho may be 1.7 m, which is the height of a large cow.
As described above, there is overlap in the views of the parts of the livestock habitation acquired by adjacent cameras. It may be desirable to have the overlap in the plane of interest and not on the ground. The angle of view of each of the adjacent cameras, the height of the plane of interest, and the extent of overlap may all be known. The distance between adjacent cameras may therefore be a function of the height of the cameras from the ground and also of the known angle of view of each of the adjacent cameras, of the height of the plane of interest, and of the extent of overlap. When the height of the cameras from the ground has been determined, such as in dependence on a compromise between field of view and detail in acquired images, this function may determine the distance between adjacent cameras to achieve a desired overlap in the plane of interest.
Assuming an overlap of do in the plane of interest, which lies at a distance of hcam − ho from the plane in which the camera is installed, the fields of view of adjacent cameras may cross at a height hx above the plane of interest. hx may therefore be given by (do/2) / tan(θ/2). The distance between the crossing height hx and the plane in which adjacent cameras lie may be hcam − ho − hx. The separation, L, between the adjacent cameras may be given by 2 × tan(θ/2) × (hcam − ho − hx).
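A short sketch of these two formulae follows. The input values are assumptions chosen to match the 10.5 m field of view and 1 m overlap of the earlier example, not values prescribed by the disclosure.

```python
import math

def camera_separation(theta_deg, h_cam, h_0, d_0):
    """Separation L between adjacent cameras giving an overlap d_0 in the
    plane of interest: L = 2 tan(theta/2) (h_cam - h_0 - h_x)."""
    t = math.tan(math.radians(theta_deg) / 2)
    h_x = (d_0 / 2) / t  # crossing height above the plane of interest
    return 2 * t * (h_cam - h_0 - h_x)

# Assumed values: 90 degree angle of view, camera 6.95 m above the ground,
# plane of interest at 1.7 m (a large cow), desired overlap of 1 m.
print(camera_separation(theta_deg=90.0, h_cam=6.95, h_0=1.7, d_0=1.0))
# -> approximately 9.5 m between adjacent cameras
```

With these assumed values each field of view is 10.5 m wide in the plane of interest, so a separation of 9.5 m leaves exactly the desired 1 m overlap.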
Often the cameras are mounted on the roof of a livestock habitation, such as the roof of a barn. The height of a barn roof usually varies, with slanted roofs being typical.
This may mean that rows of an array of cameras are mounted at different heights. One may wish to have a minimum overlap of fields of view of adjacent cameras to provide for uninterrupted coverage while allowing for imprecision in installation of cameras and variation from specified angle of view from camera to camera. An overlap of 1 m might, for example, be appropriate. Where the cameras in each row of an array are at substantially the same height above the ground and there is a difference in height above the ground of cameras from row to row, it may be advantageous to vary the number of cameras from row to row while maintaining in each row the minimum overlap of fields of view of adjacent cameras.
As described above, as the height of adjacent cameras is increased the extent of overlap of their fields of view may increase. This means that fewer, more widely spaced cameras may be installed in rows of cameras which are at greater height. The distance between adjacent cameras in a row may be a function of: the height from the ground of the cameras in the row; the angle of view of each of the cameras; the height of the plane of interest, if any; and the extent of overlap. The separation, L, between adjacent cameras in the row may be given by 2 × tan(θ/2) × (hcam − ho − hx), where θ is the angle of view, hcam is the height from the ground of the camera, ho is the height from the ground of the plane of interest, and hx is the height above the plane of interest at which the fields of view of adjacent cameras cross. hx may be given by (do/2) / tan(θ/2), where do is the extent of overlap in fields of view. The number of cameras in an array may thus be minimised whilst maintaining a desired minimum overlap of fields of view of adjacent cameras.
A large number of cameras may be needed where a large area of livestock habitation is to be monitored. At least one cable may extend from each of the cameras with the at least one cable providing for supply of electrical power to the camera, for supply of control signals to the camera, and for onward communication of image data from the camera. Control data and image data may be communicated by way of an Ethernet cable. Furthermore, the camera may be Power Over Ethernet (POE) enabled whereby electrical power is provided to the camera by way of the Ethernet cable and there is no need for a separate power cable. Installation may require much cable where there are a large number of cameras. The present inventors have realised that a daisy chain cabling scheme may provide for a substantial reduction in the amount of cable required.
The daisy chain cabling scheme may have a linear topology whereby cameras in a line of cameras are electrically connected to the daisy chain cabling scheme.
The daisy chain cabling scheme may comprise a 3-port switch for each of the plural cameras with exception of the camera of the plural cameras at a distal end of the daisy chain. The 3-port switch may provide for connection to a respective camera, to a camera or a 3-port switch further down the daisy chain, and to a 3-port switch or a spaced apart location further up the daisy chain.
The daisy chain cabling scheme may further comprise a switch controller at a location spaced apart from the plural cameras, such as a location on the ground within the livestock habitation. The switch controller may control each 3-port switch to thereby connect to each of the plural cameras selectively. The daisy chain cabling scheme may be operative to connect to each of the plural cameras, such as in turn or when need arises.
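A minimal sketch of such selective access is given below. The SwitchController class and its methods are hypothetical; a real installation would drive the 3-port switches through whatever management interface they expose, which this disclosure does not specify.

```python
class SwitchController:
    """Hypothetical controller for a daisy chain of 3-port switches."""

    def __init__(self, camera_ids):
        self.camera_ids = list(camera_ids)  # ordered proximal to distal

    def select(self, camera_id):
        # Configure the switches along the chain so that traffic from
        # camera_id is forwarded up the chain to the processor (stubbed).
        print(f"routing {camera_id} to processor")

    def poll_all(self, grab_frame):
        # Connect to each camera in turn and collect one image data set each.
        frames = {}
        for cam in self.camera_ids:
            self.select(cam)
            frames[cam] = grab_frame(cam)
        return frames

ctrl = SwitchController(["cam_1", "cam_2", "cam_3"])
frames = ctrl.poll_all(lambda cam: f"<image data set from {cam}>")
```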
As mentioned above, an application of the process may comprise object detection.
The process may therefore further comprise detecting an object in the object image set, the object detected with a neural network object detection algorithm and more specifically a convolutional neural network (CNN) object detection algorithm. For example, the CNN object detection algorithm may be the YOLO algorithm or the ResNet algorithm. Depending on circumstances, the object image set may provide for detection to a lower degree of confidence than may be achieved from one image data set which wholly contains the object. Detection to a lower degree of confidence may suffice for identifying a form of object, such as a cow, but may be insufficient to distinguish one cow from another. The process may therefore yet further comprise detecting an object in at least one of the plural image data sets wherein the object is substantially wholly within the image data set.
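For illustration, a detection pass over an object image set might look like the sketch below. It assumes the third-party ultralytics package and a generic COCO-trained YOLO weights file; the disclosure itself does not specify an implementation, and a deployed system would be trained on livestock imagery.

```python
from ultralytics import YOLO  # assumed third-party dependency

model = YOLO("yolov8n.pt")               # assumed pre-trained weights file
results = model("object_image_set.png")  # the formed object image data set
for box in results[0].boxes:
    label = model.names[int(box.cls)]
    if label == "cow":
        print(label, float(box.conf), box.xyxy.tolist())
```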
Where the detected object is an animal that moves, tracking of the animal may be desirable. The process may further comprise tracking a detected animal by way of an object tracking algorithm. The object tracking algorithm may be based on alpha-beta filtering or Kalman filtering. The object tracking algorithm may be operative on at least one object image set. Often the object tracking algorithm may be operative on at least one object image set and at least one image data set.
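As an indicative sketch, an alpha-beta filter for one coordinate of a tracked animal's position is only a few lines; the gains and measurements below are illustrative and not taken from the disclosure.

```python
def alpha_beta_track(measurements, dt=1.0, alpha=0.85, beta=0.005):
    """Track position and velocity from noisy position measurements."""
    x, v = measurements[0], 0.0  # initial position and velocity
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt             # predict next position
        residual = z - x_pred           # innovation against the measurement
        x = x_pred + alpha * residual   # correct position
        v = v + (beta / dt) * residual  # correct velocity
        estimates.append(x)
    return estimates

# e.g. noisy x positions (metres) of a cow crossing between fields of view
print(alpha_beta_track([0.0, 0.4, 0.9, 1.5, 1.9, 2.6]))
```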
As described above, apparatus to provide for the process may comprise plural cameras which may be disposed in a row or in plural rows to form an array of cameras. The plural cameras may be Internet Protocol (IP) cameras.
Alternatively or in addition, apparatus to provide for the process may comprise at least one digital processor to perform at least one of the steps of the process. According to a first approach, a digital processor may be disposed at each of plural cameras whereby at least one step of the process is carried out locally to a camera. The first approach may be applied together with at least one of the second and third approaches described below. According to a second approach, a digital processor may be at a location in the livestock habitation spaced apart from the plural cameras, such as on the ground inside a barn, the processor receiving image data sets from all of the plural cameras. According to a third approach, a digital processor may be located remotely from the livestock habitation, the digital processor receiving image data sets from all of the plural cameras. Furthermore, image data sets from the plural cameras may be conveyed to the remotely located digital processor by way of the Internet. The digital processor may be constituted by a cloud computing arrangement. The digital processor may therefore be of distributed form. The second and third approaches may be applied together with, for example, preliminary processing according to the second approach and completion of processing according to the third approach.
The at least one digital processor may be configured to perform one or more of the processes described herein. Apparatus may comprise structures and/or non-transitory memory having programmed instructions which are operated upon by the at least one digital processor and perhaps may further comprise electronic circuitry to perform these processes.
According to a second aspect of the present invention there is provided electrical apparatus operative to form an object image data set from image data sets representing a livestock habitation, the electrical apparatus comprising a digital processor and a data store, in which the data store stores plural image data sets, each of the plural image data sets comprising image data representing a view of a different part of a livestock habitation, and first and second image data sets of the plural image data sets having image data in common whereby the image data comprised in the first and second image data sets represent first and second views respectively of different but overlapping parts of the livestock habitation, the digital processor spatially relates the first and second image data sets to each other to account for an extent of overlap of the first and second image data sets, and the digital processor forms an object image data set from the image data comprised in the spatially related first and second image data sets, the object image data set comprising at least a part of the image data common to the first and second image data sets, first image data from the first image data set, and second image data from the second image data set, each of the first and second image data conterminous with the at least a part of the image data in common and extending beyond the at least a part of the image data in common partway across image data comprised in a respective one of the first and second image data sets whereby the object image data set constituted by the first image data, the at least a part of the image data in common, and the second image data represents a scene in the livestock habitation.
Embodiments of the second aspect of the present invention may comprise one or more features of the first aspect of the present invention.
The present inventors appreciated the feature of varying spacing of cameras from one row of cameras to the next row of cameras in dependence on differing heights of rows of cameras to be of wider applicability than hitherto described. Therefore and according to a third aspect of the present invention, there is provided a method of installing an array of cameras at elevated locations in a livestock habitation, the method comprising: determining an installation height of each of first and second rows of cameras in the array of cameras, the first row of cameras to be installed higher above ground of the livestock habitation than the second row of cameras; determining an installation distance between adjacent cameras in each of the first and second rows of cameras, the installation distance determined as a function of: height from the ground of the adjacent cameras in the respective row; angle of view of the adjacent cameras; and an extent of overlap of fields of view of adjacent cameras; and installing the array of cameras in the livestock habitation, adjacent cameras in the first row of cameras spaced further apart from each other than adjacent cameras in the second row of cameras.
The installation distance may be determined further as a function of the height of a plane of interest, the plane of interest spaced apart from the ground of the livestock habitation.
The separation, L, between adjacent cameras in a row of cameras may be given by 2 × tan(θ/2) × (hcam − ho − hx), where θ is the angle of view, hcam is the height from the ground of the cameras, ho is the height from the ground of the plane of interest, and hx is the height above the plane of interest at which the fields of view of adjacent cameras cross. hx may be given by (do/2) / tan(θ/2), where do is the extent of overlap in fields of view. The number of cameras in an array may thus be minimised whilst maintaining a desired minimum overlap of fields of view of adjacent cameras.
Further embodiments of the third aspect of the present invention may comprise one or more features of the first aspect of the present invention.
The present inventors appreciated the feature of a daisy chain cabling scheme to be of wider applicability than hitherto described. Therefore and according to a fourth aspect of the present invention, there is provided a livestock monitoring apparatus 10 comprising: plural cameras installed in a livestock habitation, each of the plural cameras acquiring image data of a respective different part of the livestock habitation; a processor at a location spaced apart from the plural cameras; and a cable arrangement conveying the image data acquired by the plural cameras from the plural cameras to the processor, wherein the cable arrangement has a daisy chain configuration.
A large number of cameras may be needed where a large area of livestock habitation is to be monitored. A daisy chain cabling scheme may provide for a substantial reduction in the amount of cable which would otherwise be required.
The daisy chain cabling scheme may have a linear topology whereby cameras in a line of cameras are electrically connected to the daisy chain cabling scheme.
The cable arrangement may comprise a 3-port switch for each of the plural cameras with exception of one camera of the plural cameras, such as the camera at a distal end of the daisy chain. A first 3-port switch may provide for connection to one of the cameras, to the excepted camera which has no 3-port switch of its own, and to the processor. A second 3-port switch may provide for connection to another one of the cameras, to the first 3-port switch, and to the processor. Where there are further 3-port switches, each further 3-port switch may provide for connection to a respective camera, to the previous 3-port switch, and to the processor and more specifically to the processor by way of at least one yet further 3-port switch.
The daisy chain cabling scheme may further comprise a switch controller which controls each 3-port switch to thereby connect each of the plural cameras selectively, such as in turn or as need arises, to the processor. The switch controller may be at a location spaced apart from the plural cameras, such as adjacent or comprised in the processor.
Further embodiments of the fourth aspect of the present invention may comprise one or more features of any previous aspect.
According to a fifth aspect of the present invention, there is provided a method of monitoring livestock, the method comprising: acquiring image data of each of plural different parts of a livestock habitation by way of a respective one of plural cameras installed in the livestock habitation; and conveying the image data acquired by the plural cameras from the plural cameras to a processor at a location spaced apart from the plural cameras, wherein the image data acquired by the plural cameras is conveyed from the plural cameras to the processor by a cable arrangement having a daisy chain configuration.
Embodiments of the fifth aspect of the present invention may comprise one or more features of any of the first to third aspects.
Brief Description of Drawings
Further features and advantages of the present invention will become apparent from the following specific description, which is given by way of example only and with reference to the accompanying drawings, in which:
Figure 1 is a view from one side of image processing apparatus installed in a barn according to the invention;
Figure 2 is a block diagram representation of a daisy chain cabling scheme according to an embodiment of the invention;
Figure 3 is a detailed view of two adjacent cameras in the image processing apparatus of Figure 1;
Figure 4 is a flow chart representation of a process according to the invention; and
Figure 5 is a representation of formation of an object image set according to the invention.
Description of Embodiments
Figure 1 shows a view from one side of image processing apparatus 10 when installed in a barn 12 (which constitutes a livestock habitation) according to the invention. The barn 12 is of conventional form and has a roof (not shown), which may be pitched, over the ground 14, and sides (not shown), which extend at least part way up from the ground to the roof. The barn 12 thus defines an enclosed space in which livestock, such as cattle, are housed. Figure 1 shows one cow 16, which is free to roam around the enclosed space of the barn 12, and a trough 18 (which constitutes a static object). The image processing apparatus 10 comprises an array of downwardly directed IP cameras 20 which are mounted on the roof of the barn at spaced apart locations whereby each IP camera 20 acquires images of a respective different part of the enclosed space. As described further below with reference to Figure 3, the fields of view of adjacent IP cameras 20 overlap to some extent to provide for uninterrupted coverage of the enclosed space while allowing for inaccuracy of installation of the IP cameras. Figure 1 shows only one row of IP cameras 20 in the array. The array therefore comprises further rows of IP cameras extending into and out of the page. Each IP camera 20 is of conventional form and function.
As further described below with reference to Figure 2, the IP cameras 20 are in data communication with a local processor 22. The local processor 22 is of conventional form and function. Communication between the local processor 22 and the IP cameras 20 is by way of Ethernet cabling. Data communicated over the Ethernet cabling comprises image data acquired by the IP cameras 20 and control data conveyed to the IP cameras 20. Where the IP cameras 20 are Power Over Ethernet (POE) enabled, electrical power is provided to the IP cameras 20 by way of the Ethernet cable. Otherwise, separate power cables provide electrical power to the IP cameras 20. In a first form, image processing steps according to the invention are performed by the local processor 22. In a second form, the image processing apparatus 10 comprises a cloud computing resource 24 of conventional form and function. In the second form, the local processor 22 is operative as a relay station to convey the image data received from the IP cameras 20 to the cloud computing resource 24 by way of the Internet. The cloud computing resource 24 then performs image processing steps according to the invention. In a third form, image processing steps according to the invention are performed in a distributed fashion by way of a camera processor (not shown) comprised in or adjacent some if not all of the IP cameras 20. The camera processor is a Jetson Xavier XT from NVIDIA, Trinity House, Cambridge Business Park, Cowley Road, Cambridge, CB4 OWZ, United Kingdom. In the third form, the local processor 22 provides for onward communication of results from distributed image processing, such as animal tracking results or alarms arising from animal recognition or tracking.
A block diagram representation of a daisy chain cabling scheme according to an embodiment of the invention is shown in Figure 2. Figure 2 shows a first IP camera 30, a second IP camera 32, and a third IP camera 34 of the row of IP cameras 20 shown in Figure 1. Figure 2 also shows a first 3-port switch 36, a second 3-port switch 38, and a switch controller 40. The first IP camera 30 is connected to a first port of the first 3-port switch 36 by way of Ethernet cable and the second IP camera 32 is connected to a second port of the first 3-port switch 36 by way of Ethernet cable. The third port of the first 3-port switch 36 is connected to the first port of the second 3-port switch 38 by way of Ethernet cable and the second port of the second 3-port switch 38 is connected to the third IP camera 34. The third port of the second 3-port switch 38 is connected to the first port of a further 3-port switch (not shown) and so on until all of the IP cameras are daisy chained in like fashion. The final 3-port switch in the daisy chain is connected by way of Ethernet cable to a 3-port switch controller 40. The 3-port switch controller 40 is constituted by the local processor 22 shown in Figure 1. The 3-port switch controller 40 controls the 3-port switches 36, 38 in the daisy chain cabling scheme to control access to either image data or processed image data and onward communication thereof depending on the form of the image processing apparatus 10. Control may involve accessing each of all IP cameras 20 in turn or accessing in turn a subset of all IP cameras depending on where an animal 16 being tracked moves within the barn 12. Alternatively, control may involve accessing two adjacent IP cameras in turn, for example where monitoring the trough 18 is of interest. The control strategy applied is under the direction of an operator remote from the barn.
Figure 3 provides a detailed view of two adjacent IP cameras 20 in the image processing apparatus 10 of Figure 1. Features in common with Figure 1 are designated in Figure 3 with like reference numbers. As can be seen from Figure 3, each of the IP cameras 20 has an angle of view 0 which results in an overlap of the fields of view of the IP cameras on the ground 14 and also at a height ho above the ground. Cow recognition and tracking involves anatomical features in the upper body of the cow. The plane of interest for image acquisition is therefore at the upper body of the cow. Assuming a large cow is 1.7 m high, the plane of interest is set at 1.7 m whereby ho = 1.7 m. As mentioned above, the fields of view of the adjacent IP cameras 20 overlap to some extent to provide for uninterrupted coverage of the enclosed space while allowing for inaccuracy of installation of the IP cameras. If an IP camera is installed inaccurately, the buffer afforded by the overlap minimises the risk of a gap between the adjacent fields of view. The overlap, do, is in the plane of interest and is set for the present example at 1 m.
Referring again to Figure 3, the height above the plane of interest where the adjacent fields of view cross is hx. hx is given by (do/2) / tan(θ/2), where do is the extent of overlap in fields of view, and θ is the angle of view of each IP camera. The separation, L, between adjacent IP cameras 20 in a row is given by 2 × tan(θ/2) × (hcam − ho − hx), where hcam is the height from the ground of the camera, and ho is the height from the ground of the plane of interest. As mentioned above, the roof of the barn 12 may be pitched whereby rows of IP cameras 20 may be installed at different heights. Where a row of IP cameras 20 is installed at greater height than another row of IP cameras, there is scope for increased separation, L, between adjacent IP cameras 20 in the higher row without breaching a desired minimum overlap in adjacent fields of view. The equation for L in the present paragraph provides for calculating L for rows of cameras at different heights, hcam. The number of cameras in an array can thus be minimised whilst maintaining a desired minimum overlap in the plane of interest of fields of view of adjacent IP cameras 20.
Figure 3 shows a cow 16 which has moved so that the middle of the cow is in the overlap between the two fields of view, the rear of the cow is in the field of view of a first one of the IP cameras 20 and outside the overlap, and the head of the cow is in the field of view of a second one of the IP cameras 20 and outside the overlap. Alternatively, Figure 3 shows a trough 18 which is located such that its middle section is in the overlap between the two fields of view, one end section of the trough is in the field of view of a first one of the IP cameras 20 and outside the overlap, and the opposite end section of the trough is in the field of view of a second one of the IP cameras 20 and outside the overlap. As described hereinabove, installation of an array of IP cameras 20 in a barn to provide uninterrupted coverage of a desired area presents sufficient difficulty without ensuring that the trough 18 and other static objects each fall wholly within the field of view of one IP camera. Object detection, tracking, and recognition depends on most if not all of the object falling within one field of view. As shown by way of Figure 3, this is not the case for the present cow 16 or trough 18 of interest. Image processing is therefore applied to address this problem as described in more detail below with reference to Figures 4 and 5.
Image processing according to the invention will now be described with reference to the flow chart 50 in Figure 4 and the representation in Figure 5 of formation of an object image set.
A first step according to the present example is formation of an image data set map 52 when the IP cameras 20 have been installed in the barn. The relative locations of the IP cameras 20 and the specification of the IP cameras 20 are known. The image data map spatially relates the IP cameras 20 in the array to one another by way of x and y coordinates. During use of the image processing apparatus 10, the plural IP cameras 20 are operative to acquire at least one image data set for each IP camera 54. Each image data set represents a view at the plane of interest of a different part of the barn 12 acquired by one of the IP cameras 20. Furthermore, each image data set comprises an array of pixels with the position of pixels in the array relative to one another specified by way of x and y coordinates.
Thereafter, the acquired image data sets are processed in accordance with one of the three forms of image processing apparatus 10 described above. Irrespective of the form of image processing apparatus 10, and referring to Figures 3 and 5, a first image data set 72 from a first one of two adjacent IP cameras 20 and a second image data set 74 from a second one of the two adjacent IP cameras 20 are spatially related to each other to account for the extent of overlap 76 in the plane of interest of their fields of view 56.
According to a first approach, the pair of x and y coordinates for the first adjacent IP camera 20 are applied to the x and y coordinates of every pixel in the first image data set 72, and the pair of x and y coordinates for the second adjacent IP camera 20 are applied to the x and y coordinates of every pixel in the second image data set 74. All of the pixels in the first and second image data sets 72, 74 are thus spatially related to one another with image data common to the first and second image data sets 76 designated by common x and y coordinates. Although Figure 5 shows the spatially related first and second image data sets 72, 74 together in what constitutes a composite image, according to the first approach the first and second image data sets 72, 74 are stored separately whereby no composite image is formed. Nevertheless, the coordinates for pixels in the first and second image data sets 72, 74 as modified by application of the x and y coordinates for the adjacent IP cameras 20 amounts to the first and second image data sets being spatially related to each other to account for the extent of overlap 76 in the plane of interest of their fields of view.
According to a second approach, a composite image is formed from the plural image data sets or a subset thereof, such as the first and second image data sets 72, 74, and then stored. A composite image 70 formed from the first and second image data sets 72, 74 is shown in Figure 5. The composite image 70 is formed following the first approach described above with the positions of the first and second image data sets 72, 74, including the overlap 76, determined by the modified coordinates for pixels in the first and second image data sets. Alternatively, the x and y coordinates for the adjacent IP cameras 20 provide the relative locations of the first and second image data sets 72, 74 and a correlation algorithm is applied to the first and second image data sets to accurately position the first and second image data sets in relation to each other.
When the first and second image data sets 72, 74 have been spatially related to each other, an object image data set is formed from the image data 58, and more specifically from pixels and their coordinates, comprised in the spatially related first and second image data sets. The object image data set is represented by dashed box 78 in Figure 5. As described above, an object of interest may be a cow 16 or a trough 18 which extends into each of the first and second image data sets 72, 74 beyond the overlap 76 in their fields of view. Further as described above, object detection, tracking, and recognition is achieved when the object is wholly within an image data set. This is achieved by formation of the object image data set 78. The length and width of the object image data set 78 depend on the dimensions of the object of interest. The field of view of each of the IP cameras 20 is such that the width of the object is not greater than the field of view. However, the field of view of each of the IP cameras 20 and the extent of overlap 76 are such that the object does not extend along its length completely across the spatially related first and second image data sets 72, 74. The object image data set 78 therefore comprises a section 80 of the image data common to the first and second image data sets 72, 74, first image data 82 from the first image data set 72, and second image data 84 from the second image data set 74. The section 80 of the common image data is a part of the image data in the overlap 76 in a direction orthogonal to the direction of separation of the first and second image data sets, and all of the image data in this part in the direction of separation of the first and second image data sets. The section 80 of the common image data is taken from one of the first and second image data sets 72, 74. Each of the first and second image data 82, 84 is conterminous with the section 80 of image data in common and extends beyond the section of image data in common partway across image data comprised in a respective one of the first and second image data sets 72, 74 whereby the object image data set 78 constituted by the first image data 82, the section of image data in common 80, and the second image data 84 represents a scene in the livestock habitation containing an object of interest, such as a cow 16 or a trough 18.
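A sketch of this assembly in terms of array slices follows. The frame sizes, the pixel width of the overlap, and the rows selected are assumptions for illustration; reference numerals in the comments refer to Figure 5.

```python
import numpy as np

def form_object_image_set(img_a, img_b, overlap_px, y0, y1):
    """Take the full overlap strip plus equal flanking strips, rows y0:y1."""
    strip_px = overlap_px  # e.g. 1 m each side for a 1 m overlap
    first = img_a[y0:y1, -(overlap_px + strip_px):-overlap_px]  # 82
    common = img_a[y0:y1, -overlap_px:]  # section 80, taken from img_a
    second = img_b[y0:y1, overlap_px:overlap_px + strip_px]     # 84
    return np.hstack([first, common, second])  # 78: one continuous scene

img_a = np.zeros((720, 1000, 3), dtype=np.uint8)  # placeholder frames
img_b = np.zeros((720, 1000, 3), dtype=np.uint8)
obj = form_object_image_set(img_a, img_b, overlap_px=95, y0=200, y1=520)
print(obj.shape)  # -> (320, 285, 3)
```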
Thereafter, a convolutional neural network (CNN) object detection algorithm is applied to the object image data set 78 to detect the object of interest 60. The CNN object detection algorithm is the YOLO algorithm or the ResNet algorithm. Application of the YOLO or the ResNet algorithm is within the ordinary design skills of the person skilled in the art. The CNN object detection algorithm is trained with a training set of image data which contains sufficient variety of object image data to provide for detection of the object(s) of interest. Having now detected and recognised the object of interest, the object image data set 78 is subject to whatever further image processing is required depending on the application to hand. For example, and where the object of interest is a trough 18, the object image data set 78 is analysed 62 to determine a level of food or water in the trough. For example, and where the object of interest is a cow 16, the object image data set 78 is analysed 64 to identify rumination, to monitor interaction with other cows, or to observe certain other behaviours. In some circumstances, the object image data set 78 is unsuited to determining a physiological parameter of the cow, such as cow body condition score or the relative disposition of different parts of the cow which are indicative of cow health. In such circumstances, the cow's progress is tracked, as described below, until the cow is wholly within the field of view of one camera, and perhaps also in a location that is conducive to a certain form of analysis, and the physiological parameter of the cow is determined on the basis of at least one image acquired with this one camera.
A cow 16 is liable to move around a livestock habitation, and it is desirable to keep a cow of interest under observation. The process therefore comprises applying an object tracking algorithm 66 to further acquired image data sets having regard to the cow 16 detected in the preceding step and further analysing the moved cow in the further acquired image data sets 68. As mentioned above, improved analysis, such as body condition scoring, can be achieved when an image of the whole cow is acquired with one camera. The object tracking algorithm is based on alpha-beta filtering or Kalman filtering. Application of alpha-beta or Kalman filtering to achieve object tracking is within the ordinary design skills of the person skilled in the art. When the cow 16 has moved such that it extends into the non-overlapping parts of two fresh adjacent image data sets, the two fresh adjacent image data sets are spatially related to each other 56 before a fresh object image data set is formed 58, as described above. In view of the cow having been detected already, there is usually no need to perform object recognition 60 again, whereby the process proceeds to further tracking of the cow or to cow analysis 64, as described above.
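The sketch below illustrates alpha-beta filtering of the kind the specification mentions, applied to a one-dimensional track of a bounding-box coordinate between frames. The gains, time step, and measurements are invented for illustration and are not taken from the specification.

```python
def alpha_beta_track(measurements, dt=1.0, alpha=0.85, beta=0.005):
    """Track a scalar coordinate with a standard alpha-beta filter."""
    x, v = measurements[0], 0.0          # initial position estimate and velocity
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt              # predict position from current state
        residual = z - x_pred            # innovation: measured minus predicted
        x = x_pred + alpha * residual    # correct the position estimate
        v = v + (beta / dt) * residual   # correct the velocity estimate
        estimates.append(x)
    return estimates

# e.g. noisy x-coordinates of a cow's bounding-box centre over successive frames
print(alpha_beta_track([100.0, 104.0, 109.0, 115.0, 118.0]))
```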

Claims (19)

  1. A process of forming by electronic methods an object image data set from image data sets representing a livestock habitation, the process comprising: receiving each of plural image data sets from a respective one of plural cameras installed in a livestock habitation, each of the plural image data sets comprising image data representing a view of a different part of the livestock habitation, and first and second image data sets of the plural image data sets having image data in common whereby the image data comprised in the first and second image data sets represent first and second views respectively of different but overlapping parts of the livestock habitation; spatially relating the first and second image data sets to each other to account for an extent of overlap of the first and second image data sets; and forming an object image data set from the image data comprised in the spatially related first and second image data sets, the object image data set comprising at least part of the image data common to the first and second image data sets, first image data from the first image data set, and second image data from the second image data set, each of the first and second image data conterminous with the at least part of the image data in common and extending beyond the at least part of the image data in common partway across image data comprised in a respective one of the first and second image data sets whereby the object image data set constituted by the first image data, the at least part of the image data in common, and the second image data represents a scene in the livestock habitation.
  2. The process of claim 1 further comprising installing the plural cameras in the livestock habitation before the step of receiving each of plural image data sets from a respective one of the plural cameras, the plural cameras installed such that fields of view of adjacent pairs of cameras of the plural cameras overlap.
  3. The process of claim 2 wherein the plural cameras are installed at elevated locations in the livestock habitation in an array of at least two dimensions such that each of the plural cameras acquires images from above of a respective different part of the livestock habitation.
  4. The process of claim 3 wherein the plural cameras are installed with a separation, L, between adjacent cameras, L given by L = 2 × tan(θ/2) × (hcam − hp − hx), where θ is the angle of view of each of the adjacent cameras, hcam is the height of the adjacent cameras above the ground, hp is the height of a plane of interest from the ground, and hx is the height above the plane of interest where the fields of view of the adjacent cameras cross, wherein hx is given by hx = (do/2) / tan(θ/2), where do is the extent of overlap in the plane of interest of the fields of view of the adjacent cameras.
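Purely as a worked check of the claim 4 formulas, and not part of the claims, the figures below are invented: a 90-degree angle of view, cameras 4 m above the ground, a plane of interest 1.5 m above the ground, and 0.5 m of overlap in that plane.

```python
import math

theta = math.radians(90)                       # angle of view, θ
h_cam, h_p, d_o = 4.0, 1.5, 0.5                # invented example values, in metres
h_x = (d_o / 2) / math.tan(theta / 2)          # height above the plane where fields of view cross
L = 2 * math.tan(theta / 2) * (h_cam - h_p - h_x)
print(round(h_x, 3), round(L, 2))              # 0.25 and 4.5 (metres)
```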
  5. The process of claim 4 wherein the plural cameras are attached to a roof of the livestock habitation, the roof varying in height from the ground whereby different rows in the installed array are at different heights from the ground, and the separation, L, between adjacent cameras in each row is determined by L = 2 × tan(θ/2) × (hcam − hp − hx).
  6. The process of any one of the preceding claims, wherein the plural image data sets are stored in a data store, the stored plural image data sets are accessed during the step of forming an object image data set, and the object image data set is formed by linking together by way of a data structure image data in the first and second image data sets whereby there is no formation of a composite image per se comprising the object image data set.
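A sketch, for illustration only, of the claim 6 arrangement, in which the object image data set is a data structure linking regions of the stored image data sets rather than a materialised composite image; all names and the region extents are invented.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Region:
    data_set_id: str   # which stored image data set the region belongs to
    rows: slice        # row extent within that set
    cols: slice        # column extent within that set

@dataclass
class ObjectImageDataSet:
    regions: list      # ordered Regions: first data, common section, second data

    def pixels(self, store):
        # Resolve the links only when pixel values are actually needed;
        # no composite image is formed up front.
        return [store[r.data_set_id][r.rows, r.cols] for r in self.regions]

store = {"set_72": np.zeros((720, 1280, 3)), "set_74": np.zeros((720, 1280, 3))}
obj = ObjectImageDataSet([
    Region("set_72", slice(420, 620), slice(0, 1280)),  # first image data 82
    Region("set_72", slice(620, 720), slice(0, 1280)),  # common section 80
    Region("set_74", slice(100, 250), slice(0, 1280)),  # second image data 84
])
print([p.shape for p in obj.pixels(store)])
```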
  7. The process of any one of the preceding claims, wherein the first and second image data sets are spatially related to each other by a spatial scheme, the spatial scheme comprising coordinates associated with each of the image data comprised in the first and second image data sets.
  8. The process of claim 7 further comprising forming an image data set map when the relative locations of the plural cameras are known after installation, the image data set map representing the relative locations of views represented by the plural image data sets, and the image data set map comprising x and y map coordinates for each image data set.
  9. The process of claim 8 wherein image data in each image data set have their relative positions within the image data set designated by x and y coordinates, the process further comprising modifying the x and y coordinates for the image data in each image data set with the respective x and y map coordinates comprised in the image data set map.
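For illustration of claims 8 and 9 only: one plausible reading is that each image data set carries a map offset which shifts its local pixel coordinates into shared habitation coordinates. The offset values below are invented.

```python
# Hypothetical image data set map: per-camera (x, y) map coordinates in pixels.
image_data_set_map = {"cam_0": (0, 0), "cam_1": (0, 1080), "cam_2": (0, 2160)}

def to_map_coordinates(camera_id, x, y):
    """Modify local x and y coordinates with the set's map coordinates."""
    dx, dy = image_data_set_map[camera_id]
    return x + dx, y + dy

print(to_map_coordinates("cam_1", 320, 240))   # (320, 1320)
```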
  10. The process of any one of the preceding claims further comprising detecting an object in the object image data set, wherein the object is detected with a convolution neural network (CNN) object detection algorithm.
  11. The process of claim 10 further comprising detecting the already detected object in a further acquired image data set following movement of the object, wherein the object is substantially wholly within the further acquired image data set and detection of the already detected object in the further acquired image data set is with a convolution neural network (CNN) object detection algorithm.
  12. The process of claim 10 or 11 further comprising tracking the detected object by way of an object tracking algorithm.
  13. The process of claim 12 wherein the object tracking algorithm comprises at least one of alpha-beta filtering and Kalman filtering.
  14. The process of any one of the preceding claims wherein the plural cameras are installed in at least one row, the cameras in each at least one row are electrically connected to provide electrical power to the cameras and for communication from the cameras of image data or data derived therefrom, and the cameras in each at least one row are electrically connected by a daisy chain cabling scheme.
  15. The process of claim 14 wherein the daisy chain cabling scheme comprises a 3-port switch for each of the cameras in each at least one row with the exception of a camera at a distal end of the daisy chain, adjacent 3-port switches connected to each other and to their respective cameras in the row.
  16. The process of claim 15 wherein the daisy chain cabling scheme further comprises a switch controller at a location spaced apart from the plural cameras, the switch controller controlling each 3-port switch in each at least one row of cameras to thereby connect to each of the plural cameras selectively.
  17. The process of any one of the preceding claims, wherein at least one digital processor performs the steps of the process by operation on structures and/or non-transitory memory having programmed instructions.
  18. Electrical apparatus operative to form an object image data set from image data sets representing a livestock habitation, the electrical apparatus comprising a digital processor, a data store, and plural cameras installed in a livestock habitation, in which the data store stores each of plural image data sets received from a respective one of the plural cameras, each of the plural image data sets comprising image data representing a view of a different part of the livestock habitation, and first and second image data sets of the plural image data sets having image data in common whereby the image data comprised in the first and second image data sets represent first and second views respectively of different but overlapping parts of the livestock habitation, the digital processor spatially relates the first and second image data sets to each other to account for an extent of overlap of the first and second image data sets, and the digital processor forms an object image data set from the image data comprised in the spatially related first and second image data sets, the object image data set comprising at least a part of the image data common to the first and second image data sets, first image data from the first image data set, and second image data from the second image data set, each of the first and second image data conterminous with the at least a part of the image data in common and extending beyond the at least a part of the image data in common partway across image data comprised in a respective one of the first and second image data sets whereby the object image data set constituted by the first image data, the at least a part of the image data in common, and the second image data represents a scene in the livestock habitation.
  19. A process of forming by electronic methods an object image data set from image data sets representing a livestock habitation, the process comprising: receiving plural image data sets, each of the plural image data sets comprising image data representing a view of a different part of a livestock habitation, and first and second image data sets of the plural image data sets having image data in common whereby the image data comprised in the first and second image data sets represent first and second views respectively of different but overlapping parts of the livestock habitation; spatially relating the first and second image data sets to each other to account for an extent of overlap of the first and second image data sets; and forming an object image data set from the image data comprised in the spatially related first and second image data sets, the object image data set comprising at least part of the image data common to the first and second image data sets, first image data from the first image data set, and second image data from the second image data set, each of the first and second image data conterminous with the at least part of the image data in common and extending beyond the at least part of the image data in common partway across image data comprised in a respective one of the first and second image data sets whereby the object image data set constituted by the first image data, the at least part of the image data in common, and the second image data represents a scene in the livestock habitation.
GB2118316.5A 2021-12-16 2021-12-16 Image processing Pending GB2614051A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2118316.5A GB2614051A (en) 2021-12-16 2021-12-16 Image processing

Publications (1)

Publication Number Publication Date
GB2614051A true GB2614051A (en) 2023-06-28

Family

ID=86611132

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2118316.5A Pending GB2614051A (en) 2021-12-16 2021-12-16 Image processing

Country Status (1)

Country Link
GB (1) GB2614051A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020122113A1 (en) * 1999-08-09 2002-09-05 Foote Jonathan T. Method and system for compensating for parallax in multiple camera systems
WO2013144447A1 (en) * 2012-03-30 2013-10-03 Mirasys Oy A method, an apparatus and a computer program for tracking an object in images
JP2015033066A (en) * 2013-08-06 2015-02-16 株式会社ニコン Camera system
US9355433B1 (en) * 2015-06-30 2016-05-31 Gopro, Inc. Image stitching in a multi-camera array
US9466109B1 (en) * 2015-06-30 2016-10-11 Gopro, Inc. Image stitching in a multi-camera array
WO2019168323A1 (en) * 2018-02-27 2019-09-06 엘지이노텍 주식회사 Apparatus and method for detecting abnormal object, and photographing device comprising same
WO2019172618A1 (en) * 2018-03-05 2019-09-12 Samsung Electronics Co., Ltd. Electronic device and image processing method

Similar Documents

Publication Publication Date Title
Fang et al. Comparative study on poultry target tracking algorithms based on a deep regression network
Kashiha et al. Automatic monitoring of pig locomotion using image analysis
Kashiha et al. Automatic weight estimation of individual pigs using image analysis
US20100288198A1 (en) Arrangement and Method for Determining the Position of an Animal
Yang et al. An automatic recognition framework for sow daily behaviours based on motion and image analyses
EP2740049B1 (en) Method for automatic behavioral phenotyping
WO2020003310A1 (en) Monitoring livestock in an agricultural pen
NL1016836C2 (en) Farm management system with cameras for following animals on the farm.
CA3125658A1 (en) Automatic driving system for grain processing, automatic driving method, and path planning method
WO2017127188A1 (en) Unmanned livestock monitoring system and methods of use
Van Hertem et al. Comparison of segmentation algorithms for cow contour extraction from natural barn background in side view images
Del Valle et al. Unrest index for estimating thermal comfort of poultry birds (Gallus gallus domesticus) using computer vision techniques
Bhoj et al. Image processing strategies for pig liveweight measurement: Updates and challenges
CN110490161B (en) Captive animal behavior analysis method based on deep learning
KR102584357B1 (en) Apparatus for identifying a livestock using a pattern, and system for classifying livestock behavior pattern based on images using the apparatus and method thereof
Ratnayake et al. Towards computer vision and deep learning facilitated pollination monitoring for agriculture
Salau et al. Dairy cows’ contact networks derived from videos of eight cameras
GB2614051A (en) Image processing
Doornweerd et al. Passive radio frequency identification and video tracking for the determination of location and movement of broilers
Ozguven The digital age in agriculture
US20230292737A1 (en) A method of selectively treating vegetation in a field
CN109814551A (en) Cereal handles automated driving system, automatic Pilot method and automatic identifying method
NL2031623B1 (en) Animal husbandry system
EP3075348B1 (en) Method and device for displaying a dairy cow which is probably limping and selected from a group
AT524715A1 (en) Method and device for tracking animals