GB2435975A - Selecting image data based on position and time information from moving subjects of interest - Google Patents

Selecting image data based on position and time information from moving subjects of interest

Info

Publication number
GB2435975A
Authority
GB
United Kingdom
Prior art keywords
data
subject
tracking
collecting means
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0604687A
Other versions
GB0604687D0 (en)
Inventor
Christopher Willoughby
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB0604687A priority Critical patent/GB2435975A/en
Publication of GB0604687D0 publication Critical patent/GB0604687D0/en
Publication of GB2435975A publication Critical patent/GB2435975A/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/11Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • G11B27/30Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on the same track as the main recording
    • G11B27/3027Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on the same track as the main recording used signal is digitally coded
    • G11B27/3036Time code signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A method of collating image data relating to a particular subject 2 comprises capturing image data from one or more devices 1, each device capturing image data from an associated region 8 of a geographical area, tracking the movement of the subject 2 within the geographical area to identify when the subject was within the region 8 associated with each image capture device 1, and extracting the data from each image capture device 1 that was collected during the period in which the subject 2 was within the associated region 8. There may be multiple subjects of interest and each subject may be provided with a tracking tag including a unique identification code. The tracking data may be used to process images of the subject to segment the image and extract image regions relating to the subject.

Description

METHOD FOR SELECTING DATA BASED ON POSITION AND TIME INFORMATION FROM MOVING SUBJECTS OF INTEREST
The present invention relates to methods of data filtration, segmentation and post-processing, in particular but not exclusively of video data.

It is possible to capture large amounts of data representing a wide variety of physical attributes. The viable storage capacity of many data storage formats has increased dramatically in recent years, and the equipment for capturing such data has become easier to obtain and operate. Data may be captured over extended periods of time without the need for human intervention after the initial installation of the data capture apparatus.

The task of identifying and retrieving the often very small amounts of data of interest from the large total data set can be difficult and time consuming. This problem is compounded when there are multiple locations from which the data is being captured and multiple subjects of interest that may pass through each data capture location multiple times. This task is often undertaken manually, with the entire data set being interrogated to find the specific data of interest. As the general trend is for the amounts of captured data to increase, the manual approach is becoming less viable in terms of time, cost and accuracy of retrieval of the required data.

In situations where the data is being captured by a device with a geographical area of data capture, for example but not exclusively video or photographic cameras, there are already some solutions to this problem. For example, it is common to use movement sensors, proximity sensors, pressure pads etc. to determine whether a subject of interest is within the field of view of a data capture device. These methods can be satisfactory in some situations, but there are problems with the existing approaches in some environments.

The primary problem is that most of the existing methods involve the permanent or semi-permanent fitment of additional hardware to the environment of the field of view. This is only viable where the field of view is confined to a relatively small controlled area, for example a room in a building. Considerable work would be required to instrument a larger environment such as a long street.

An additional problem is that most of the sensing hardware impacts upon the environment in which it is installed and hence interferes with the activity of the subject of interest; for example, it would be difficult to instrument a football pitch with pressure pads without affecting the surface of the pitch.

Another problem is that most of the existing solutions are not capable of identifying individual subjects of interest, making it difficult to review the data with respect to specific subjects of interest from a large group of potential subjects of interest.

Another problem with many of the existing solutions is that they will only determine whether a subject is within the field of view of the data capture device, and not specifically where the subject is within the field of view. This information may be useful for further data filtering, segmentation and post-processing.
According to a first aspect of the present invention there is provided a method of collating data relating to a particular subject comprising the steps of collecting data from at least one data collecting means, the or each data collecting means collecting data from an associated region of a geographical area, tracking the movement of the subject within the geographical area, analysing the tracking information relating to the subject to identify when the subject was within the region associated with the or each data collecting means, and directly extracting only the data from the or each data collecting means which was collected during the period in which the subject was within the region associated therewith.

The present invention further provides a system for collecting and collating data relating to a particular subject comprising at least one data collecting means, the or each data collecting means collecting data from an associated region of a geographical area, tracking means for tracking the geographical location of the subject as it moves within the geographical area, means for analysing the tracking information relating to the subject and identifying when the subject was within range of each data collecting means, and processing means for extracting from the data collected from each collecting means the data covering the period during which the subject was within range thereof.
It is possible to fit a subject of interest with a device that monitors the geographical location of the subject of interest. Global and local positioning systems are readily available that provide position information for the location of a receiving device.

It is also possible to define the field of view of a data capture device, such as but not exclusively a video recorder, by defining the data capture area using geographical position reference information. There are many methods for defining the geometry of an area, but provided the position of the subject of interest and the field of view of the data capture device are expressed in comparable formats and can be cross-referenced, the exact method is not important.

Given that the position of a subject of interest and the field of view of a data capture device would be known if instrumented as described above, it can be deduced whether the subject of interest is within the field of view of the data capture device by performing simple comparative logic on the two pieces of positional information.
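By way of illustration only (this sketch and its names are not part of the specification), such comparative logic can be realised as a ray-casting point-in-polygon test, with the field of view described by its geographical corner points:

```python
# Hypothetical sketch: ray-casting point-in-polygon test to decide whether a
# subject's (x, y) position lies inside a field of view described by its
# boundary corner points. Coordinate names and units are assumptions.

def in_field_of_view(position, boundary):
    """Return True if `position` (x, y) falls inside the polygon `boundary`,
    given as a list of (x, y) corner points in the same coordinate frame."""
    x, y = position
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        # Count crossings of a horizontal ray cast from the position.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Example: a rectangular field of view and one subject position.
fov = [(0.0, 0.0), (100.0, 0.0), (100.0, 50.0), (0.0, 50.0)]
print(in_field_of_view((40.0, 20.0), fov))  # True
```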
The comparison of the position of the subject of interest and the field of view can be carried out in a number of ways. In one example, the position information from the subject of interest is passed directly to the data capture device so that the data is tagged as being relevant to the subject of interest. In another example, the field of view information is passed to the subject of interest's position monitoring device, and an output is taken of when the subject of interest was within the field of view. In a further example, both the field of view information and the position information of the subject of interest are passed to a separate processing device that delivers an output of when the subject of interest is within the field of view.

Given that passing information between devices can be difficult, especially over larger distances, another example would be to record both the field of view data and the track data from the subject of interest and compare the two pieces of information at a later time, when the information is more easily transferred.

If the information is to be recorded and compared at a later time, then an enhancement would be to record the time and a unique identifier along with the position information from the subject of interest. Another enhancement in this situation would be to record the time and a unique identifier for the data captured from the field of view. The time in both cases should be recorded in a manner that allows the synchronisation of the position of the subject of interest and the data from the field of view. The unique identifier information is useful when there are multiple subjects of interest or multiple fields of view, or both.
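A minimal sketch of such record layouts, with illustrative field names assumed for this example only:

```python
# Hypothetical record layouts for the two data streams; the field names are
# illustrative only. Recording a shared time base and unique identifiers in
# both streams is what allows them to be cross-referenced later.
from dataclasses import dataclass

@dataclass
class TrackSample:
    subject_id: str   # unique identifier of the subject of interest
    timestamp: float  # seconds on the common time base
    x: float          # position in the shared coordinate frame
    y: float

@dataclass
class CaptureFrame:
    device_id: str    # unique identifier of the data capture device
    timestamp: float  # seconds on the same common time base
    data: bytes       # the captured data itself (e.g. a video frame)
```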
The comparison of the two sets of data described above would firstly compare the position of the subject of interest relative to the area of the field of view. The output of this comparison would be a set of times when the subject of interest was in the field of view of the specific data logger. These time frames would then be used to recall the data from the data logger with that field of view. If all of the data to be manipulated is digital, it would be possible to do this task computationally. The task would be repeated for all of the data loggers with a field of view for a given subject of interest, allowing any data for a given subject of interest from multiple data loggers to be collated together for easy interrogation.
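A sketch of this comparison stage, reusing the hypothetical record layouts and point-in-polygon test above, might read:

```python
# Hypothetical sketch of the comparison stage, reusing TrackSample,
# CaptureFrame and in_field_of_view() from the sketches above. It finds the
# times at which a subject's track falls inside each device's field of view
# and pulls the matching frames from that device's recording.

def times_in_view(track, boundary):
    """Timestamps at which the subject's logged positions lie inside `boundary`."""
    return [s.timestamp for s in track if in_field_of_view((s.x, s.y), boundary)]

def collate(track, recordings, fovs, tolerance=0.5):
    """Collect, across all devices, the frames captured while the subject was
    in view. `recordings` maps a device_id to its list of CaptureFrame
    records; `fovs` maps a device_id to its boundary polygon. `tolerance`
    (seconds) allows for track samples being coarser than the frame rate."""
    selected = []
    for device_id, frames in recordings.items():
        hits = times_in_view(track, fovs[device_id])
        for frame in frames:
            if any(abs(frame.timestamp - t) <= tolerance for t in hits):
                selected.append(frame)
    selected.sort(key=lambda f: f.timestamp)  # collate into one time-ordered set
    return selected
```

The same filter could simply be run once per subject identifier to produce the per-subject collections described above.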
It would also be possible to apply this method to a group of subjects of interest. The logic would interrogate the track data from a specified group of subjects of interest and filter the main body of data for occasions when any of the subjects of interest were within the field of view.

It would be possible to make further use of the position information of the subject of interest if required. For example, if the data for a specific area within the field of view were required, the data could be further filtered. Another example would be to filter the data based upon the direction or speed of motion of the subject of interest. A further example would be to determine the position of the subject of interest within a pictorial view, when the captured data is visual in nature, and produce an enlarged image by isolating the image local to the position of the subject of interest within the field of view. A method for achieving this is described in more detail in the illustrative embodiment.
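A sketch of the speed and direction filtering mentioned above, under the assumption that speed and heading are estimated from consecutive track samples:

```python
# Hypothetical sketch of filtering on the motion of the subject: estimate
# speed and heading between consecutive track samples and keep only the
# times at which the subject exceeds an assumed speed threshold.
import math

def moving_fast(track, min_speed):
    """Yield (timestamp, speed, heading_degrees) for each consecutive pair of
    TrackSample records whose implied speed exceeds `min_speed` (units of
    position per second). Heading is measured anticlockwise from the +x axis."""
    for a, b in zip(track, track[1:]):
        dt = b.timestamp - a.timestamp
        if dt <= 0:
            continue  # skip out-of-order or duplicated samples
        speed = math.hypot(b.x - a.x, b.y - a.y) / dt
        heading = math.degrees(math.atan2(b.y - a.y, b.x - a.x))
        if speed > min_speed:
            yield b.timestamp, speed, heading
```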
In order that the invention may be well understood, there will now be described an embodiment thereof, given by way of example, reference being made to the accompanying drawings, in which:

Figure 1 is a schematic illustration of a monitoring system of the invention;
Figure 2 is a system flow diagram of a method embodying the invention;
Figure 3 is a schematic illustration of a particular data collection region of the system of Figure 1;
Figure 4 is an illustration of an image obtained from the system of Figure 1; and
Figure 5 is an illustration of the image of Figure 4 after it has been processed.
Referring first to Figure 1, there is shown the general layout of an example to illustrate an embodiment of the invention. Video data is recorded from two vantage points by cameras A and B, 1, as two cars X and Y, 2, are driven down a road 13 whilst competing in an auto race. The field of view 8 of each of the cameras 1 is defined by the locations of the defining points of its boundary 9.

The competing cars 2 would each be fitted with a data logger that recorded the GPS position of the car and the time at which the car was at each of these GPS positions. This measurement would be taken at prescribed time intervals to allow the course of the car to be logged for the duration of the competition. A unique identifier would also be recorded in the data for each car so that the data could be referenced back to each car.

During the course of the auto competition the video footage would be recorded by each camera. As this data is being recorded, the time would also be recorded, allowing the camera footage to be referenced to the time at which it was captured.

Along with this data, a unique identifier for the camera from which the data was recorded would be incorporated in the data set.

After the auto competition has concluded, the desired output would be to find all of the video footage for each individual car.
Figure 2 illustrates the system flow diagram of how the data would be collected in this embodiment, and at what stage it would be processed. The video data, time reference and unique camera identifier data are recorded by the cameras 1. The data for all of the cameras would then be sent via a data link 3 and stored together in a data storage device 5.

The position, time and unique identifier information from the competing cars 2 would be sent via a data link 4 to a data storage device 6 at the conclusion of the auto race.

The information that describes the field of view of each of the cameras would be stored in a data storage device 8.

All of the data sets would be linked to a computational processor 7 capable of performing logic functions with the data sets described.

The first stage of the logic function would be to compare the GPS information for a given car against the field of view of each of the cameras as described by GPS positions.
The output of this process would be a range of times when the car was within the field of view of a given camera.

When the position of the car is found to be within the field of view of the camera, the time reference and the camera identifier are then used to retrieve the video footage from the data storage device 5. This process is continued through all of the GPS data for each car.
Once all of the relevant video footage is located for a given car, it can then be transferred via a data link 10 to discrete storage locations 11 and 12, so that it is possible for a viewer to see all of the footage for a given car. The car GPS data and time reference are also stored synchronously with the filtered video data.

This information can be further processed to filter the video data to enhance the view of the car within the viewed video frame, as described below.

Figure 3 shows how the car 2 could be positioned at an instant within the field of view 8 of the camera 1 as it is driven along the road 13. Given that the GPS position of the car is known, it is therefore known where the car is within the field of view of the camera. The position could be referenced, for example, by calculating the distance left or right of the field of view centreline 16 and the distance from the edge of the field of view closest to the camera 15.
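Assuming the field of view has been rotated into a camera-aligned frame, this referencing reduces to two subtractions (a sketch; the names are illustrative):

```python
# Hypothetical sketch of the referencing in Figure 3: express the car's
# ground position as a depth into the view (distance 15, measured from the
# field-of-view edge nearest the camera) and a signed offset from the
# centreline (distance 16). It assumes the field of view has already been
# rotated so that the camera looks along the +y axis.

def reference_in_fov(position, near_edge_y, centreline_x):
    """Return (depth, offset) for a ground position (x, y); offset is
    positive to the right of the centreline."""
    x, y = position
    depth = y - near_edge_y      # distance 15 in Figure 3
    offset = x - centreline_x    # distance 16 in Figure 3
    return depth, offset
```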
Figure 4 shows how the raw image would appear for the situation described in Figure 3. The defining points of the field of view of the camera, Figure 3 items 9, are in this instance directly related to the corners of the image, Figure 4 items 20.

The car 2 would appear in the top right of the picture, as it is distant and to the right in the field of view of the camera. In Figure 4 the dimension 19 is related to Figure 3 dimension 15 by simple rules of perspective. This is also the case for Figure 4 dimension 18 and Figure 3 dimension 16. This may not hold true for uneven ground, whereby a distortion of the perspective rules may be required to correctly associate the geographical position with a position in the frame of video data.

It may be desirable to enlarge the view of the car in Figure 4, as its image will be small as a result of it being distant in the field of view. This is possible with the data recorded, as the centre for enlargement is known from the dimensions 18 and 19. The size of the enlargement box 14 can also be calculated, as the depth of the position of the car within the field of view is known from dimension 15. Again, simple rules of perspective allow this to be calculated.
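A sketch of these perspective rules under assumed camera parameters (the focal length, mounting height and box size below are placeholders, not values from the specification):

```python
# Hypothetical sketch of the perspective rules: locate the car in the frame
# and size the enlargement box 14 from its ground position, using a pin-hole
# model over flat, level ground.

def enlargement_box(depth, offset, f_px=800.0, cam_height=5.0,
                    cx=640.0, horizon_row=200.0, box_metres=6.0):
    """Return (column, row, side_px) of a square crop centred on a subject
    `depth` metres into the view and `offset` metres from the centreline.
    Under perspective, the lateral displacement, the drop below the horizon
    and the apparent size all scale as 1/depth."""
    col = cx + f_px * offset / depth
    row = horizon_row + f_px * cam_height / depth
    side = f_px * box_metres / depth
    return col, row, side

def crop(frame, col, row, side):
    """Cut the enlargement box out of a frame held as a 2-D list of pixel
    rows, clamping the box to the frame edges."""
    half = int(side / 2)
    top = max(int(row) - half, 0)
    left = max(int(col) - half, 0)
    return [r[left:left + 2 * half] for r in frame[top:top + 2 * half]]
```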
After the image data bounded by 14 is filtered from the main body of the video, the image can be presented as shown in Figure 5. This method can be applied for the duration of the video footage.

Claims (1)

CLAIMS:

1. A method of collating data relating to a particular subject comprising the steps of collecting data from at least one data collecting means, the or each data collecting means collecting data from an associated region of a geographical area, tracking the movement of the subject within the geographical area, analysing the tracking information relating to the subject to identify when the subject was within the region associated with the or each data collecting means, and directly extracting only the data from the or each data collecting means which was collected during the period in which the subject was within the region associated therewith.
2. A method according to claim 1, wherein data is collected from a plurality of data collecting means, each data collecting means being associated with a different region of the geographical area.

3. A method according to claim 1 or claim 2, wherein the step of tracking the subject comprises providing the subject with a tracking tag and tracking the movement of the tag within the geographical area.

4. A method according to claim 3, wherein the tracking tag detects and records its position, the recorded information being downloaded when the subject leaves the geographical area and analysed to identify when the subject was located within a region associated with the or each data collecting means.

5. A method according to claim 3 or claim 4, wherein the tracking tag uses a local positioning system or global positioning system to track its location within the geographical region.

6. A method according to claim 3, wherein the tracking tag includes transponder means, the or each data collecting means including receiver means which detects the presence of the tracking tag in its associated region.

7. A method according to claim 6, wherein upon detecting the presence of a tracking tag within its associated region, the data collecting means tags the data while it is collected to identify it as relating to the subject.

8. A method according to claim 6 or claim 7, wherein the transponder means includes a unique identification code, so as to enable data to be collected and collated for a plurality of subjects at the same time.

9. A method according to claim 6, wherein the data collecting means includes transmitting means for simultaneously transmitting the collected data, and said tracking tag includes data storage means, the data collecting means, upon detecting the presence of the tracking tag within its associated region, transmitting the collected data to the tracking tag, the tracking tag storing the transmitted data in its storage means.

10. A method according to any of the preceding claims, wherein the or each data collecting means and the tracking information are synchronised to a common time base.

11. A method according to any of the preceding claims, further comprising the step of locating and/or tracking the position and/or motion of the subject within the or each associated region of the geographical area.

12. A method according to claim 11, wherein the step of locating and/or tracking the subject within the or each associated region is achieved using the tracking information.

13. A method according to claim 11, wherein the subject is located and/or tracked by analysing the data collected when the subject was in each associated region and identifying the data relating to the subject within the collected data.

14. A method according to any of the preceding claims, wherein the data collecting means comprise video data collection means.

15. A method according to claim 14, further comprising the step of identifying the subject within the field of view of each video data collection means and controlling the video data collection means to track the subject, in particular panning and/or zooming the video data collection means.

16. A method according to claim 14 or claim 15, further comprising the step of filtering the collected data based on the direction and/or speed of the subject.
17. A system for collecting and collating data relating to a particular subject comprising at least one data collecting means, the or each data collecting means collecting data from an associated region of a geographical area, tracking means for tracking the geographical location of the subject as it moves within the geographical area, means for analysing the tracking information relating to the subject and identifying when the subject was within range of each data collecting means, and processing means for extracting from the data collected from each collecting means the data covering the period during which the subject was within range thereof.

18. A system according to claim 17, wherein the tracking means includes a transponder.
19. A system according to claim 17 or claim 18, wherein the tracking means includes data storage means.

20. A system according to any of claims 17 to 19, wherein the tracking means uses a global positioning system or a local positioning system.

21. A system according to any of claims 17 to 20, wherein the tracking means includes a unique identification code.

22. A system according to any of claims 17 to 21, wherein the or each data collecting means is video data collecting means.

23. A system according to any of claims 17 to 22, comprising a plurality of data collecting means, each of which collects data relating to a different region of the geographical area.
24. A method substantially as herein described with reference to the accompanying drawings.

25. A system substantially as herein described with reference to the accompanying drawings.
GB0604687A 2006-03-08 2006-03-08 Selecting image data based on position and time information from moving subjects of interest Withdrawn GB2435975A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0604687A GB2435975A (en) 2006-03-08 2006-03-08 Selecting image data based on position and time information from moving subjects of interest

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0604687A GB2435975A (en) 2006-03-08 2006-03-08 Selecting image data based on position and time information from moving subjects of interest

Publications (2)

Publication Number Publication Date
GB0604687D0 GB0604687D0 (en) 2006-04-19
GB2435975A true GB2435975A (en) 2007-09-12

Family

ID=36241229

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0604687A Withdrawn GB2435975A (en) 2006-03-08 2006-03-08 Selecting image data based on position and time information from moving subjects of interest

Country Status (1)

Country Link
GB (1) GB2435975A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020001468A1 (en) * 2000-07-03 2002-01-03 Fuji Photo Film Co., Ltd. Image collecting system and method thereof
US20030103149A1 (en) * 2001-09-28 2003-06-05 Fuji Photo Film Co., Ltd. Image identifying apparatus and method, order processing apparatus, and photographing system and method
US20040153970A1 (en) * 2002-11-20 2004-08-05 Sony Corporation Picture production system, and picture production apparatus and method
JP2005286377A (en) * 2004-03-26 2005-10-13 Fuji Photo Film Co Ltd Scene extraction system and scene extraction method

Also Published As

Publication number Publication date
GB0604687D0 (en) 2006-04-19

Similar Documents

Publication Publication Date Title
CN101918989B (en) Video surveillance system with object tracking and retrieval
US10152858B2 (en) Systems, apparatuses and methods for triggering actions based on data capture and characterization
EP2923487B1 (en) Method and system for metadata extraction from master-slave cameras tracking system
US8953044B2 (en) Multi-resolution video analysis and key feature preserving video reduction strategy for (real-time) vehicle tracking and speed enforcement systems
TWI416068B (en) Object tracking method and apparatus for a non-overlapping-sensor network
US8879786B2 (en) Method for detecting and/or tracking objects in motion in a scene under surveillance that has interfering factors; apparatus; and computer program
KR101645959B1 (en) The Apparatus and Method for Tracking Objects Based on Multiple Overhead Cameras and a Site Map
JP6013923B2 (en) System and method for browsing and searching for video episodes
KR101492180B1 (en) Video analysis
CN101277429A (en) Method and system for amalgamation process and display of multipath video information when monitoring
JP3466169B2 (en) Management system for roads and surrounding facilities
CN106446002A (en) Moving target-based video retrieval method for track in map
WO2020183345A1 (en) A monitoring and recording system
TWI430664B (en) Intelligent Image Monitoring System Object Track Tracking System
CN113256731A (en) Target detection method and device based on monocular vision
CN112836683A (en) License plate recognition method, device, equipment and medium for portable camera equipment
KR100885418B1 (en) System and method for detecting and tracking people from overhead camera video
Lin et al. Moving camera analytics: Emerging scenarios, challenges, and applications
KR101161557B1 (en) The apparatus and method of moving object tracking with shadow removal moudule in camera position and time
CN115019241B (en) Pedestrian identification and tracking method and device, readable storage medium and equipment
GB2435975A (en) Selecting image data based on position and time information from moving subjects of interest
US7869621B1 (en) Method and apparatus for interpreting images in temporal or spatial domains
JP7102383B2 (en) Road surface image management system and its road surface image management method
US20100202688A1 (en) Device for segmenting an object in an image, video surveillance system, method and computer program
Lin et al. Accurate coverage summarization of UAV videos

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)