CN101170683A - Target moving object tracking device - Google Patents

Target moving object tracking device

Info

Publication number: CN101170683A (application number CNA2007101425988A)
Authority: CN (China)
Legal status: Granted; Expired - Fee Related
Other versions: CN101170683B (en)
Inventor: 古川聪
Current Assignee: Panasonic Electric Works Co Ltd
Original Assignee: Matsushita Electric Works Ltd
Priority claimed from JP2006293079A (external-priority patent JP4725490B2) and JP2007110915A (external-priority patent JP4867771B2)
Application filed by Matsushita Electric Works Ltd

Landscapes

  • Studio Devices (AREA)
  • Image Analysis (AREA)
Abstract

A target moving object tracking device takes a time series of picture images of a target moving object and tracks the movement of the moving object in the picture images so as to display an enlarged view of the moving object. The device includes a template memory storing a template image which is compared with each of the time-series outline images derived from the picture images in order to determine, for each outline image, a partial area matching the template image and to extract that partial area as a moving object outline image. The template image is constantly updated by being replaced with a combination of the previous moving object outline images so as to accurately reflect the moving object.

Description

Target moving object tracking device
Technical field
The present invention relates to a target moving object tracking device, and more particularly to a device for tracking a possible intruder by means of a video camera.
Background art
There is an increasing need to track and identify human activity by means of a video camera in a restricted zone around a door or the like. To this end, prior-art tracking devices are typically configured to determine a moving object region in each of a time series of monitoring images by monitoring differences between the images, and to magnify the moving object region so determined. However, the moving object region becomes smaller as the moving object moves away, which results in poor resolution of the enlarged image of the moving object region. To address this problem, the paper "Human Tracking Using Temporal Averaging Silhouette with an Active Camera" (Transactions of the Institute of Electronics, Information and Communication Engineers, ISSN 0915-1923, vol. J88-D-II, No. 2, pp. 291-301, published February 1, 2005) proposes another scheme for determining the moving object region. The paper proposes determining the moving object region based on motion vectors (optical flow) obtained at selected points in each of successive monitoring images. First, the scheme obtains a moving object detected by background differencing and a detection frame surrounding the detected object. Then, for each of two consecutive images, the scheme obtains motion vectors for regions selected respectively inside and outside the detection frame, distinguishes the moving object region from the background by analyzing the motion vectors, and extracts the outline of the moving object in the current image so as to determine the shape and center of the moving object. Although this scheme is found effective in an environment where only one moving object is expected, it is rather difficult to discern the moving object when more than one moving object is expected in the field of view of the camera. To alleviate this shortcoming, it may be thought effective to rely on a template by which the target moving object can be distinguished from the other moving objects. However, since the outline of a moving object is defined as a set of parts having the same motion vector, the exact shape of the moving object is rather difficult to extract. Therefore, even with the addition of a template, the above scheme remains unsatisfactory for achieving a reliable determination of the moving object.
Summary of the invention
In view of the above problem, the present invention has been made so as to provide a target moving object tracking device capable of determining the target moving object with improved accuracy for identification of the target moving object.
The device in accordance with the present invention includes: a picture image memory (20) configured to store a time series of real picture images taken by a video camera (10) with regard to an observation zone which may cover a target moving object; and a display (30) configured to show a selected one or more of the real picture images at a desired magnification ratio. Also included in the device are an outline image processor (40) configured to provide outline images respectively from the real picture images, and a template memory (60) configured to store a template image identifying the target moving object. The device further includes a moving object locator (70) configured to compare each of the outline images in sequence with the template image so as to detect, in each outline image, a partial area matching the template image, and to obtain, based on the partial area detected as matching the template image, position data of the target moving object within the observation zone. Included in the device is an enlarged picture generator (80) which, based on the position data, extracts an enlarged image from the portion of the real picture image corresponding to the partial area of the outline image detected as matching the template image, and shows the enlarged picture view on the display. The present invention is characterized in that the moving object locator (70) is configured to extract a moving object outline image from each of the outline images corresponding to the partial areas detected as matching the template image, and in that a template updater (62) is provided to update the template image by replacing it with a combination of a current one of the moving object outline images with one or more previous ones of the moving object outline images. With this configuration, the template image is constantly updated so as to well reflect the current and previous outline images once they are detected as matching the template image. Thus, the outlines of those parts of the human body whose shapes are relatively invariant, such as the head or shoulders (i.e., parts less liable to shape fluctuation during movement of the human body than other parts such as the arms or legs), can be accumulated and weighted in the template image so as to give a solid basis for reliable recognition of the moving object. In addition, any small omission of a part of the moving object in one of the moving object outline images can be compensated by another of the moving object outline images, making the template image as close as possible to the target moving object, which results in an accurate determination of the target moving object based on the comparison between the outline images and the template image.
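The template update described above can be sketched as a pixel-wise combination of the current and previous moving object outline images. The following is a minimal numpy sketch assuming the "combination" is a plain mean, which is one possible choice; the patent does not fix a specific formula here.

```python
import numpy as np

def update_template(current_outline, previous_outlines):
    """Replace the template with a combination of the current moving object
    outline image and one or more previous ones (here: a pixel-wise mean,
    one illustrative reading of 'combination')."""
    stack = np.stack([current_outline] + list(previous_outlines)).astype(float)
    # Pixels of shape-stable body parts (head, shoulders) recur in most
    # outlines and therefore accumulate a high weight in the averaged template.
    return stack.mean(axis=0)

# Toy 3x3 outlines: the top row (a "head") is present in every frame,
# while a transient pixel appears in only one frame.
a = np.array([[1, 1, 1], [0, 0, 0], [0, 0, 0]])
b = np.array([[1, 1, 1], [0, 1, 0], [0, 0, 0]])
template = update_template(b, [a])
```

Recurring outline pixels keep full weight (1.0) while the transient pixel is attenuated to 0.5, which is the accumulation-and-weighting effect the text attributes to the updated template.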
Preferably, the outline image processor is configured to provide the outline images defined by binary data so as to reduce the memory requirement for storing the outline images when realizing the device.
Alternatively, the outline image processor may be configured to provide the outline images defined by discrete gray-level data, enabling a more accurate comparison with the template image and a more accurate template image when the device is realized with sufficient memory.
In this respect, the outline image processor may be configured to obtain the contrast of the template image, to provide the outline images defined by binary data when the contrast exceeds a predetermined reference, and to provide the outline images defined by gray-level data when the contrast is below the reference. Thus, the device can operate optimally depending on the contrast of the constantly updated template image, achieving consistent detection of the target moving object.
In order to determine the contrast of the template image, the outline image processor preferably detects average pixel values, each being the mean of the pixel values assigned to the pixels in one of a plurality of divisions of the template image, and judges the contrast to be below the reference when any one of the divisions is detected to have an average pixel value below a threshold, or when the average pixel value detected for one of the divisions is lower than that detected for another of the divisions by more than a predetermined extent.
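The contrast test above can be sketched directly. In this hedged sketch the number of divisions and the two limits (`low_thresh`, `max_gap`) are illustrative parameters, not values from the patent, and the divisions are taken as horizontal bands for simplicity.

```python
import numpy as np

def contrast_below_reference(template, n_divisions=2, low_thresh=0.2, max_gap=0.5):
    """Judge the template contrast to be below the reference when any
    division's average pixel value falls below a threshold, or when one
    division's average undercuts another's by more than a given extent."""
    divisions = np.array_split(template.astype(float), n_divisions, axis=0)
    means = [d.mean() for d in divisions]
    if min(means) < low_thresh:          # a division is nearly empty
        return True
    return (max(means) - min(means)) > max_gap  # divisions differ too much

dark = np.array([[0.9, 0.9], [0.0, 0.1]])   # lower division nearly empty
even = np.array([[0.6, 0.7], [0.5, 0.6]])   # balanced divisions
```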
When providing the outline images of binary data, the outline image processor preferably relies on a variable threshold for converting the real picture images into the binary-data outline images, and obtains the average gray level of the template image so as to lower the threshold when the average gray level falls below a predetermined limit. Thus, the moving object outline image can be successfully detected even when the contrast of the template image decreases.
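The variable-threshold binarization can be sketched as follows; all numeric values (base threshold, limit, reduced threshold) are illustrative assumptions, and the rule shown is a simple two-level version of the "lower the threshold when the template darkens" behaviour described above.

```python
import numpy as np

def binarize_with_variable_threshold(picture, template, base_thresh=128,
                                     low_limit=100, reduced_thresh=64):
    """Convert a real picture image to a binary outline image, lowering the
    binarization threshold when the template's average gray level drops
    below a limit."""
    thresh = reduced_thresh if template.mean() < low_limit else base_thresh
    return (picture >= thresh).astype(np.uint8), thresh

pic = np.array([[200, 90], [50, 10]], dtype=np.uint8)
bright_t = np.full((2, 2), 150, dtype=np.uint8)   # high-contrast template
dark_t = np.full((2, 2), 40, dtype=np.uint8)      # faded template

out_bright, t_bright = binarize_with_variable_threshold(pic, bright_t)
out_dark, t_dark = binarize_with_variable_threshold(pic, dark_t)
```

With the faded template the weaker edge pixel (value 90) still survives binarization, which is the intended effect.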
Preferably, a moving object outline image memory is provided to store a time series of the moving object outline images. In this respect, the template updater is configured to read a predetermined number of previous moving object outline images from the moving object outline image memory, combine these outline images with the current moving object outline image, and update the template image by replacing the previous template image with the combination. By virtue of this selective combination of the outline images, the template image can be suitably weighted for successful detection of the target moving object.
One preferred weighting scheme is realized in the template updater, which updates the template image each time a new consecutive group of moving object outline images accumulates to a predetermined number.
Another weighting scheme may be realized in the template updater, which combines only those moving object outline images determined to be valid according to a predetermined criterion, so that the moving object is detected with improved accuracy.
To this end, the template updater is configured to calculate a pixel index, which is the number of pixels having a pixel value greater than zero included in each of the moving object outline images. The criterion is defined such that the current moving object outline image is determined to be valid when the difference between the pixel index of the current moving object outline image and that of the previous moving object outline image is greater than a predetermined extent.
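The pixel-index criterion can be sketched as below. The sketch follows the text literally in treating a difference *greater* than the extent as valid; the extent value is an illustrative assumption.

```python
import numpy as np

def pixel_index(outline):
    """Number of pixels with a value greater than zero."""
    return int((outline > 0).sum())

def is_valid(current, previous, extent=3):
    """Validity criterion on the pixel index, taken as stated in the text:
    the current outline is treated as valid when its pixel index differs
    from the previous one's by more than a predetermined extent."""
    return abs(pixel_index(current) - pixel_index(previous)) > extent

prev = np.zeros((3, 3))
cur = np.zeros((3, 3)); cur[0, :] = 1; cur[1, 0] = 1   # pixel index 4
near = np.zeros((3, 3)); near[0, 0] = 1; near[0, 1] = 1  # pixel index 2
```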
Alternatively, the criterion may be defined differently. In this case, the template updater is configured to calculate the standard deviation of the pixel values of the one of the real picture images corresponding to the current moving object outline image. The criterion is defined such that the current moving object outline image is determined to be valid when the difference between this standard deviation and that calculated for the previous moving object outline image is greater than a predetermined extent.
Further, the criterion may be defined in terms of the number of pixels constituting the moving object outline in each of the moving object outline images. In this case, the template updater is configured to calculate the number of such pixels, and the criterion determines the current moving object outline image to be valid when the difference between the pixel count of the current moving object outline image and that of the previous moving object outline image is greater than a predetermined extent.
The present invention also proposes the use of a matcher (71) for successfully determining the target moving object with reference to the template image. The matcher (71) is configured to: collect different partial areas, i.e., unit areas, from the outline image, each unit area having the same size as the template image; calculate a correlation value for each of the different areas; and determine the partial area having the maximum correlation value as the moving object outline image matching the template image. In response to the determination of the moving object outline image, the template updater operates to obtain the pixel value of each pixel in the moving object outline image and add the pixel value to the corresponding one of the pixels in the previous moving object outline image, thereby providing the updated template image.
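The matcher's scan over template-sized unit areas can be sketched as an exhaustive sliding-window search. The score used here is a plain sum of products, which is one of the correlation definitions the following paragraphs allow; the patent leaves the exact definition open.

```python
import numpy as np

def best_match(outline, template):
    """Scan every template-sized partial area (unit area) of the outline
    image, score each by a correlation value (here a sum of products),
    and return the top-left corner of the best-scoring area."""
    th, tw = template.shape
    H, W = outline.shape
    best, best_pos = -np.inf, None
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            patch = outline[y:y + th, x:x + tw].astype(float)
            score = float((patch * template).sum())
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

outline = np.zeros((5, 5))
outline[2:4, 2:4] = 1          # object in the lower-right quadrant
template = np.ones((2, 2))
```

The returned position is the location of the moving object outline image within the frame, i.e., the position data the moving object locator stores.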
The matcher (71) is configured to provide the above correlation value, which may be defined in different ways. For example, the correlation value may be defined as the sum, or the power sum, of the pixel values obtained from the template image at pixels corresponding to the pixels of the outline in each of the partial areas selected to constitute the outline image.
Further, the correlation value may be suitably weighted so as to improve the accuracy of determining the target moving object. In this case, the outline image processor is configured to provide the outline images of binary data in which a pixel value of "1" is assigned to the pixels constituting the outline of the outline image and a pixel value of "0" is assigned to the remaining pixels. The matcher (71) is configured to select, from the template image, the pixels corresponding to the pixels of the outline in each of the partial areas constituting the outline image, and to obtain, for each selected pixel, the number of surrounding pixels having a pixel value greater than "0", so as to weight the pixel value of each selected pixel according to the number of pixels so obtained. The matcher (71) is further configured to define the correlation value as the sum of the thus weighted pixel values of the selected pixels in the template image.
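A minimal sketch of this neighbour-weighted correlation follows, assuming an 8-neighbourhood for "surrounding pixels" (the patent does not specify the neighbourhood shape).

```python
import numpy as np

def weighted_correlation(area, template):
    """For each outline pixel (value '1') of the binary partial area, take the
    template pixel at the same position, weight its value by the count of its
    8-neighbours in the template that are greater than '0', and sum the
    weighted values."""
    H, W = template.shape
    total = 0.0
    for y in range(H):
        for x in range(W):
            if area[y, x] != 1:
                continue
            y0, y1 = max(0, y - 1), min(H, y + 2)
            x0, x1 = max(0, x - 1), min(W, x + 2)
            neigh = template[y0:y1, x0:x1]
            # count positive pixels around (y, x), excluding the pixel itself
            n_positive = int((neigh > 0).sum()) - int(template[y, x] > 0)
            total += float(template[y, x]) * n_positive
    return total

area = np.array([[1, 0], [0, 0]])
template = np.array([[1, 1], [0, 0]])
```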
The correlation value may be given different weights. For example, the matcher (71) is configured to obtain: a first number of pixels in each of the partial areas satisfying the condition that the pixel in the partial area and the corresponding pixel in the template image both have a pixel value of "1" or greater; and a second number of pixels in the template image having a pixel value of "1" or greater. The matcher (71) then defines the above correlation value for each of the partial areas as the ratio of the first number to the second number.
The matcher (71) may be configured to obtain: a first number of pixels in each of the partial areas satisfying the condition that the pixel in the partial area and the corresponding pixel in the template image both have a pixel value of "1" or greater; a second number of pixels in each of the partial areas satisfying the condition that the pixel in the partial area and the corresponding pixel in the template image both have a pixel value of "0"; and a third number of pixels in the template image having a pixel value of "1" or greater. In this case, for each of the partial areas, the correlation value is defined as the ratio of the first number plus the second number to the third number.
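The second ratio definition can be sketched directly with boolean masks; dropping the `both_off` term recovers the simpler variant of the preceding paragraph.

```python
import numpy as np

def ratio_correlation(area, template):
    """Ratio-style correlation: pixels where both the partial area and the
    template are '1' or greater (first number), plus pixels where both are
    '0' (second number), divided by the count of template pixels that are
    '1' or greater (third number)."""
    both_on = int(((area >= 1) & (template >= 1)).sum())
    both_off = int(((area == 0) & (template == 0)).sum())
    template_on = int((template >= 1).sum())
    return (both_on + both_off) / template_on

area = np.array([[1, 0], [0, 1]])
template = np.array([[1, 0], [1, 1]])
```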
Further, the matcher (71) may be configured to obtain a maximum pixel value from the set of pixels arranged around a selected pixel of the template image, the selected pixel corresponding to each of the pixels of the outline in each of the partial areas constituting the outline image. In this case, the matcher (71) defines the correlation value as the sum of the maximum values obtained respectively for the partial area.
Furthermore, the matcher (71) may be configured to obtain various parameters for defining the correlation value based thereon. The parameters include: a first row index, which is the number of pixels having a pixel value greater than "0" arranged in each row of each partial area of the outline image; a first column index, which is the number of such pixels arranged in each column of each partial area of the outline image; a second row index, which is the number of such pixels arranged in each row of the template image; and a second column index, which is the number of such pixels arranged in each column of the template image. A row difference is obtained as the difference between the first row index and the second row index, and a column difference is obtained as the difference between the first column index and the second column index. The matcher (71) then obtains a total row value, which is the sum of the row differences obtained for the respective rows, and a total column value, which is the sum of the column differences obtained for the respective columns, so as to define, for each of the partial areas, the correlation value as the reciprocal of the total row value and the total column value.
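A sketch of the projection-based correlation follows. Two points are assumptions: the row/column differences are taken as absolute differences, and "the reciprocal of the total row value and the total column value" is read as the reciprocal of their sum (with +1 added to avoid division by zero on a perfect match).

```python
import numpy as np

def projection_correlation(area, template):
    """Correlation from row/column projections: count pixels > 0 per row and
    per column of the partial area and of the template, sum the absolute
    row-wise and column-wise differences, and return the reciprocal of the
    total, so that a closer projection match yields a larger value."""
    area_rows = (area > 0).sum(axis=1)
    area_cols = (area > 0).sum(axis=0)
    tmpl_rows = (template > 0).sum(axis=1)
    tmpl_cols = (template > 0).sum(axis=0)
    total = np.abs(area_rows - tmpl_rows).sum() + np.abs(area_cols - tmpl_cols).sum()
    return 1.0 / (1.0 + float(total))   # +1 guards the perfect-match case

a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 1]])
```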
The present invention further proposes restricting the outline image to a limited search region so as to detect the target moving object with a reduced detection time. To this end, the device includes a position estimator for estimating the limited search region used for detection of the target moving object in the outline image. In this respect, the moving object extractor is configured to detect at least one possible moving object based on time-related differences between two or more consecutive outline images, and to provide at least one cover segment of reduced size covering the moving object. The position estimator is configured to: obtain the time series of the position data stored in the position data memory whenever the moving object locator provides the position information; calculate an estimated position of the target moving object based on two or more consecutive time-series data of the position information; set a detection area of a predetermined size around the estimated position; and provide the limited search region, which is the minimum area that overlaps the detection area and includes the at least one cover segment. As a result, the moving object locator is configured to select partial areas only within the limited search region, thereby reducing the time required to determine the target moving object.
The position estimator may be configured to calculate an estimated moving speed of the moving object based on two or more consecutive time-series data of the position information, and to provide the detection area with a size proportional to the estimated speed of the moving object.
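The position and speed estimation can be sketched as a linear extrapolation from two consecutive position data points; the base size and proportionality constant of the detection area are illustrative assumptions.

```python
def estimate_position(p_prev, p_curr):
    """Linearly extrapolate the next position from two consecutive (row, col)
    positions, and size the detection area's half-width in proportion to the
    estimated speed (faster targets get a larger detection area)."""
    vy = p_curr[0] - p_prev[0]
    vx = p_curr[1] - p_prev[1]
    est = (p_curr[0] + vy, p_curr[1] + vx)
    speed = (vy ** 2 + vx ** 2) ** 0.5
    half = int(8 + 2 * speed)          # illustrative base size and scale
    return est, half

est, half = estimate_position((10, 10), (12, 14))
```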
At the position estimator, the detection area may be determined to have a size which is a function of the size of the template image.
In order to further restrict the limited search region, the position estimator may be configured to: obtain a row index, which is the number of pixels having a pixel value of "1" or greater arranged along each row of the limited search region; select a group of consecutive rows each having a row index greater than a predetermined row threshold; obtain a column index, which is the number of pixels having a pixel value of "1" or greater arranged along each column of the limited search region; select a group of consecutive columns each having a column index greater than a predetermined column threshold; and restrict the limited search region to the zone bounded by the selected consecutive row group and the selected consecutive column group.
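The projection-based restriction can be sketched as follows. This simplified sketch takes the bounding span of all qualifying rows and columns rather than handling several separate consecutive groups (the multi-group case is treated in the paragraphs below); the thresholds are illustrative.

```python
import numpy as np

def restrict_region(region, row_thresh=1, col_thresh=1):
    """Restrict a limited search region using projections: keep the rows whose
    count of pixels >= 1 exceeds a row threshold, likewise for columns, and
    return the bounding (start, stop) slice indices for rows and columns."""
    def span(index, thresh):
        keep = np.flatnonzero(index > thresh)
        return (int(keep[0]), int(keep[-1]) + 1) if keep.size else (0, 0)

    row_index = (region >= 1).sum(axis=1)
    col_index = (region >= 1).sum(axis=0)
    return span(row_index, row_thresh), span(col_index, col_thresh)

region = np.zeros((5, 5), dtype=int)
region[1:3, 2:5] = 1   # a 2x3 blob of outline pixels
```

The moving object locator then needs to scan unit areas only inside the returned row/column bounds instead of the whole outline image.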
In this respect, the position estimator may be configured to validate, when two or more consecutive row groups are selected, only the one of the groups closer to the estimated position of the target moving object, and likewise to validate, when two or more consecutive column groups are selected, only the one of the groups closer to the estimated position of the target moving object.
Another restriction of the limited search region is proposed, in which the position estimator may be configured to: obtain a row index, which is the number of pixels having a pixel value of "1" or greater arranged along each row of the limited search region; and select at least one consecutive row group each row of which has a row index greater than a predetermined row threshold. When two or more consecutive row groups are selected, only the one closer to the estimated position of the target moving object is validated. Subsequently, the column index, which is the number of pixels having a pixel value of "1" or greater arranged along each column of the limited search region, is calculated only for the range delimited by the validated consecutive row group. Then, a consecutive column group each column of which has a column index greater than a predetermined column threshold is selected, so that the position estimator further restricts the limited search region to the zone bounded by the selected consecutive column group and the validated row group. This scheme is advantageous in that the amount of calculation for determining the moving object is further reduced.
Alternatively, the position estimator may be configured to first analyze the column indices so as to validate one of the separate consecutive column groups, and to select the consecutive row group only with reference to the validated column group for further restricting the limited search region.
These and other advantageous features of the present invention will become more apparent from the following description of the preferred embodiment when taken in conjunction with the accompanying drawings.
Description of drawings
Fig. 1 is a block diagram of a target moving object tracking device in accordance with a preferred embodiment of the present invention;
Fig. 2 is a schematic diagram illustrating how the moving object is tracked from consecutive outline images of the moving object with reference to the template image;
Fig. 3 illustrates a time series of the outline images;
Fig. 4 illustrates the template image;
Fig. 5 is a flow chart illustrating a basic operation of the device;
Fig. 6 is a schematic diagram illustrating a moving object outline image extracted from the outline image;
Fig. 7 is a schematic diagram illustrating the template image compared with the moving object outline image of Fig. 6;
Figs. 8A to 8C are schematic diagrams each illustrating a scheme of weighting the template image depending on the arrangement of pixel values;
Figs. 9A and 9B are schematic diagrams respectively illustrating a portion of a partial area of the outline image and the corresponding portion of the template image;
Figs. 10A and 10B are schematic diagrams respectively illustrating a portion of a partial area of the outline image and the corresponding portion of the template image;
Fig. 11 is a graphical representation of a pixel value distribution calculated for a partial area of the outline image, to be compared with the template image with regard to the correlation between the pixel value distributions;
Fig. 12 is a graphical representation of the pixel value distribution calculated for the template image;
Figs. 13 to 15 illustrate respective schemes of providing the limited search region in the outline image;
Fig. 16 is an explanatory view illustrating how the moving object is determined within the limited search region; and
Figs. 17 and 18 are explanatory views respectively illustrating how the limited search region is further restricted in the presence of a plurality of moving objects.
Embodiment
Referring to Fig. 1, there is shown a target moving object tracking device in accordance with a preferred embodiment of the present invention. The device is utilized as a surveillance system for detecting an intruder, i.e., a target moving object, tracking its movement, and showing the intruder in an enlarged view for identification of the intruder.
The device includes a video camera 10 covering an observation zone to take consecutive pictures, which are converted by an A/D converter 12 into time-series digital data of real picture images P covering the entire field of view, as shown in Fig. 2, and are stored in a picture image memory 20. The device includes a display 30 capable of showing the current picture image as well as an enlarged view of a selected portion of the current image for identifying the target moving object, as will be discussed later. Included in the device is an outline image processor 40 for generating an outline image from each of the real picture images in the memory 20 and storing the time-series data of the resulting outline images in an outline image memory 42. The outline images take the form of either binary image data or gray-level data. A moving object extractor 50 is provided to extract, for each outline image, a partial area, i.e., a unit area surrounding the moving object, the details of which will be discussed later. Initially, the partial area is fed to and stored in a template memory 60 as a transitional template image, which is subsequently compared with an outline image at a moving object locator 70 in order to locate the target moving object within the frame of that outline image. As will be discussed later, the device includes a template updater 62 for regularly updating the template image T by replacing it with a combination of a selected number of the partial areas subsequently determined at the moving object locator 70 to include the target moving object. Such a partial area of the outline image determined to include the target moving object is hereinafter referred to as a moving object outline image. To this end, a moving object outline image memory 72 is provided to store the time-series data of moving object outline images MO1 to MO6, which are illustrated by way of example in Fig. 2 in comparison with the real picture image P and the template image T, and are also shown in Fig. 3. It is noted in this respect that the updated template image T is defined as a gray-scale image having varying pixel values.
When the moving object extractor 50 extracts an original template image which includes two or more parts each determined to be a possible moving object, the moving object locator 70 is made to compare a predetermined number of consecutive outline images for each of the candidate parts so as to determine a reliable part appearing continuously in these outline images, whereupon the template updater 62 designates the reliable part as the original template image.
The moving object locator 70 is configured to obtain position data for the moving object located within the frame of the current outline image, and to send the position data to a position data memory 74. The position data is constantly read by an enlarged picture generator 80, which responds by reading the current picture image, selecting a portion thereof, and generating an enlarged picture image of that portion at a predetermined magnification ratio for showing the enlarged picture image on the display 30, thereby notifying an administrator of the target moving object in a magnified view.
The device further includes a position estimator 76 which calculates an estimated position of the target moving object based on two or more consecutive time-series data of the position data, and provides a limited search region around the estimated position in the outline image for detection of the target moving object, the details of which will be discussed later.
In brief, the device repeats a controlled cycle which, as shown in Fig. 5, includes the steps of: taking a real picture image (S1); generating an outline image therefrom (S2); calculating the estimated position of the moving object (S3); estimating the limited search region (S4); obtaining the position data of the moving object (S5); updating the template image (S6); and showing the enlarged view of the moving object (S7).
Now, details of several parts of the device will be described hereinafter. The outline image processor 40 is configured to obtain the contrast of the template image T after the template image is updated, and to provide the outline image defined by binary data when the contrast exceeds a predetermined reference, and otherwise the outline image defined by gray levels. In determining the kind of the outline image, the outline image processor 40 detects average pixel values, each being the mean of the pixel values assigned to the pixels in one of a plurality of predetermined divisions of the template image, and judges the contrast to be below the reference when any one of the divisions is detected to have an average pixel value below a threshold, or when the average pixel value detected for one of the divisions is lower than that detected for another of the divisions by more than a predetermined extent. When generating the outline image of binary data, the outline image processor 40 is configured to rely on a variable threshold for converting the real picture image into the binary-data image, and to obtain the average gray level of the template image so as to lower the threshold when the average gray level is lower than a predetermined limit, for successful comparison with the template image at the moving object locator 70. The outline image of binary data is also known as an edge image, which is obtained by means of the well-known Sobel filter or a corresponding technique.
The moving object extractor 50 will now be described with regard to its function of extracting the moving contour image of the moving object. For each contour image extracted at a time (T), the moving contour image is obtained with reference to two preceding contour images extracted at times (T-ΔT2) and (T-ΔT1), respectively, and two succeeding contour images extracted at times (T+ΔT1) and (T+ΔT2), respectively. The contour images extracted at times (T-ΔT2) and (T+ΔT2) are ANDed to give a first logical product image PT1, while the contour images extracted at times (T-ΔT1) and (T+ΔT1) are ANDed to give a second logical product image PT2. The first logical product image PT1 is inverted and then ANDed with the contour image extracted at time T to give a third logical product image PT3, which includes: the outline of the moving object appearing in the contour image at time T; background outlines hidden behind the moving object at time (T-ΔT2) and appearing at time (T); and background outlines appearing at time (T-ΔT2) and hidden behind the moving object at time (T+ΔT2). Similarly, the second logical product image PT2 is inverted and then ANDed with the contour image at time T to give a fourth logical product image (PT4), which includes: the outline of the moving object appearing in the contour image at time T; background outlines hidden behind the moving object at time (T-ΔT1) and appearing at time (T); and background outlines appearing at time (T-ΔT1) and hidden behind the moving object at time (T+ΔT1). Finally, the third and fourth logical product images are ANDed to extract the outline of the moving object. A moving object extractor with the above function is known in the art, for example, as disclosed in Japanese patent publication No. 2004-265252, and therefore needs no further detailed explanation. In this regard, the present invention can utilize variously configured moving object extractors of a like kind.
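The five-frame logical-product scheme described above can be sketched as follows. This is an illustrative NumPy rendering under the assumption that the five contour images are boolean arrays of equal size; it is not the patented implementation itself.

```python
import numpy as np

def extract_moving_outline(c_m2, c_m1, c_0, c_p1, c_p2):
    """Logical-product extraction of a moving-object outline, sketching the
    scheme described for extractor 50.  The arguments are binary contour
    images at times T-dT2, T-dT1, T, T+dT1 and T+dT2 (boolean arrays)."""
    pt1 = c_m2 & c_p2            # first logical product image PT1
    pt2 = c_m1 & c_p1            # second logical product image PT2
    pt3 = ~pt1 & c_0             # invert PT1, AND with the image at time T
    pt4 = ~pt2 & c_0             # invert PT2, AND with the image at time T
    return pt3 & pt4             # moving-object outline only
```

A static background edge, present in all five frames, survives into PT1/PT2 and is therefore removed by the inversion, while an edge present only at time T survives both PT3 and PT4.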
The updating of the template image T will now be explained. The template updater 62 is configured to read a predetermined number of previous moving object outline images from the moving object outline image memory 72 and to combine these images with the current moving object outline image determined at the moving object locator 70 to match the template image, thereby updating the previous template image T by replacing it with the image thus combined. As schematically shown in Fig. 7, the resulting template image T assigns higher pixel values to the outlines of specific human body parts, such as the head or shoulders, which are relatively stationary, that is, less liable to change shape during movement than other parts such as the arms or legs. As a consequence, the template image T comes to indicate the major portion of the moving object well, providing a solid basis for reliably determining the moving object through the comparison, in terms of correlation, between the template image and the moving contour image, i.e., the current moving object outline image, as will be discussed later. Moreover, the template image T thus updated can well compensate for any small omission of a part of the moving object in one moving object outline image by the corresponding part in one or more other moving object outline images. For example, a part of the belly hidden behind a rapidly waving hand in one moving object image can be supplemented by the corresponding part appearing in other moving object images, so that the template image is made as close to the target moving object as possible, which leads to accurate determination of the target moving object based on the comparison between the contour image and the template image.
Preferably, the update is made each time a new successive group of moving object outline images accumulates to a predetermined number. Also, the template updater 62 is configured to combine only those moving object outline images determined to be valid according to a predetermined criterion. One example of the criterion is based on a pixel index, which is the number of pixels included in each of the moving object outline images and having a pixel value greater than zero; this criterion is defined such that the current moving object outline image is determined to be valid when the difference between its pixel index and the pixel index of the previous moving object outline image is greater than a predetermined extent. Another criterion is based on a standard deviation of the pixel values of one of the current moving object outline image and the corresponding real picture image, and is defined such that the current moving object outline image is determined to be valid when the difference between this standard deviation and the standard deviation calculated for the previous moving object outline image is greater than a predetermined extent. Further, the criterion may be based on the number of pixels constituting the moving object outline calculated for each of the moving object outline images, and is defined such that the current moving object outline image is determined to be valid when the difference between said number of pixels for the current moving object outline image and said number of pixels for the previous moving object outline image is greater than a predetermined extent.
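The first of the validity criteria above, the pixel-index criterion, admits a very small sketch. The concrete `extent` value below is an arbitrary placeholder, not a value given in the text.

```python
import numpy as np

def pixel_index(outline):
    """The 'pixel index': number of pixels with a value greater than zero."""
    return int(np.count_nonzero(outline))

def is_valid(current, previous, extent=50):
    """Pixel-index validity criterion as described: the current outline image
    is taken as valid when its pixel index differs from that of the previous
    outline image by more than a predetermined extent (placeholder value)."""
    return abs(pixel_index(current) - pixel_index(previous)) > extent
```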
The template updater 62 may be configured to weight the current moving contour image relative to the combined group of previous moving contour images using the following weighting equation:
T(x,y)=K·Vn(x,y)+(1-K)·Vp(x,y)
where T(x, y) denotes the pixel value at each pixel of the template image, Vn(x, y) denotes the pixel value at each pixel of the current moving object outline image, Vp(x, y) denotes the pixel value at each pixel of the combined group of previous moving object outline images, and K is a weighting coefficient.
Thus, by suitably selecting the weighting coefficient K in relation to the combined group of previous moving object outline images, the template image T can be made to reflect the current moving object outline image more or less strongly.
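The weighting equation above translates directly into a per-pixel array operation. The value of K below is an arbitrary illustration; the text leaves its choice open.

```python
import numpy as np

def update_template(template_prev, outline_current, k=0.3):
    """Weighted template update T(x,y) = K*Vn(x,y) + (1-K)*Vp(x,y) from the
    text.  template_prev is Vp (the combined previous outline images),
    outline_current is Vn, and k is the weighting coefficient K."""
    return k * outline_current + (1.0 - k) * template_prev
```

A larger K makes the updated template track the current outline image more strongly; a smaller K favors the accumulated history.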
The moving object locator 70 comprises a matching device 71 configured to collect different local areas from each current contour image among the contour images, each local area having the same size as the template image, and to calculate, for each of the different local areas, a correlation value with respect to the template image T. The matching device 71 operates to scan the whole area of the contour image continuously by selecting the different local areas, that is, by shifting the local area one pixel at a time in the row or column direction, so as to determine the local area having the maximum correlation as the moving object outline image matching the template image T. As schematically shown in Fig. 6, when a binary moving object outline image MO is determined to match the current template image T, the template updater 62 responds by obtaining the pixel value of each pixel in the matched moving object outline image MO and adding that pixel value to each corresponding pixel in the previous moving object outline images, thereby providing an updated template image T which, as schematically shown in Fig. 7, has pixels whose values accumulate the corresponding gray-scale data.
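The one-pixel-step scan performed by matching device 71 can be sketched as an exhaustive search; the correlation function is passed in, since the text goes on to offer several alternative definitions. A sketch only, assuming 2-D NumPy arrays.

```python
import numpy as np

def best_match(contour, template, correlate):
    """Scan every template-sized local area of the contour image in one-pixel
    steps (as matching device 71 does) and return the top-left corner of the
    local area with the maximum correlation value."""
    th, tw = template.shape
    best, best_pos = -np.inf, (0, 0)
    for y in range(contour.shape[0] - th + 1):
        for x in range(contour.shape[1] - tw + 1):
            c = correlate(contour[y:y + th, x:x + tw], template)
            if c > best:
                best, best_pos = c, (y, x)
    return best_pos
```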
In the present invention, the correlation value is suitably selected from the definitions explained below. In one example, the correlation value is defined as the sum of the pixel values obtained from the template image T for the pixels corresponding to the pixels constituting the moving object outline in each of the local areas selected to constitute the contour image. In another example, the correlation value is defined as the sum of powers of the pixel values obtained from the template image T for those pixels; the sum of powers is preferably the sum of squares of the pixel values.
Further, when the contour image processor 40 provides a binary contour image (in which pixel value "1" is assigned to the pixels constituting the outline of the contour image, and pixel value "0" is assigned to the remaining pixels of the contour image), the correlation value may be weighted. In this case, the matching device 71 is configured to select from the template image T the pixels corresponding to the outline pixels in each of the local areas constituting the contour image, and to obtain, for each selected pixel, the number of surrounding pixels having a pixel value greater than "0", so as to weight the pixel value of each selected pixel according to the number of pixels thus obtained. For example, when all eight (8) peripheral pixels around a center pixel have pixel value "0", as shown in Fig. 8A, the center pixel is assigned a small weight "1". When one of the eight peripheral pixels has a pixel value greater than "0", as shown in Fig. 8B, the center pixel is assigned a larger weight "2". When more than one of the eight peripheral pixels has a pixel value greater than "0", as shown in Fig. 8C, the center pixel is assigned a still larger weight "4". The pixel value "a" at each pixel is multiplied by the weight thus determined, and the matching device 71 determines the correlation value as the sum of the weighted pixel values of the selected pixels in the template image T, thereby carrying out the consistency matching between the contour image and the template image T. Weight values other than "1", "2", and "4" above may be suitably selected.
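The neighbor-count weighting of Figs. 8A-8C can be sketched as follows, using the example weights 1, 2, and 4 from the text; other mappings could be substituted as noted there.

```python
import numpy as np

def neighbor_weight(img, y, x):
    """Weight for the pixel at (y, x) according to how many of its eight
    peripheral pixels are greater than 0 (Figs. 8A-8C): none -> 1,
    one -> 2, more than one -> 4."""
    ys, xs = max(y - 1, 0), max(x - 1, 0)
    patch = img[ys:y + 2, xs:x + 2]
    # count non-zero pixels in the 3x3 patch, excluding the center itself
    n = int(np.count_nonzero(patch)) - int(img[y, x] > 0)
    if n == 0:
        return 1
    return 2 if n == 1 else 4
```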
The correlation value may also be defined differently, as the ratio of a first count of specific pixels to a second count of specific pixels. The first count is the number of pixels, counted in each local area, satisfying the condition that both the pixel in the local area and the corresponding pixel in the template image have a pixel value of "1" or greater, while the second count is the number of pixels, counted in the template image, having a pixel value of "1" or greater. The correlation value thus defined is particularly advantageous for accurately comparing the contour image with the template image when the number of "0" pixels in the contour image and the template image is greater than the number of pixels with value "1" or greater, as exemplified in Fig. 9A for a local area PA of the contour image and Fig. 9B for the template image T (in which black squares indicate "0" pixels, and white squares indicate pixel values of "1" or greater). In the illustrated case, the correlation value is expressed as the ratio 11/14 (≈79%), the first count being "11" and the second count being "14". The correlation value is obtained for each local area PA in the contour image, so as to determine the local area, that is, the moving object outline image, showing the maximum correlation. With the correlation value thus defined, the accurate detection of the moving object is made relatively immune to the influence of the "0" pixels that do not constitute the moving object outline.
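The ratio-type correlation above reduces to two counts and one division; a minimal sketch, assuming the local area and template are integer arrays of the same shape:

```python
import numpy as np

def ratio_correlation(area, template):
    """Ratio correlation from the text: the number of positions where both
    the local area and the template have value >= 1 (first count), divided
    by the number of template pixels with value >= 1 (second count)."""
    both = np.count_nonzero((area >= 1) & (template >= 1))
    tpl = np.count_nonzero(template >= 1)
    return both / tpl
```

With the counts of Figs. 9A/9B (first count 11, second count 14) this yields 11/14 ≈ 0.79, matching the worked example in the text.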
Alternatively, the correlation value may be defined in consideration of the number of pixels having pixel value "0" in the local area PA and the template image T. In this case, the matching device 71 is configured to obtain:
1) a first count, taken for each of the local areas, of the pixels satisfying the condition that both the pixel in the local area and the corresponding pixel in the template image have a pixel value of "1" or greater;
2) a second count, taken for each of the local areas, of the pixels satisfying the condition that both the pixel in the local area and the corresponding pixel in the template image have pixel value "0"; and
3) a third count, taken in the template image, of the pixels having a pixel value of "1" or greater.
The matching device 71 defines, for each of the local areas, a correlation value which is the ratio of the first count plus the second count to the third count. Applying the correlation value thus defined to the example of Figs. 9A and 9B, the correlation value for the local area of Fig. 9A is 4.1 {= (11+47)/14}.
Moreover, the matching device 71 may define the correlation value in dependence upon the peripheral pixels around each specific pixel under consideration in the template image. The matching device 71 is configured to: select a group of outline pixels constituting the outline in each local area; obtain, for each of the outline pixels, the maximum pixel value in the set of pixels arranged around the specific pixel in the template image corresponding to that outline pixel; and define the correlation value as the sum of the maxima thus obtained for the local area. That is, as shown in Figs. 10A and 10B, each outline pixel (P3) among the outline pixels in the local area PA is evaluated by the pixel (Tmax) having the maximum value "6" among the peripheral pixels of the corresponding specific pixel (T3) in the template image T. In this way, each local area is assigned a correlation value which is the sum of the maxima thus obtained, each maximum being obtained for one outline pixel in the local area, so that the matching device 71 determines the local area having the maximum correlation as the moving object outline image.
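The peripheral-maximum correlation of Figs. 10A/10B can be sketched as below. A 3x3 neighborhood is assumed for "the set of pixels arranged around the specific pixel"; the text does not fix the neighborhood size.

```python
import numpy as np

def neighborhood_max_correlation(area, template):
    """For every outline pixel (value > 0) of the local area, take the
    maximum template value in the 3x3 neighborhood of the corresponding
    template pixel, and sum these maxima (Figs. 10A/10B)."""
    total = 0
    for y, x in zip(*np.nonzero(area)):
        ys, xs = max(y - 1, 0), max(x - 1, 0)
        total += int(template[ys:y + 2, xs:x + 2].max())
    return total
```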
Further, the matching device 71 may define the correlation value in consideration of histograms (Py, Px; Ty, Tx) obtained respectively for each local area PA and for the template image T, as shown in Figs. 11 and 12. For each local area PA, two histograms (Py, Px) are obtained, one along the Y-axis and the other along the X-axis. Similarly, two histograms (Ty, Tx) are obtained for the template image T, along the Y-axis and the X-axis respectively. The Y-axis histogram (Py) is the distribution of a first row index, which is the number of pixels arranged in each row of the local area PA of the contour image and having a pixel value greater than "0", while the X-axis histogram (Px) is the distribution of a first column index, which is the number of pixels arranged in each column of the local area PA of the contour image and having a pixel value greater than "0". The Y-axis histogram (Ty) is the distribution of a second row index, which is the number of pixels arranged in each row of the template image T and having a pixel value greater than "0", while the X-axis histogram (Tx) is the distribution of a second column index, which is the number of pixels arranged in each column of the template image T and having a pixel value greater than "0". Based on these histograms, the matching device 71 calculates: a row difference, which is the difference between the first row index and the second row index; a column difference, which is the difference between the first column index and the second column index; a total row value, which is the sum of the row differences obtained for the respective rows; and a total column value, which is the sum of the column differences obtained for the respective columns. The matching device 71 then defines, for each of the local areas, the correlation value as the reciprocal of the sum of the total row value and the total column value. Accordingly, the correlation value becomes larger as the total row value and the total column value become smaller, that is, as a particular local area comes closer to the template image. By means of the correlation value thus defined, the computation of pixel values can be reduced considerably when the whole area of the contour image is scanned by shifting the local area one pixel at a time along the row or column. For example, when the local area is shifted by one pixel along a row, only the new column not covered by the previous local area needs its first column sum computed, the first column sums for the remaining columns being available from the preceding step. The same applies to the case where the local area is shifted by one pixel along a column, in which case only the new row needs its first row sum computed.
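The histogram correlation can be sketched as below. Two assumptions are made beyond the text: the row/column differences are taken as absolute values (the text only says "difference"), and one is added to the denominator to guard the reciprocal when the histograms coincide exactly.

```python
import numpy as np

def histogram_correlation(area, template):
    """Reciprocal of the summed row/column histogram differences between a
    local area and the template (Figs. 11 and 12).  Absolute differences and
    the +1 guard for identical histograms are assumptions, not from the text."""
    py = np.count_nonzero(area, axis=1)       # first row index, per row
    px = np.count_nonzero(area, axis=0)       # first column index, per column
    ty = np.count_nonzero(template, axis=1)   # second row index
    tx = np.count_nonzero(template, axis=0)   # second column index
    total = np.abs(py - ty).sum() + np.abs(px - tx).sum()
    return 1.0 / (1.0 + total)
```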
In the present invention, based on the detected movement of the moving object, the scanning of the contour image is carried out within a limited search region, so as to increase the speed of detecting the moving object. For this purpose, a position estimator 76 cooperates with the moving object extractor 50 to provide the limited search region in the contour image, the moving object extractor 50 providing at least one coverage part which covers the moving object partially or at a reduced size. Fig. 13 illustrates an example in which the moving object extractor 50 provides four (4) coverage parts M1, M2, M3, and M4 in a contour image OL. The position estimator 76 is configured to: acquire a time series of position data as the moving object locator 70 provides the position data; and calculate an estimated position P<sub>E</sub> of the target moving object based on two or more successive items of the time-series position data. The position estimator 76 then sets a detecting zone Z of predetermined dimensions around the estimated position P<sub>E</sub>, and determines the limited search region LSR as the minimum area that includes the coverage parts M1, M2, and M3 overlapping the detecting zone Z and excludes the coverage part M4 not overlapping the detecting zone Z. After the limited search region LSR has been determined, the position estimator 76 instructs the moving object locator 70 to select local areas only within the limited search region LSR.
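The text leaves the prediction formula of position estimator 76 unspecified beyond "two or more successive items of position data"; a minimal sketch under the assumption of linear extrapolation from the two most recent positions:

```python
def estimate_position(p_prev, p_curr):
    """Linear extrapolation of the next position P_E from the two most recent
    position data items; the exact prediction formula is an assumption."""
    (x1, y1), (x2, y2) = p_prev, p_curr
    # continue the last displacement for one more step
    return (2 * x2 - x1, 2 * y2 - y1)
```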
In this case, the detecting zone Z may have dimensions that vary in proportion to the speed at which the moving object moves from the previously located point (1) to point (2). A non-square detecting zone Z is also available, in which the x-axis dimension and the y-axis dimension vary in proportion to the speed of the moving object to different degrees.
Alternatively, the detecting zone Z may have dimensions that are a function of the size of the template image. For example, the detecting zone Z is defined to have dimensions that are one or more times the size of the template image. The multiple may further be selected to vary in proportion to the detected speed of the moving object, and may be different for the x-axis and the y-axis.
Fig. 14 illustrates another scheme in which the limited search region LSR is further restricted to FLSR by means of a filtering zone FZ, which is formed around the estimated position P<sub>E</sub> and has dimensions that are a function of the speed of the moving object. The limited search region LSR is thus further restricted to the region FLSR it shares with the filtering zone FZ.
Alternatively, as shown in Fig. 15, the limited search region LSR may be restricted to FLSR by means of a template filtering zone TFZ, which is formed around the estimated position P<sub>E</sub> and has dimensions that are a function of the size of the template image.
In this regard, it is noted that the filtering zone FZ or the template filtering zone TFZ may be used alone as the limited search region.
Further, in consideration of histograms (Hy, Hx) of the pixel values along the y-axis and the x-axis, the limited search region LSR can be restricted still further to XLSR, as shown in Fig. 16. The histogram (Hy) is the y-axis distribution of a row index, which is the number of pixels arranged along each row of the limited search region LSR and having a pixel value of "1" or greater. The limited search region LSR is obtained according to the scheme of Fig. 13, or even as the restricted search region FLSR according to the scheme described with reference to Fig. 14 or Fig. 15. Likewise, the histogram (Hx) is the distribution of a column index, which is the number of pixels arranged along each column of the limited search region LSR and having a pixel value of "1" or greater. The position estimator 76 analyzes the histograms (Hy, Hx) by comparing them with a predetermined row threshold TH<sub>R</sub> and a predetermined column threshold TH<sub>C</sub>, respectively, so as to select a continuous row group G<sub>Y</sub> in which every row has a row index greater than the row threshold TH<sub>R</sub>, and a continuous column group G<sub>X</sub> in which every column has a column index greater than the column threshold TH<sub>C</sub>. The position estimator 76 then restricts the limited search region LSR to the region XLSR bounded by the selected groups G<sub>Y</sub> and G<sub>X</sub>, eliminating any possible noise so as to detect the target moving object accurately.
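The histogram-threshold restriction of Fig. 16 can be sketched as follows. For brevity this sketch bounds all rows/columns exceeding the thresholds in one slice; the selection among multiple separated groups (Figs. 17 and 18) is omitted.

```python
import numpy as np

def restrict_region(region, th_r, th_c):
    """Restrict a binary limited search region to the rows and columns whose
    pixel counts exceed the row threshold TH_R and column threshold TH_C
    (Fig. 16).  Returns (row_slice, col_slice) bounding the kept area."""
    hy = np.count_nonzero(region, axis=1)   # row index per row (Hy)
    hx = np.count_nonzero(region, axis=0)   # column index per column (Hx)
    rows = np.nonzero(hy > th_r)[0]
    cols = np.nonzero(hx > th_c)[0]
    return (slice(int(rows.min()), int(rows.max()) + 1),
            slice(int(cols.min()), int(cols.max()) + 1))
```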
If two or more continuous row groups are selected because they have row indices greater than the row threshold TH<sub>R</sub>, or if two or more continuous column groups are selected because they have column indices greater than the column threshold TH<sub>C</sub>, as shown in Figs. 17 and 18, the position estimator validates only the group set (Gy2, Gx2) closer to the estimated position P<sub>E</sub>, and accordingly restricts the limited search region to the region XLSR bounded by the validated groups.
Further, in the case where there are two or more continuous row or column groups within the limited search region LSR, in order to reduce the amount of computation when the limited search region LSR is further restricted to XLSR, the position estimator 76 may be configured first to obtain one of the row index and the column index, and, based on the analysis of that one index, to cancel the unnecessary computation otherwise required to obtain the other. For ease of understanding, the scheme of calculating the row index first, before the column index, is explained with reference to Fig. 17. After the row index has been obtained for each row within the bounds of the limited search region LSR, the position estimator 76 selects two continuous row groups Gy1 and Gy2 in which every row has a row index greater than the row threshold TH<sub>R</sub>, and validates only the group Gy2 which is closer to the estimated position P<sub>E</sub> than the other group Gy1. Subsequently, the position estimator 76 obtains the column index only within the range bounded by the validated group Gy2, selects the continuous column group Gx2 in which every column has a column index greater than the column threshold TH<sub>C</sub>, and further restricts the limited search region to the region XLSR bounded by the selected continuous column group Gx2 and the validated continuous row group Gy2.
Alternatively, as shown in Fig. 18, the limited search region LSR may first be analyzed with regard to the column index, so as to validate one of the continuous column groups Gx1 and Gx2. With the continuous column group Gx2 validated because it is closer to the estimated position P<sub>E</sub> than the other group Gx1, computation proceeds so as to obtain the row index only within the range bounded by the validated continuous column group Gx2, and to select the continuous row group Gy2 in which every row has a row index greater than the row threshold TH<sub>R</sub>. The position estimator 76 then further restricts the limited search region LSR to the region XLSR bounded by the continuous row group Gy2 and the continuous column group Gx2 thus selected.
In the above schemes explained with reference to Figs. 17 and 18, the terms "continuous rows" and "continuous columns" are not interpreted strictly in the present invention, but are defined as a series of rows or columns in which rows or columns having a row index or column index below the threshold do not continue beyond a predetermined number. A brief insertion of rows or columns with sub-threshold indices is thus allowed, so as to eliminate possible noise or errors and achieve accurate detection of the moving object.
Although the above description is merely an exemplary disclosure describing various features for easy understanding of the basic concept of the present invention, it should be noted that any combination of the features described herein also falls within the scope of the present invention.

Claims (27)

1. A target moving object tracking device, comprising:
a picture image memory (20) configured to store a time series of real picture images taken by a video camera (10) of an observation field covering a possible target moving object;
a display (30) configured to display a selectable one or more of said real picture images at a desired magnification;
a contour image processor (40) configured to provide contour images respectively from said real picture images;
a template memory (60) configured to store a template image for identifying said target moving object;
a moving object locator (70) configured to compare each of said contour images with said template image so as to detect, in each contour image, a local area matching said template image, said moving object locator obtaining position data of said target moving object within said observation field based on the local area detected to match said template image; and
an enlarged picture generator (80) configured to extract, based on said position data, an enlarged image from a portion of said real picture image corresponding to the local area of said contour image detected to match said template image, and to display the enlarged picture image on said display;
wherein said moving object locator (70) is configured to extract a moving object outline image from each of said contour images corresponding to the local area detected to match said template image, and
a template updater (62) is provided for updating said template image by replacing it with a combination of a current one of said moving object outline images and one or more previous ones of said moving object outline images.
2. The target moving object tracking device as set forth in claim 1, wherein
said contour image processor (40) is configured to provide said contour images defined by binary data.
3. The target moving object tracking device as set forth in claim 1, wherein
said contour image processor (40) is configured to provide said contour images defined by discrete gray-level data.
4. The target moving object tracking device as set forth in claim 1, wherein
said contour image processor (40) is configured to obtain a contrast of said template image, to provide said contour images defined by binary data when said contrast exceeds a predetermined reference, and to provide said contour images defined by gray-scale data when said contrast is below said reference.
5. The target moving object tracking device as set forth in claim 4, wherein
said contour image processor (40) is configured to detect an average pixel value, which is a mean of the pixel values of the pixels in each of a plurality of partitions of said template image, and to judge that said contrast is below said reference when any one of said partitions is detected to have said average pixel value below a threshold, or when said average pixel value detected for any one of said partitions is lower than said average pixel value detected for another of said partitions by more than a predetermined extent.
6. The target moving object tracking device as set forth in claim 2, wherein
said contour image processor (40) is configured to provide a variable threshold for converting said real picture images into the contour images of said binary data, said contour image processor (40) being configured to obtain an average gray-level value of said template image and to lower said threshold when said average gray-level value is below a predetermined limit.
7. The target moving object tracking device as set forth in claim 1, further comprising:
a moving object outline image memory (72) for storing a time series of said moving object outline images;
said template updater (62) being configured to read a predetermined number of previous moving object outline images from said moving object outline image memory, to combine these outline images with a current moving object outline image, and to update the previous template image by replacing it with said combination.
8. The target moving object tracking device as set forth in claim 7, wherein
said template updater (62) is configured to update said template image each time a new successive group of said moving object outline images accumulates to a predetermined number.
9. The target moving object tracking device as set forth in claim 7 or 8, wherein
said template updater (62) is configured to combine only those of said moving object outline images determined to be valid according to a predetermined criterion.
10. The target moving object tracking device as set forth in claim 9, wherein
said template updater (62) is configured to: calculate a pixel index, which is the number of pixels included in each of said moving object outline images and having a pixel value greater than zero; and provide said criterion, which determines the current moving object outline image to be valid when a difference between the pixel index of the current moving object outline image and the pixel index of the previous moving object outline image is greater than a predetermined extent.
11. The target moving object tracking device as set forth in claim 9, wherein
said template updater (62) is configured to calculate a standard deviation of the pixel values of one of the current moving object outline image and the corresponding real picture image, and to provide said criterion, which determines the current moving object outline image to be valid when a difference between said standard deviation and the standard deviation calculated for the previous moving object outline image is greater than a predetermined extent.
12. The target moving object tracking device as set forth in claim 9, wherein
said template updater (62) is configured to calculate the number of pixels constituting the moving object outline in each of said moving object outline images, and to provide said criterion, which determines the current moving object outline image to be valid when a difference between said number of pixels for the current moving object outline image and said number of pixels for the previous moving object outline image is greater than a predetermined extent.
13. The target moving object tracking device as claimed in claim 1, wherein,
said moving object locating means (70) comprises matching means (71), said matching means (71) being configured to: collect different local areas from said contour image, each area having the same size as said template image; calculate a correlation value for each of said different areas; and determine the local area having the maximum correlation value to be said moving object outline image matching said template image, and said template updating means is configured to obtain the pixel value of each pixel in said moving object outline image so as to add said pixel value to the corresponding one of the pixels in the previous moving object outline image.
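The collect-and-compare loop of claim 13 amounts to an exhaustive sliding-window search. The sketch below is a simplified illustration with plain Python lists; the `sum_of_products` correlation is a placeholder for demonstration, not one of the specific correlation definitions given in claims 14 through 20.

```python
def local_areas(contour, th, tw):
    """Yield every local area of the contour image having the template's
    size (th x tw), together with its top-left position."""
    H, W = len(contour), len(contour[0])
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            yield (y, x), [row[x:x + tw] for row in contour[y:y + th]]

def sum_of_products(area, template):
    """Placeholder correlation: element-wise products summed over the area."""
    return sum(a * t for ar, tr in zip(area, template) for a, t in zip(ar, tr))

def best_match(contour, template, correlation=sum_of_products):
    """Return the top-left position of the local area whose correlation
    with the template is maximal (the claim-13 matching step)."""
    th, tw = len(template), len(template[0])
    return max(local_areas(contour, th, tw),
               key=lambda pa: correlation(pa[1], template))[0]
```

Any of the correlation measures of the dependent claims can be passed in through the `correlation` parameter.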
14. The target moving object tracking device as claimed in claim 13, wherein,
said matching means (71) is configured to define said correlation value as the sum of the pixel values obtained from said template image at the pixels corresponding to the pixels constituting the contour in each of said local areas selected from said contour image.
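The claim-14 correlation reads, in effect: wherever the local area has a contour pixel, accumulate the template's pixel value at the same position. A minimal sketch under that reading (names illustrative):

```python
def correlation_sum(area, template):
    """Claim-14 style correlation: sum of the template's pixel values at the
    positions where the local area has a contour (nonzero) pixel."""
    return sum(template[i][j]
               for i, row in enumerate(area)
               for j, v in enumerate(row) if v > 0)
```

Claim 15 differs only in summing powers of those pixel values instead of the values themselves.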
15. The target moving object tracking device as claimed in claim 13, wherein,
said matching means (71) is configured to define said correlation value as the sum of the powers of the pixel values obtained from said template image at the pixels corresponding to the pixels constituting the contour in each of said local areas selected from said contour image.
16. The target moving object tracking device as claimed in claim 13, wherein,
said contour image processor (40) is configured to provide the contour image as binary data in which a pixel value of "1" is assigned to the pixels constituting the contour of the contour image while a pixel value of "0" is assigned to the remaining pixels of the contour image; said matching means (71) is configured to select from said template image the pixels corresponding to the pixels constituting the contour in each of said local areas of said contour image, and to obtain, around each selected pixel, the number of pixels having a pixel value greater than "0", so as to weight the pixel value of each selected pixel according to the number of pixels thus obtained; and said matching means (71) is configured to define said correlation value as the sum of the thus-weighted pixel values of the selected pixels in said template image.
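A possible realization of the claim-16 weighting, assuming an 8-neighbourhood around each selected template pixel (the claim says only "around each selected pixel", so the neighbourhood shape is an assumption here):

```python
def correlation_weighted(area, template):
    """Claim-16 sketch: for each template pixel at a contour position of the
    local area, weight its value by the number of surrounding template
    pixels (8-neighbourhood) that are greater than 0, then sum."""
    H, W = len(template), len(template[0])
    total = 0
    for i in range(H):
        for j in range(W):
            if area[i][j] > 0:  # contour pixel in the local area
                neighbours = sum(1
                                 for di in (-1, 0, 1) for dj in (-1, 0, 1)
                                 if (di or dj)
                                 and 0 <= i + di < H and 0 <= j + dj < W
                                 and template[i + di][j + dj] > 0)
                total += template[i][j] * neighbours
    return total
```

The weighting favours contour pixels that fall on thick, well-supported parts of the template rather than on isolated template pixels.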
17. The target moving object tracking device as claimed in claim 13, wherein,
said contour image processor (40) is configured to provide the contour image as binary data in which a pixel value of "1" is assigned to the pixels constituting the contour of the contour image while a pixel value of "0" is assigned to the remaining pixels of the contour image, and said matching means (71) is configured to obtain: a first number of pixels in each of said local areas satisfying the condition that the pixel in said local area and the corresponding pixel in said template image both have a pixel value of "1" or greater; and a second number of pixels in said template image having a pixel value of "1" or greater,
said matching means (71) defining said correlation value for each of said local areas as the ratio of said first number to said second number.
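The claim-17 ratio is simply "matched contour pixels over total template contour pixels". A minimal sketch under that reading (names illustrative):

```python
def correlation_ratio(area, template):
    """Claim-17 sketch: ratio of (pixels that are >= 1 in both the local area
    and the template) to (pixels that are >= 1 in the template)."""
    first = sum(1 for i, row in enumerate(area)
                for j, v in enumerate(row)
                if v >= 1 and template[i][j] >= 1)
    second = sum(1 for row in template for v in row if v >= 1)
    return first / second if second else 0.0
```

Claim 18 extends this by also counting positions where both images are "0" and dividing the combined count by the template's nonzero-pixel count.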
18. The target moving object tracking device as claimed in claim 13, wherein,
said contour image processor (40) is configured to provide the contour image as binary data in which a pixel value of "1" is assigned to the pixels constituting the contour of the contour image while a pixel value of "0" is assigned to the remaining pixels of the contour image, and said matching means (71) is configured to obtain: a first number of pixels in each of said local areas satisfying the condition that the pixel in said local area and the corresponding pixel in said template image both have a pixel value of "1" or greater; a second number of pixels in each of said local areas satisfying the condition that the pixel in said local area and the corresponding pixel in said template image both have a pixel value of "0"; and a third number of pixels in said template image having a pixel value of "1" or greater,
said matching means (71) defining said correlation value for each of said local areas as the ratio of said first number plus said second number to said third number.
19. The target moving object tracking device as claimed in claim 13, wherein,
said contour image processor (40) is configured to provide the contour image as binary data in which a pixel value of "1" is assigned to the pixels constituting the contour of the contour image while a pixel value of "0" is assigned to the remaining pixels of the contour image, and said matching means (71) is configured to: obtain, from a set of pixels arranged around each selected pixel in said template image, the maximum of the pixel values, the selected pixels corresponding to the pixels constituting the contour in each of said local areas of the contour image; and define said correlation value as the sum of the maxima thus obtained for each local area.
20. The target moving object tracking device as claimed in claim 13, wherein,
said contour image processor (40) is configured to provide the contour image as binary data in which a pixel value of "1" is assigned to the pixels constituting the contour of the contour image while a pixel value of "0" is assigned to the remaining pixels of the contour image, and said matching means (71) is configured to obtain:
a first row index, which is, for each row of each said local area, the number of pixels arranged therein that have a pixel value greater than "0";
a first column index, which is, for each column of each said local area of the contour image, the number of pixels arranged therein that have a pixel value greater than "0";
a second row index, which is, for each row of said template image, the number of pixels arranged therein that have a pixel value greater than "0";
a second column index, which is, for each column of said template image, the number of pixels arranged therein that have a pixel value greater than "0";
a row difference, which is the difference between said first row index and said second row index for each row;
a column difference, which is the difference between said first column index and said second column index for each column;
a total row value, which is the sum of said row differences obtained for the respective rows; and
a total column value, which is the sum of said column differences obtained for the respective columns,
said matching means (71) defining said correlation value for each of said local areas as the inverse of the sum of said total row value and said total column value.
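The claim-20 measure compares only the row-wise and column-wise nonzero-pixel counts of the two images, which is far cheaper than a per-pixel comparison. A sketch under the assumption that "difference" means absolute difference (the claim does not say):

```python
def nonzero_counts(rows):
    """Pixels greater than 0 in each row (or each column, if given a transpose)."""
    return [sum(1 for v in row if v > 0) for row in rows]

def correlation_profile(area, template):
    """Claim-20 sketch: the correlation value is the inverse of the summed
    row-count and column-count differences, so a perfect profile match
    yields the largest value."""
    row_total = sum(abs(a - b) for a, b in zip(nonzero_counts(area),
                                               nonzero_counts(template)))
    col_total = sum(abs(a - b) for a, b in zip(nonzero_counts(zip(*area)),
                                               nonzero_counts(zip(*template))))
    total = row_total + col_total
    return 1.0 / total if total else float('inf')
```

Identical row and column profiles give a zero denominator, handled here by returning infinity as the best possible score.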
21. The target moving object tracking device as claimed in claim 1, further comprising:
position estimating means (76) for estimating a limited search region within said contour image to be used for the detection of said target moving object; and
moving object extracting means (50) configured to detect at least one possible moving object based on a time-correlated difference between two or more of said successive contour images, and to provide at least one cover portion of reduced size covering said moving object,
said position estimating means (76) being configured to: obtain time-series data of the position data stored in a position data memory (74) each time said moving object locating means provides said position data; calculate an estimated position (P_E) of the target moving object based on two or more successive items of the time-series position data; set a detecting zone (Z) of predetermined size around said estimated position; and provide said limited search region (LSR) as a minimum area including the at least one cover portion overlapping said detecting zone,
said moving object locating means being configured to select said local areas only within said limited search region.
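One plausible realization of claim 21's pipeline, using linear extrapolation from the last two position samples (the claim only requires "two or more successive" samples; the extrapolation formula, square zone, and box representation are assumptions of this sketch):

```python
def estimate_position(positions):
    """Extrapolate the next position P_E from the two most recent (x, y) samples."""
    (x1, y1), (x2, y2) = positions[-2], positions[-1]
    return (2 * x2 - x1, 2 * y2 - y1)

def detecting_zone(center, half_size):
    """Axis-aligned square detecting zone (x1, y1, x2, y2) around the estimate."""
    cx, cy = center
    return (cx - half_size, cy - half_size, cx + half_size, cy + half_size)

def limited_search_region(cover_boxes, zone):
    """Minimum area enclosing every cover portion that overlaps the zone;
    boxes are (x1, y1, x2, y2) with inclusive edges."""
    def overlaps(a, b):
        return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]
    hits = [b for b in cover_boxes if overlaps(b, zone)]
    if not hits:
        return None
    return (min(b[0] for b in hits), min(b[1] for b in hits),
            max(b[2] for b in hits), max(b[3] for b in hits))
```

Claim 22's refinement would scale `half_size` with the estimated speed derived from the same position history; claim 23 would derive it from the template size instead.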
22. The target moving object tracking device as claimed in claim 21, wherein,
said position estimating means (76) is configured to: calculate an estimated moving speed of the moving object based on two or more successive items of the time-series position data; and provide said detecting zone (Z) with a size proportional to said estimated speed of the moving object.
23. The target moving object tracking device as claimed in claim 21, wherein,
said position estimating means (76) is configured to determine the size of said detecting zone as a function of the size of said template image.
24. The target moving object tracking device as claimed in claim 22 or 23, wherein,
said position estimating means (76) is configured to:
obtain a row index, which is the number of pixels arranged along each row of said limited search region that have a pixel value of "1" or greater;
select a continuous row group, every row of which has a row index greater than a predetermined row threshold;
obtain a column index, which is the number of pixels arranged along each column of said limited search region that have a pixel value of "1" or greater;
select a continuous column group, every column of which has a column index greater than a predetermined column threshold; and
further limit said limited search region to the zone bounded by the selected continuous row group and the selected continuous column group.
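The row/column projection of claim 24 can be sketched as follows. This simplified version merges all qualifying rows (and columns) into one bounding band rather than tracking separate continuous groups and picking the one nearest the estimated position, as claims 24 through 27 do; the return format is an assumption.

```python
def narrow_region(region, row_threshold, col_threshold):
    """Claim-24 sketch: project the binary region onto its rows and columns,
    keep the rows/columns whose nonzero-pixel count exceeds the thresholds,
    and return the bounding (row_start, row_end, col_start, col_end)."""
    row_idx = [sum(1 for v in row if v > 0) for row in region]
    col_idx = [sum(1 for row in region if row[j] > 0)
               for j in range(len(region[0]))]
    rows = [i for i, n in enumerate(row_idx) if n > row_threshold]
    cols = [j for j, n in enumerate(col_idx) if n > col_threshold]
    if not rows or not cols:
        return None  # nothing in the region cleared the thresholds
    return (rows[0], rows[-1], cols[0], cols[-1])
```

Claims 26 and 27 interleave the two projections, computing the column index only inside the already-validated row band (or vice versa), which further reduces the work per frame.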
25. The target moving object tracking device as claimed in claim 24, wherein,
said position estimating means (76) is configured to: when two or more continuous row groups are selected, validate only the one of said groups that is closer to said estimated position of the target moving object; and when two or more continuous column groups are selected, validate only the one of said groups that is closer to said estimated position of the target moving object.
26. The target moving object tracking device as claimed in claim 22 or 23, wherein,
said position estimating means (76) is configured to:
obtain a row index, which is the number of pixels arranged along each row of said limited search region that have a pixel value of "1" or greater;
select at least one continuous row group, every row of which has a row index greater than a predetermined row threshold;
when two or more continuous row groups are selected, validate only the continuous row group that is closer to said estimated position of the target moving object;
obtain a column index, which is the number of pixels having a pixel value of "1" or greater arranged along each column of said limited search region only within the extent bounded by said valid continuous row group;
select a continuous column group, every column of which has a column index greater than a predetermined column threshold; and
further limit said limited search region to the zone bounded by the selected continuous column group and said valid continuous row group.
27. The target moving object tracking device as claimed in claim 22 or 23, wherein,
said position estimating means (76) is configured to:
obtain a column index, which is the number of pixels arranged along each column of said limited search region that have a pixel value of "1" or greater;
select at least one continuous column group, every column of which has a column index greater than a predetermined column threshold;
when two or more continuous column groups are selected, validate only the continuous column group that is closer to said estimated position of the target moving object;
obtain a row index, which is the number of pixels having a pixel value of "1" or greater arranged along each row of said limited search region only within the extent bounded by said valid continuous column group;
select a continuous row group, every row of which has a row index greater than a predetermined row threshold; and
further limit said limited search region to the zone bounded by the selected continuous row group and said valid continuous column group.
CN2007101425988A 2006-10-27 2007-08-29 Target moving object tracking device Expired - Fee Related CN101170683B (en)

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
JP2006293078A JP4915655B2 (en) 2006-10-27 2006-10-27 Automatic tracking device
JP2006-293078 2006-10-27
JP2006293079 2006-10-27
JP2006293078 2006-10-27
JP2006293079A JP4725490B2 (en) 2006-10-27 2006-10-27 Automatic tracking method
JP2006-293079 2006-10-27
JP2007110915A JP4867771B2 (en) 2007-04-19 2007-04-19 Template matching device
JP2007-110915 2007-04-19
JP2007110915 2007-04-19

Publications (2)

Publication Number Publication Date
CN101170683A true CN101170683A (en) 2008-04-30
CN101170683B CN101170683B (en) 2010-09-08

Family

ID=39391118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2007101425988A Expired - Fee Related CN101170683B (en) 2006-10-27 2007-08-29 Target moving object tracking device

Country Status (2)

Country Link
JP (1) JP4915655B2 (en)
CN (1) CN101170683B (en)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101646066B (en) * 2008-08-08 2011-05-04 鸿富锦精密工业(深圳)有限公司 Video monitoring system and method
CN102158689A (en) * 2011-05-17 2011-08-17 无锡中星微电子有限公司 Video monitoring system and method
CN102223473A (en) * 2010-04-16 2011-10-19 鸿富锦精密工业(深圳)有限公司 Camera device and method for dynamic tracking of specific object by using camera device
CN101593510B (en) * 2008-05-30 2011-12-21 索尼株式会社 Image processing device,an image processing method and an image processing program
CN102414715A (en) * 2009-04-23 2012-04-11 丰田自动车株式会社 Object detection device
CN102445681A (en) * 2011-09-30 2012-05-09 深圳市九洲电器有限公司 Indoor positioning method and indoor positioning system of movable device
CN101739692B (en) * 2009-12-29 2012-05-30 天津市亚安科技股份有限公司 Fast correlation tracking method for real-time video target
CN102663777A (en) * 2012-04-26 2012-09-12 安科智慧城市技术(中国)有限公司 Target tracking method and system based on multi-view video
CN101820521B (en) * 2009-01-14 2013-03-27 索尼公司 Information processing apparatus, information processing method and program
CN103106667A (en) * 2013-02-01 2013-05-15 山东科技大学 Motion target tracing method towards shielding and scene change
CN103179335A (en) * 2011-09-01 2013-06-26 瑞萨电子株式会社 Object tracking device
CN101751549B (en) * 2008-12-03 2014-03-26 财团法人工业技术研究院 Method for tracking moving object
CN104065932A (en) * 2014-06-30 2014-09-24 东南大学 Disjoint-view object matching method based on corrected weighted bipartite graph
CN104065878A (en) * 2014-06-03 2014-09-24 小米科技有限责任公司 Method, device and terminal for controlling shooting
CN104508679A (en) * 2012-06-12 2015-04-08 实耐宝公司 Tool training for automated tool control systems
CN104769523A (en) * 2012-11-06 2015-07-08 惠普发展公司,有限责任合伙企业 Interactive display
CN104778677A (en) * 2014-01-13 2015-07-15 联想(北京)有限公司 Positioning method, device and equipment
CN105075248A (en) * 2013-03-29 2015-11-18 日本电气株式会社 Target object identifying device, target object identifying method and target object identifying program
CN105979209A (en) * 2016-05-31 2016-09-28 浙江大华技术股份有限公司 Monitoring video display method and monitoring video display device
CN106339725A (en) * 2016-08-31 2017-01-18 天津大学 Pedestrian detection method based on scale constant characteristic and position experience
US9584725B2 (en) 2014-06-03 2017-02-28 Xiaomi Inc. Method and terminal device for shooting control
CN106683308A (en) * 2017-01-06 2017-05-17 天津大学 Event recognition photoelectric information fusion perception device and method
CN107562203A (en) * 2017-09-14 2018-01-09 北京奇艺世纪科技有限公司 A kind of input method and device
CN109276819A (en) * 2017-07-20 2019-01-29 株式会社东芝 Information processing unit, information processing system and storage medium
CN109478328A (en) * 2016-09-30 2019-03-15 富士通株式会社 Method for tracking target, device and image processing equipment
CN110324690A (en) * 2018-03-30 2019-10-11 青岛海信电器股份有限公司 A kind of method of displaying target object, display device and display equipment
CN110503662A (en) * 2019-07-09 2019-11-26 科大讯飞(苏州)科技有限公司 Tracking and Related product
CN111052753A (en) * 2017-08-30 2020-04-21 Vid拓展公司 Tracking video scaling
WO2021093534A1 (en) * 2019-11-12 2021-05-20 Oppo广东移动通信有限公司 Subject detection method and apparatus, electronic device, and computer-readable storage medium

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010034885A (en) * 2008-07-29 2010-02-12 Fujitsu Ltd Imaging apparatus
JP5370056B2 (en) * 2008-11-04 2013-12-18 オムロン株式会社 Image processing device
JP5279017B2 (en) * 2008-12-02 2013-09-04 国立大学法人 東京大学 Imaging apparatus and imaging method
JP4798259B2 (en) 2009-06-08 2011-10-19 株式会社ニコン Subject tracking device and camera
CN101872480B (en) * 2010-06-09 2012-01-11 河南理工大学 Automatic detection method for position and dimension of speckled characteristic in digital image
JP5740934B2 (en) * 2010-11-25 2015-07-01 カシオ計算機株式会社 Subject detection apparatus, subject detection method, and program
JP6464706B2 (en) * 2014-12-05 2019-02-06 富士通株式会社 Object detection method, object detection program, and object detection apparatus
RU2737193C1 (en) * 2017-06-13 2020-11-25 АйЭйчАй КОРПОРЕЙШН Method of monitoring moving body

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3254464B2 (en) * 1992-07-13 2002-02-04 株式会社日立製作所 Vehicle recognition device and moving object recognition method
JP3481430B2 (en) * 1997-09-11 2003-12-22 富士通株式会社 Mobile tracking device
JPH11252587A (en) * 1998-03-03 1999-09-17 Matsushita Electric Ind Co Ltd Object tracking device
US6426718B1 (en) * 2000-03-14 2002-07-30 The Boeing Company Subaperture processing for clutter reduction in synthetic aperture radar images of ground moving targets
JP3437555B2 (en) * 2001-03-06 2003-08-18 キヤノン株式会社 Specific point detection method and device
JP4132725B2 (en) * 2001-05-23 2008-08-13 株式会社リコー Image binarization apparatus, image binarization method, and image binarization program
JP3857558B2 (en) * 2001-10-02 2006-12-13 株式会社日立国際電気 Object tracking method and apparatus
CN1274146C (en) * 2002-10-10 2006-09-06 北京中星微电子有限公司 Sports image detecting method
FR2861204B1 (en) * 2003-10-21 2009-01-16 Egregore Ii METHOD FOR AUTOMATICALLY CONTROLLING THE DIRECTION OF THE OPTICAL AXIS AND THE ANGULAR FIELD OF AN ELECTRONIC CAMERA, IN PARTICULAR FOR AUTOMATIC TRACKING VIDEOSURVEILLANCE
JP4586578B2 (en) * 2005-03-03 2010-11-24 株式会社ニコン Digital camera and program
JP2006258944A (en) * 2005-03-15 2006-09-28 Fujinon Corp Autofocus system
CN100437145C (en) * 2005-12-19 2008-11-26 北京威亚视讯科技有限公司 Position posture tracing system

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101593510B (en) * 2008-05-30 2011-12-21 索尼株式会社 Image processing device,an image processing method and an image processing program
CN101646066B (en) * 2008-08-08 2011-05-04 鸿富锦精密工业(深圳)有限公司 Video monitoring system and method
CN101751549B (en) * 2008-12-03 2014-03-26 财团法人工业技术研究院 Method for tracking moving object
CN101820521B (en) * 2009-01-14 2013-03-27 索尼公司 Information processing apparatus, information processing method and program
CN102414715A (en) * 2009-04-23 2012-04-11 丰田自动车株式会社 Object detection device
CN102414715B (en) * 2009-04-23 2014-03-12 丰田自动车株式会社 Object detection device
CN101739692B (en) * 2009-12-29 2012-05-30 天津市亚安科技股份有限公司 Fast correlation tracking method for real-time video target
CN102223473A (en) * 2010-04-16 2011-10-19 鸿富锦精密工业(深圳)有限公司 Camera device and method for dynamic tracking of specific object by using camera device
CN102158689A (en) * 2011-05-17 2011-08-17 无锡中星微电子有限公司 Video monitoring system and method
CN103179335A (en) * 2011-09-01 2013-06-26 瑞萨电子株式会社 Object tracking device
CN102445681A (en) * 2011-09-30 2012-05-09 深圳市九洲电器有限公司 Indoor positioning method and indoor positioning system of movable device
CN102445681B (en) * 2011-09-30 2013-07-03 深圳市九洲电器有限公司 Indoor positioning method and indoor positioning system of movable device
CN102663777A (en) * 2012-04-26 2012-09-12 安科智慧城市技术(中国)有限公司 Target tracking method and system based on multi-view video
US11741427B2 (en) 2012-06-12 2023-08-29 Snap-On Incorporated Monitoring removal and replacement of tools within an inventory control system
CN110238807A (en) * 2012-06-12 2019-09-17 实耐宝公司 Tool for automation tools control system is moulded
CN104508679A (en) * 2012-06-12 2015-04-08 实耐宝公司 Tool training for automated tool control systems
CN104769523A (en) * 2012-11-06 2015-07-08 惠普发展公司,有限责任合伙企业 Interactive display
CN104769523B (en) * 2012-11-06 2018-07-13 惠普发展公司,有限责任合伙企业 Interactive display
CN103106667A (en) * 2013-02-01 2013-05-15 山东科技大学 Motion target tracing method towards shielding and scene change
CN105075248A (en) * 2013-03-29 2015-11-18 日本电气株式会社 Target object identifying device, target object identifying method and target object identifying program
CN104778677A (en) * 2014-01-13 2015-07-15 联想(北京)有限公司 Positioning method, device and equipment
US9584725B2 (en) 2014-06-03 2017-02-28 Xiaomi Inc. Method and terminal device for shooting control
CN104065878B (en) * 2014-06-03 2016-02-24 小米科技有限责任公司 Filming control method, device and terminal
CN104065878A (en) * 2014-06-03 2014-09-24 小米科技有限责任公司 Method, device and terminal for controlling shooting
CN104065932B (en) * 2014-06-30 2019-08-13 东南大学 A kind of non-overlapping visual field target matching method based on amendment weighting bigraph (bipartite graph)
CN104065932A (en) * 2014-06-30 2014-09-24 东南大学 Disjoint-view object matching method based on corrected weighted bipartite graph
CN105979209A (en) * 2016-05-31 2016-09-28 浙江大华技术股份有限公司 Monitoring video display method and monitoring video display device
CN106339725A (en) * 2016-08-31 2017-01-18 天津大学 Pedestrian detection method based on scale constant characteristic and position experience
CN109478328A (en) * 2016-09-30 2019-03-15 富士通株式会社 Method for tracking target, device and image processing equipment
CN106683308A (en) * 2017-01-06 2017-05-17 天津大学 Event recognition photoelectric information fusion perception device and method
CN109276819A (en) * 2017-07-20 2019-01-29 株式会社东芝 Information processing unit, information processing system and storage medium
CN111052753A (en) * 2017-08-30 2020-04-21 Vid拓展公司 Tracking video scaling
CN107562203A (en) * 2017-09-14 2018-01-09 北京奇艺世纪科技有限公司 A kind of input method and device
CN110324690A (en) * 2018-03-30 2019-10-11 青岛海信电器股份有限公司 A kind of method of displaying target object, display device and display equipment
CN110503662A (en) * 2019-07-09 2019-11-26 科大讯飞(苏州)科技有限公司 Tracking and Related product
WO2021093534A1 (en) * 2019-11-12 2021-05-20 Oppo广东移动通信有限公司 Subject detection method and apparatus, electronic device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN101170683B (en) 2010-09-08
JP4915655B2 (en) 2012-04-11
JP2008113071A (en) 2008-05-15

Similar Documents

Publication Publication Date Title
CN101170683B (en) Target moving object tracking device
US8194134B2 (en) Target moving object tracking device
US11789545B2 (en) Information processing device and method, program and recording medium for identifying a gesture of a person from captured image data
CN111415461B (en) Article identification method and system and electronic equipment
EP1158309B1 (en) Method and Apparatus for position detection
JP5603403B2 (en) Object counting method, object counting apparatus, and object counting program
CN103069796B (en) For counting the device of order calibration method and the multiple sensors of use
US10423838B2 (en) Method for analysing the spatial extent of free queues
CN107238834A (en) Target Tracking System for use radar/vision fusion of automotive vehicle
US8213679B2 (en) Method for moving targets tracking and number counting
US8472715B2 (en) Situation determining apparatus, situation determining method, situation determining program, abnormality determining apparatus, abnormality determining method, abnormality determining program, and congestion estimating apparatus
CN110609281B (en) Region detection method and device
CN101681424B (en) Sequential image alignment
CN101014975B (en) Object detector
CN108985359A (en) A kind of commodity recognition method, self-service machine and computer readable storage medium
CN103577827B (en) Pattern recognition device and lift appliance
CN103003842A (en) Moving-body detection device, moving-body detection method, moving-body detection program, moving-body tracking device, moving-body tracking method, and moving-body tracking program
JP4978099B2 (en) Self-position estimation device
US20100166259A1 (en) Object enumerating apparatus and object enumerating method
CN108596128A (en) Object identifying method, device and storage medium
JP2003517910A (en) A method for learning-based object detection in cardiac magnetic resonance images
CN101571914A (en) Abnormal behavior detection device
CN101526996A (en) Method of mouse spontaneous behavior motion monitoring and posture image recognition
CN107452015A (en) A kind of Target Tracking System with re-detection mechanism
CN110263692A (en) Container switch gate state identification method under large scene

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100908

Termination date: 20170829