JPH07249138A - Residence time measuring method - Google Patents

Residence time measuring method

Info

Publication number
JPH07249138A
JPH07249138A (application JP6038866A)
Authority
JP
Japan
Prior art keywords
person
camera
time
room
leaving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP6038866A
Other languages
Japanese (ja)
Inventor
Hideki Koike
秀樹 小池
Akira Tomono
明 伴野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp
Priority to JP6038866A
Publication of JPH07249138A
Legal status: Pending


Landscapes

  • Measurement Of Unknown Time Intervals (AREA)
  • Time Recorders, Drive Recorders, Access Control (AREA)

Abstract

PURPOSE: To measure, with high precision, the residence time of individual persons for marketing purposes. CONSTITUTION: Images taken by a camera for entering persons and a camera for leaving persons are input (201), and the input time is acquired (202) and held in a time holding part for entering persons 121 or a time holding part for leaving persons 122. A feature extraction process (203) generates a feature vector of the input image, which is held in a feature holding part for entering persons 123 or a feature holding part for leaving persons 124. When the image of a leaving person is input, the feature vectors in the holding parts 123 and 124 are collated in an identification process (207). In a residence time measuring process (208), the acquisition times of the matched pair of images are read from the holding parts 121 and 122, and the residence time of the leaving person is calculated from their difference.

Description

Detailed Description of the Invention

[0001]

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a residence time measuring method, and more particularly to a method for measuring how long individual persons stay in a store or the like, for marketing purposes.

[0002]

2. Description of the Related Art

Conventionally, the residence time of persons in a store or the like has been measured either manually, by teams of observers, or estimated only from counts of entries and exits taken at fixed intervals.

[0003]

Problems to Be Solved by the Invention

In the prior art, manual observation makes continuous measurement and statistical processing difficult and is costly, while methods that look only at the numbers of entries and exits in each fixed interval cannot produce accurate results.

[0004]

An object of the present invention is to solve these problems and to realize a residence time measuring method that is continuous, highly accurate, and capable of measurement broken down by attribute.

[0005]

Means for Solving the Problems

The most important feature of the present invention is that individual persons are matched between two kinds of images, taken by a camera for entering persons and a camera for leaving persons, and the residence time is calculated from the difference between the acquisition times of the two matched images.

[0006]

Operation

In the present invention, individual persons are matched and the residence time of each person is measured. This improves the accuracy of the residence time measurement and also makes it possible to measure the residence time of a specific individual.

[0007]

Embodiments

Embodiments of the present invention will now be described with reference to the drawings.

[0008]

FIG. 1 is an overall block diagram common to the embodiments of the present invention, in which 101 is a camera for entering persons, 102 is a camera for leaving persons, 103 is a clock, 110 is a residence time measuring device, and 120 is a memory device. The residence time measuring device 110 comprises an image input unit 111, a time acquisition unit 112, a feature extraction unit 113, an identification unit 114, and a residence time measuring unit 115. The memory device 120 has storage areas 121 to 125: a time holding unit 1 for entering persons, a time holding unit 2 for leaving persons, a feature holding unit 1 for entering persons, a feature holding unit 2 for leaving persons, and an attribute holding unit.

[0009]

FIG. 9 shows an installation example of the cameras 101 and 102: camera 101 photographs a person 901 entering the room, and camera 102 photographs a person 902 leaving it. The residence time measuring device 110 of FIG. 1 takes the images from the cameras 101 and 102 as input, uses the memory device 120 to hold the various data, and measures the time from entry to exit as the residence time. Each embodiment is described below.

[0010]

<Embodiment 1> This embodiment corresponds to claim 1 and is the basis of the other embodiments; the overall processing flow is described with reference to FIG. 2. The image input unit 111 inputs an image taken by camera 101 or camera 102 (step 201), and the time acquisition unit 112 obtains the input time from the clock 103 (step 202) and holds it in the time holding unit 1 (121) or the time holding unit 2 (122) of the memory device 120, according to whether the image came from camera 101 or camera 102. Next, the feature extraction unit 113 extracts the person's features from the image (step 203). Specifically, the region from which features are extracted is determined by the region detection of step 204; a mosaic image is created by applying mosaic processing to that region in the mosaic image creation of step 205; a feature vector is generated from the mosaic image in the feature vector generation of step 206; and the vector is held in the feature holding unit 1 (123) or the feature holding unit 2 (124), again according to the camera. When a leaving person appears, the identification unit 114 collates the feature vector held in the feature holding unit 2 (124) against the feature vectors held in the feature holding unit 1 (123) (step 207). The residence time measuring unit 115 then reads from the time holding unit 1 (121) the time at which the matched entry image was acquired, takes its difference from the time held in the time holding unit 2 (122) at which the leaving person appeared, and so calculates the residence time (step 208).
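As a concrete sketch, the pipeline of Embodiment 1 might look roughly as follows in Python. The mosaic block size, the Euclidean distance, and the plain in-memory list standing in for the holding units 121 to 124 are all illustrative assumptions, not details taken from the patent:

```python
import math

def mosaic_feature(image, block=4):
    """Coarsen a grayscale image (list of rows of intensities) into a
    'mosaic' of block averages and flatten it into a feature vector."""
    h, w = len(image), len(image[0])
    vec = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cells = [image[y][x]
                     for y in range(by, min(by + block, h))
                     for x in range(bx, min(bx + block, w))]
            vec.append(sum(cells) / len(cells))
    return vec

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class DwellTimer:
    """Minimal sketch of Embodiment 1: hold (time, feature) per entrant,
    match each leaver to the nearest stored entrant, return the time gap."""
    def __init__(self):
        self.entrants = []  # stands in for holding units 121 and 123

    def enter(self, timestamp, image):
        self.entrants.append((timestamp, mosaic_feature(image)))

    def leave(self, timestamp, image):
        feat = mosaic_feature(image)
        # identification step: nearest feature vector among unmatched entrants
        idx = min(range(len(self.entrants)),
                  key=lambda i: euclidean(self.entrants[i][1], feat))
        t_in, _ = self.entrants.pop(idx)
        return timestamp - t_in  # residence time
```

For example, entering a dark image at t=0 and a bright one at t=10, then leaving with a near-dark image at t=100, would match the first entrant and report a residence time of 100.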

[0011]

<Embodiment 2> This embodiment corresponds to claim 2 and is described with reference to FIG. 3. As in Embodiment 1, the time of image input and the feature vector for matching are held in the areas 121 to 124 of the memory device 120. In this embodiment, the feature extraction unit 113 additionally performs a head attribute extraction process to extract hairstyle information (step 307): the head region is detected in step 308, a mosaic image of the head region is created in step 309, and a feature vector of the head region is generated in step 310. The hairstyle is then classified from this feature vector, and the classification result is held in the attribute holding unit 125 of the memory device 120. When a leaving person appears, the identification unit 114 performs matching as in Embodiment 1 (step 311), and the residence time measuring unit 115 records the residence time under the attribute value held in the attribute holding unit 125 and compiles statistical data (step 312).

[0012]

<Embodiment 3> This embodiment also corresponds to claim 2 and is described with reference to FIG. 4. As in Embodiment 1, the time of image input and the feature vector for matching are held in the areas 121 to 124 of the memory device 120. In this embodiment, the feature extraction unit 113 additionally performs a clothing attribute extraction process to extract clothing information (step 407): the lower-body region is detected in step 408 and extracted from the background in step 409, and the width of the ankle portion is measured in the foot-width detection of step 410 to judge, for example, whether the person is wearing a skirt or trousers. The judgment result is held in the attribute holding unit 125 of the memory device 120. When a leaving person appears, the identification unit 114 performs matching as in Embodiment 1 (step 411), and the residence time measuring unit 115 records the residence time under the attribute value held in the attribute holding unit 125 and compiles statistical data (step 412).
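The attribute-keyed statistics of Embodiments 2 and 3 amount to bucketing each matched leaver's residence time under a label such as a hairstyle class or a skirt/trousers judgment. A minimal sketch, with class and method names invented for illustration:

```python
from collections import defaultdict

class AttributeDwellStats:
    """Bucket residence times by an attribute value (e.g. a hairstyle
    class from step 310, or the skirt/trousers judgment of step 410)
    and report simple per-bucket statistics."""
    def __init__(self):
        self._times = defaultdict(list)

    def record(self, attribute, dwell_seconds):
        self._times[attribute].append(dwell_seconds)

    def mean(self, attribute):
        vals = self._times[attribute]
        return sum(vals) / len(vals) if vals else None

    def report(self):
        # mean residence time per attribute value
        return {a: sum(v) / len(v) for a, v in self._times.items()}
```

Recording 120 s and 60 s under "skirt" and 30 s under "trousers" would, for instance, report a mean of 90 s for "skirt".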

[0013]

<Embodiment 4> This embodiment corresponds to claim 3 and is described with reference to FIGS. 5, 8, and 9. As in Embodiment 1, the image input unit 111 inputs an image taken by the camera 101 for entering persons or the camera 102 for leaving persons (step 501), the time acquisition unit 112 obtains the input time and holds it in the time holding unit 121 or 122 (step 502), and the feature extraction unit 113 extracts the person's features from the image and holds them in the feature holding unit 123 or 124 (step 503).

[0014]

In this embodiment, the device next checks whether a fixed time has elapsed since the previous identification (step 504); if not, processing returns to the start, and if so, it proceeds to the identification unit 114 (step 505). The identification unit 114 first computes, using the features extracted in step 503, the distances between each person who left during the interval from the previous identification (tn-1 in FIG. 8) to the present (tn), called the target interval, and every person who entered during or before the target interval and had not yet been paired with a leaver at the previous identification (step 506). For each leaver it then checks whether several entrants are at the minimum distance (step 507); if not, the entrant at minimum distance is selected for that leaver (step 508), and if so, the entrant with the earliest entry time is selected (step 513). Next it checks whether the same entrant was selected by more than one leaver (step 509). If not, the residence time measuring unit 115 measures the residence times from the entrant-leaver pairs already obtained (step 515). If so, the distances between that entrant and the leavers who selected it are compared (step 510), and it is checked whether several leavers are at the minimum distance (step 511); if not, the leaver at minimum distance is paired with the entrant, and if so, the leaver with the earliest exit time is paired with it, after which the residence time measuring unit 115 measures the residence times (step 515).

[0015]

FIG. 8 shows an example of the pairing. For the leavers 810 to 816 in the target interval, the pairs (810, 806), (811, 805), (812, 804), (813, 803), (814, 807), (815, 801), and (816, 809) are formed, while the entrants 802 and 808 remain unpaired until the next identification.

[0016]

In this embodiment, the persons who left within a fixed time are matched against the entrants as a group, and the pairing is chosen so that the overall degree of matching across all leavers in that time is high. This makes the residence time measurement more accurate than pairing each person individually at the moment of leaving.
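The batch-pairing rules of steps 506 to 515 can be sketched as follows. The iterative resolution of contested entrants is an assumed reading of the flowchart, and `dist` stands for any feature-vector distance (Euclidean distance in the earlier sketch):

```python
def match_batch(leavers, entrants, dist):
    """Pair each leaver in the target interval with an unmatched entrant.

    leavers, entrants: lists of (time, feature_vector).
    Rules of Embodiment 4: each leaver picks its nearest entrant, ties on
    distance going to the earliest entry time; if one entrant is picked by
    several leavers, the closest leaver wins (ties: earliest exit time)
    and the losers repeat the search among the remaining entrants.
    Returns a list of (leaver_index, entrant_index) pairs.
    """
    free = set(range(len(entrants)))
    unmatched = set(range(len(leavers)))
    pairs = []
    while unmatched and free:
        # each unmatched leaver nominates its nearest free entrant
        choice = {
            j: min(free, key=lambda i: (dist(entrants[i][1], leavers[j][1]),
                                        entrants[i][0]))
            for j in unmatched
        }
        # resolve each nominated entrant in favour of the closest leaver
        nominated = {}
        for j, i in choice.items():
            nominated.setdefault(i, []).append(j)
        for i, js in nominated.items():
            winner = min(js, key=lambda j: (dist(entrants[i][1],
                                                 leavers[j][1]),
                                            leavers[j][0]))
            pairs.append((winner, i))
            unmatched.discard(winner)
            free.discard(i)
    return pairs
```

Each residence time is then the leaver's time minus the paired entrant's time. The outer loop terminates because at least one pair is fixed per iteration.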

[0017]

<Embodiment 5> This embodiment corresponds to claim 4 and is described with reference to FIGS. 6, 8, and 9. As in Embodiment 1, the image input unit 111 inputs an image taken by the camera 101 for entering persons or the camera 102 for leaving persons (step 601), and the time acquisition unit 112 obtains and holds the input time (step 602). The feature extraction unit 113 then extracts the person's features from the image and likewise holds them (step 603).

[0018]

In this embodiment, the device next checks whether a fixed number of persons have left since the previous identification (step 604); if not, processing returns to the start, and if so, it proceeds to the identification unit 114 (step 605). The identification unit 114 first computes, using the features extracted in step 603, the distances between each person who left during the interval from the previous identification (tn-1 in FIG. 8) to the present (tn), called the target interval, and every person who entered during or before the target interval and had not yet been paired with a leaver at the previous identification (step 606). For each leaver it then checks whether several entrants are at the minimum distance (step 607); if not, the entrant at minimum distance is selected for that leaver (step 608), and if so, the entrant with the earliest entry time is selected (step 613). Next it checks whether the same entrant was selected by more than one leaver (step 609). If not, the residence time measuring unit 115 measures the residence times from the entrant-leaver pairs already obtained (step 615). If so, the distances between that entrant and the leavers who selected it are compared (step 610), and it is checked whether several leavers are at the minimum distance (step 611); if not, the leaver at minimum distance is paired with the entrant (step 612), and if so, the leaver with the earliest exit time is paired with it (step 614), after which the residence time measuring unit 115 measures the residence times (step 615).

[0019]

In this embodiment, once a fixed number of persons have left, they are matched against the entrants as a group, and the pairing is chosen so that the overall degree of matching across that group of leavers is high. As in Embodiment 4, this makes the residence time measurement more accurate than pairing each person individually at the moment of leaving.

[0020]

<Embodiment 6> This embodiment corresponds to claim 5 and is described with reference to FIGS. 7, 8, and 9. As in Embodiment 1, the image input unit 111 inputs an image taken by the camera 101 for entering persons or the camera 102 for leaving persons (step 701), the time acquisition unit 112 obtains and holds the input time (step 702), and the feature extraction unit 113 extracts N kinds of features of the person from the image and likewise holds them (step 703), where N is a natural number of 2 or more.

[0021]

In this embodiment, the device next checks whether a fixed number of persons have left since the previous identification (step 704); if not, processing returns to the start, and if so, it proceeds to the identification unit 114 (step 705). The identification unit 114 first selects, by Karhunen-Loeve (KL) expansion, M kinds of features suited to discriminating this fixed number of leavers from among the N kinds extracted in step 703 (step 706), where M <= N-1. It then computes, using the selected features, the distances between each person who left during the interval from the previous identification (tn-1 in FIG. 8) to the present (tn), called the target interval, and every person who entered during or before the target interval and had not yet been paired with a leaver at the previous identification (step 707). For each leaver it checks whether several entrants are at the minimum distance (step 708); if not, the entrant at minimum distance is selected for that leaver (step 709), and if so, the entrant with the earliest entry time is selected (step 714). Next it checks whether the same entrant was selected by more than one leaver (step 710). If not, the residence time measuring unit 115 measures the residence times from the entrant-leaver pairs already obtained (step 716). If so, the distances between that entrant and the leavers who selected it are compared (step 711), and it is checked whether several leavers are at the minimum distance (step 712); if not, the leaver at minimum distance is paired with the entrant (step 713), and if so, the leaver with the earliest exit time is paired with it (step 715), after which the residence time measuring unit 115 measures the residence times (step 716).

[0022]

In this embodiment, by selecting the features best suited to discriminating the group of leavers being matched, the degree of matching against the entrants can be made high for the same person and low for a different person.
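The feature-selection step (706) keeps, out of the N stored feature kinds, the M that best spread out the current batch of leavers. A dependency-free stand-in is to rank individual feature coordinates by their variance over the batch; note this is a deliberately simplified, axis-aligned version, since the patent's KL expansion would rank orthogonal directions (eigenvectors of the batch covariance) by variance rather than raw coordinates:

```python
def select_discriminative_features(leaver_feats, m):
    """Pick the m feature coordinates with the highest variance across
    the batch of leavers (simplified stand-in for KL expansion).

    leaver_feats: list of N-dimensional feature vectors, one per leaver.
    Returns the sorted indices of the m selected coordinates.
    """
    n = len(leaver_feats[0])
    cols = list(zip(*leaver_feats))  # one tuple per feature coordinate

    def variance(col):
        mu = sum(col) / len(col)
        return sum((x - mu) ** 2 for x in col) / len(col)

    ranked = sorted(range(n), key=lambda k: variance(cols[k]), reverse=True)
    return sorted(ranked[:m])

def project(vec, keep):
    """Restrict a feature vector to the selected coordinates before
    computing distances in step 707."""
    return [vec[k] for k in keep]
```

A constant coordinate (zero variance) carries no information for telling the leavers apart, so it is dropped first.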

[0023]

<Embodiment 7> This embodiment corresponds to claim 6. In Embodiments 4 and 5, from the several kinds of features possessed by each person in the group that appeared on the entry camera 101 and is a candidate for pairing at a given identification, only the features suited to discriminating the individual persons in that group are selectively used when pairing them with the persons who appeared on the exit camera 102 and are candidates for pairing at that identification. By selecting the features best suited to discriminating the entrants being matched at each identification, the degree of matching against the leavers can be made high for the same person and low for a different person.

[0024]

Although each embodiment above uses two cameras, one for entry and one for exit, the number of cameras is not limited to two: with three or more cameras, any two can be chosen, one treated as the entry camera and the other as the exit camera, and the residence time between those two cameras can be measured.

[0025]

Effects of the Invention

According to the invention of claim 1, the times and feature vectors of entering and leaving persons are detected and collated, which improves the accuracy of the measured residence time and makes it possible to measure the residence time of a specific individual.

[0026]

According to the invention of claim 2, by generating feature vectors of, for example, head images or clothing images, residence times can be measured by sex, by hairstyle, or by clothing.

[0027]

According to the invention of claim 3, the persons who left within a fixed time are matched against the entrants as a group, and the pairing is chosen so that the overall degree of matching across all leavers in that time is high, which makes the residence time measurement more accurate than pairing each person individually at the moment of leaving.

[0028]

According to the invention of claim 4, once a fixed number of persons have left, they are matched against the entrants as a group, and the pairing is chosen so that the overall degree of matching across that group is high, which likewise makes the residence time measurement more accurate than pairing each person individually at the moment of leaving.

[0029]

According to the invention of claim 5, by selecting the features best suited to discriminating the group of leavers being matched, the degree of matching against the entrants can be made high for the same person and low for a different person.

[0030]

According to the invention of claim 6, by selecting the features best suited to discriminating the entrants being matched at each identification, the degree of matching against the leavers can be made high for the same person and low for a different person.

Brief Description of the Drawings

FIG. 1 is a block diagram of an embodiment of the present invention.

FIG. 2 is a diagram showing the processing flow of the first embodiment of the present invention.

FIG. 3 is a diagram showing the processing flow of the second embodiment of the present invention.

FIG. 4 is a diagram showing the processing flow of the third embodiment of the present invention.

FIG. 5 is a diagram showing the processing flow of the fourth embodiment of the present invention.

FIG. 6 is a diagram showing the processing flow of the fifth embodiment of the present invention.

FIG. 7 is a diagram showing the processing flow of the sixth embodiment of the present invention.

FIG. 8 is an explanatory diagram of the pairing in the fourth to sixth embodiments of the present invention.

FIG. 9 is a diagram showing an installation example of the cameras.

Explanation of Symbols

101 camera for entering persons; 102 camera for leaving persons; 111 image input unit; 112 time acquisition unit; 113 feature extraction unit; 114 identification unit; 115 residence time measuring unit; 121, 122 time holding units; 123, 124 feature holding units; 125 attribute holding unit

Claims (6)

[Claims]

1. A residence time measuring method characterized in that: images are input from a camera for entering persons and a camera for leaving persons; it is detected on which camera a person has appeared; the image at the moment the person's appearance is detected, and that time, are stored; the stored image is processed to extract a feature vector; feature vectors obtained from images of one camera are collated with feature vectors obtained from images of the other camera; and the residence time of the person is measured from the times corresponding to the two matched images.
2. The residence time measuring method according to claim 1, characterized in that a feature vector of an attribute of the person is generated and residence times are measured for each attribute of the person.
3. The residence time measuring method according to claim 1, wherein time is divided into fixed intervals, and each person who appears in the camera for persons leaving the room during an interval is associated with a person who appeared in the camera for persons entering the room during that interval or earlier and who has not yet been associated.
4. The residence time measuring method according to claim 1, wherein, each time the persons who have appeared in the camera for persons leaving the room reach a fixed number, those persons are associated with persons who appeared in the camera for persons entering the room before that point in time and who have not yet been associated.
5. The residence time measuring method according to claim 3 or 4, wherein, from the several kinds of features possessed by each person in the group of persons who appeared in the camera for persons leaving the room and who are subject to association at each point in time, only those features suited to discriminating the individual persons belonging to the group are selectively used to associate them with persons who appeared in the camera for persons entering the room and who have not yet been associated.
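One plausible reading of claim 5 is variance-based feature selection: among the leaving persons currently awaiting association, keep only the feature dimensions that vary most within the group, since those are the ones that can tell its members apart. A hedged sketch (the function name and the variance criterion are assumptions, not taken from the patent):

```python
def select_discriminative_features(group_vectors, n_keep):
    """Keep the n_keep feature dimensions with the largest variance
    within the group, i.e. those best suited to discriminating the
    individual persons belonging to the group (one reading of claim 5)."""
    n = len(group_vectors)
    dims = len(group_vectors[0])
    variances = []
    for d in range(dims):
        vals = [v[d] for v in group_vectors]
        mean = sum(vals) / n
        variances.append(sum((x - mean) ** 2 for x in vals) / n)
    # Indices of the n_keep highest-variance dimensions.
    return sorted(range(dims), key=lambda d: -variances[d])[:n_keep]
```

A dimension on which everyone in the group has the same value (variance 0) is dropped first, since it cannot separate the group's members.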
6. The residence time measuring method according to claim 3 or 4, wherein, from the several kinds of features possessed by each person in the group of persons who appeared in the camera for persons entering the room and who are subject to association at each point in time, only those features suited to discriminating the individual persons belonging to the group are selectively used to associate them with the persons subject to association at each point in time when they appear in the camera for persons leaving the room.
JP6038866A 1994-03-09 1994-03-09 Residence time measuring method Pending JPH07249138A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP6038866A JPH07249138A (en) 1994-03-09 1994-03-09 Residence time measuring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP6038866A JPH07249138A (en) 1994-03-09 1994-03-09 Residence time measuring method

Publications (1)

Publication Number Publication Date
JPH07249138A true JPH07249138A (en) 1995-09-26

Family

ID=12537137

Family Applications (1)

Application Number Title Priority Date Filing Date
JP6038866A Pending JPH07249138A (en) 1994-03-09 1994-03-09 Residence time measuring method

Country Status (1)

Country Link
JP (1) JPH07249138A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20000029317A (en) * 1998-10-27 2000-05-25 구보 마쯔오 Method of and device for acquiring information on the traffic line of persons
US6654047B2 (en) 1998-10-27 2003-11-25 Toshiba Tec Kabushiki Kaisha Method of and device for acquiring information on a traffic line of persons
JP2001320698A (en) * 2000-05-12 2001-11-16 Nippon Signal Co Ltd:The Image type monitoring method, and image type monitoring device and safety system using it
JP4481432B2 (en) * 2000-05-12 2010-06-16 日本信号株式会社 Image-type monitoring method, image-type monitoring device, and safety system using the same
JP2004187116A (en) * 2002-12-05 2004-07-02 Casio Comput Co Ltd Action monitoring system and program
CN103238171A (en) * 2011-02-18 2013-08-07 三菱电机株式会社 Room entry/exit management device and room entry/exit management system using same
WO2012111146A1 (en) * 2011-02-18 2012-08-23 三菱電機株式会社 Room entry/exit management device and room entry/exit management system using same
JP5516761B2 (en) * 2011-02-18 2014-06-11 三菱電機株式会社 Entrance / exit management device and entrance / exit management system using the same
JPWO2012111146A1 (en) * 2011-02-18 2014-07-03 三菱電機株式会社 Entrance / exit management device and entrance / exit management system using the same
CN103238171B (en) * 2011-02-18 2015-03-25 三菱电机株式会社 Room entry/exit management device and room entry/exit management system using same
US10115140B2 (en) 2013-11-07 2018-10-30 Panasonic Intellectual Property Management Co., Ltd. Customer management device, customer management system and customer management method
JP2015232791A (en) * 2014-06-10 2015-12-24 パナソニックIpマネジメント株式会社 Customer management device, customer management system, and customer management method
US11210528B2 (en) 2018-10-18 2021-12-28 Canon Kabushiki Kaisha Information processing apparatus, information processing method, system, and storage medium to determine staying time of a person in predetermined region
CN112002039A (en) * 2020-08-22 2020-11-27 王冬井 Automatic control method for file cabinet door based on artificial intelligence and human body perception

Similar Documents

Publication Publication Date Title
Phillips et al. The FERET evaluation methodology for face-recognition algorithms
JP3584334B2 (en) Human detection tracking system and human detection tracking method
KR101291899B1 (en) Procedure for identifying a person by eyelash analysis and acquisition device using thereof
USRE36041E (en) Face recognition system
US6751340B2 (en) Method and apparatus for aligning and comparing images of the face and body from different imagers
US11138420B2 (en) People stream analysis method, people stream analysis apparatus, and people stream analysis system
Yan et al. Multi-biometrics 2D and 3D ear recognition
CN107615298A (en) Face identification method and system
JPH0795625A (en) System and method for measurement of viewing and listening
US20110150278A1 (en) Information processing apparatus, processing method thereof, and non-transitory storage medium
JP4159159B2 (en) Advertising media evaluation device
JPH11175724A (en) Person attribute identifying device
JPH07249138A (en) Residence time measuring method
CN108765014A (en) A kind of intelligent advertisement put-on method based on access control system
JP2003263641A (en) Movement analyzing device
Ravi et al. A study on face recognition technique based on Eigenface
Ivanov et al. Error weighted classifier combination for multi-modal human identification
CN111428594A (en) Identity authentication method and device, electronic equipment and storage medium
JP2002208011A (en) Image collation processing system and its method
JP2008065651A (en) Face image authentication method, face image authentication apparatus and program
Al Eidan Hand biometrics: Overview and user perception survey
JP4577771B2 (en) Face image recognition device
CN110852161A (en) Method, device and system for identifying identity of person in motion in real time
JP7098180B2 (en) Information processing equipment, information processing methods and information processing programs
Selvan et al. E-Voting Based on Face Recognition Using KNN Classifier