CN109284404A - Method for matching scene coordinates in real-time video with geographic information - Google Patents


Info

Publication number
CN109284404A
CN109284404A (application number CN201811044511.8A)
Authority
CN
China
Prior art keywords
scene
shooting subject
geographic information
real-time
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811044511.8A
Other languages
Chinese (zh)
Inventor
李晓倩
庾农
韩彦
向子荣
侯睿
常乐
邓博杨
董笑宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Chuangjiang Information Technology Co Ltd
Original Assignee
Chengdu Chuangjiang Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Chuangjiang Information Technology Co Ltd filed Critical Chengdu Chuangjiang Information Technology Co Ltd
Priority to CN201811044511.8A priority Critical patent/CN109284404A/en
Publication of CN109284404A publication Critical patent/CN109284404A/en
Pending legal-status Critical Current

Landscapes

  • Studio Devices (AREA)

Abstract

The invention belongs to the field of information technology and discloses a method for matching scene coordinates in real-time video with geographic information, comprising five main steps. S1: establish a scene geographic information storage table containing the geographic information of multiple scenes, the geographic information including each scene's geographic coordinates. S2: read the real-time geographic coordinates and shooting height of the shooting subject. S3: according to the shooting subject's geographic coordinates and shooting height and each scene's geographic coordinates, filter out the scenes within the shooting range as target scenes. S4: calculate the screen coordinate C(x, y) of each target scene in the real-time video from the shooting subject's real-time geographic coordinates, shooting height, and the target scene's geographic coordinates. S5: retrieve the scene geographic information storage table to obtain the geographic information of the target scene, and label screen coordinate C(x, y) with that geographic information. The invention solves the problem that existing video positioning technology is highly dependent on image processing techniques.

Description

Method for matching scene coordinates in real-time video with geographic information
Technical field
The invention belongs to the field of information technology, and in particular relates to a method for matching scene coordinates in real-time video with geographic information.
Background technique
Existing methods for locating targets in real-time video depend on image processing techniques. They fall into three broad categories: target positioning based on contrast analysis, target positioning based on matching, and target positioning based on motion detection.
Target positioning based on contrast analysis uses the contrast difference between target and background to detect and locate the target.
Target positioning based on matching locates the target mainly through feature matching between consecutive frames. A feature is an attribute that distinguishes an object from others; it must be discriminative, reliable, independent, and sparse. During positioning, target features must be extracted and then found in each frame. The features commonly used in target positioning include geometric shape, subspace features, appearance contour, and feature points.
Target positioning based on motion detection detects and locates the target according to the difference between target motion and background motion. Such algorithms find the region where the target exists by detecting the different motions of target and background in an image sequence. They require neither inter-frame pattern matching nor transmission of target motion parameters between frames; they only need to highlight the difference between target and non-target in the time or spatial domain. These algorithms can detect multiple targets and can therefore be used for multi-target detection and positioning; typical methods include the inter-frame difference method, background estimation, energy accumulation, and motion field estimation.
The above positioning methods all process each captured image before positioning, so they place high demands on image quality, illumination conditions, and image processing hardware. The number of video feeds in present-day city management is large, often hundreds or thousands of channels, so the hardware investment is especially high. A method is therefore needed that can locate objects in video without relying on high-end image processing hardware.
Summary of the invention
To solve the above problems in the prior art, the object of the present invention is to provide a method for automatically matching scene coordinates in real-time video with geographic information.
The technical scheme adopted by the invention is as follows. A method for matching scene coordinates in real-time video with geographic information comprises the following steps:
S1: establish a scene geographic information storage table, the table containing the geographic information of multiple scenes, the geographic information including geographic coordinates;
S2: read the real-time geographic coordinates and shooting height of the shooting subject;
S3: according to the shooting subject's real-time geographic coordinates and shooting height and each scene's geographic coordinates, filter out the scenes within the shooting range as target scenes;
S4: calculate the screen coordinate C(x, y) of the target scene in the real-time video according to the shooting subject's real-time geographic coordinates, shooting height, and the target scene's geographic coordinates;
S5: retrieve the scene geographic information storage table to obtain the geographic information of the target scene, and match the geographic information of the target scene with screen coordinate C(x, y).
In this scheme, the geographic information of each scene is pre-stored in the scene geographic information storage table. Using the relative relationship between each scene's geographic coordinates and the shooting subject's geographic coordinates and shooting height, the screen coordinate of each scene in the real-time video produced by the shooting subject is calculated, and the scene's geographic information is matched to the corresponding screen coordinate, so the target scene can be found quickly in the real-time video. This way of positioning a target scene does not rely on high-end image processing equipment to identify the target scene, which reduces recognition cost and false-positive rate. Moreover, the real-time video then contains not only the image of the object but also its geographic information, which enriches the video content.
Preferably, in step S1, the geographic information of each scene further includes one or more of scene name, scene number, and scene area. In this preferred scheme, the displayed video content can be further enriched: for example, setting the label to the scene name helps a supervisor identify objects, while setting it to the scene area narrows the supervisor's search area for an object.
Preferably, the method of the present invention further includes step S6: use the scene name as the label and annotate screen coordinate C(x, y) with it, so that the scene's geographic information is displayed on screen more intuitively.
Preferably, in step S2, the real-time geographic coordinates and shooting height of the shooting subject are obtained through a MOD (Moving Object Detection) interface; specifically, the real-time shooting height h, real-time longitude λB, and real-time latitude φB of the shooting subject are read through the MOD interface. Obtaining the geographic coordinates and shooting height of the shooting subject through a MOD interface is prior art and is not described further here.
Preferably, in step S3, whether a scene is within the shooting range is judged by the following steps:
S31: calculate the horizontal distance s between the shooting subject and each scene:
s = 111.12 · arccos[sin φA · sin φB + cos φA · cos φB · cos(λB − λA)]
where λA is the longitude of the scene and φA its latitude; λB is the real-time longitude of the shooting subject and φB its real-time latitude;
S32: calculate the spatial distance w between the shooting subject and each scene: w = √(s² + h²), where h is the real-time shooting height of the shooting subject;
S33: compare the spatial distance w with the maximum visual distance Wmax of the shooting subject; if w ≤ Wmax, the scene is within the shooting range.
In this preferred scheme, scenes outside the shooting subject's shooting range are screened out, reducing the workload of the subsequent matching calculations.
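The screening in steps S31-S33 can be sketched in a few lines of Python. This is only an illustrative sketch, not the patented implementation: it assumes the constant 111.12 represents kilometres per degree of arc (so the arccos result is taken in degrees), and the function names are invented for the example.

```python
import math

def horizontal_distance_km(lat_a, lon_a, lat_b, lon_b):
    """S31: great-circle distance s between scene A and shooting subject B.

    Assumes 111.12 km per degree of arc, with the arccos taken in degrees.
    """
    pa, pb = math.radians(lat_a), math.radians(lat_b)
    dl = math.radians(lon_b - lon_a)
    # Spherical law of cosines; clamp to [-1, 1] against rounding error.
    c = math.sin(pa) * math.sin(pb) + math.cos(pa) * math.cos(pb) * math.cos(dl)
    return 111.12 * math.degrees(math.acos(max(-1.0, min(1.0, c))))

def in_shooting_range(s, h, w_max):
    """S32/S33: spatial distance w = sqrt(s^2 + h^2), compared with Wmax."""
    return math.hypot(s, h) <= w_max
```

For example, a scene 3 km away horizontally, seen from a shooting height of 4 km, has spatial distance w = 5 km and is within range only if Wmax is at least 5 km.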
Preferably, the travel direction of the shooting subject is always forward, and as the shooting subject advances, the screen coordinate y value of the target scene gradually decreases. The reason for this restriction is that, with the travel direction limited to forward only (never backward), the image of a target scene always enters the screen from the top (i.e., at the line y = yn), which guarantees that the screen coordinate y value of the target scene is uniquely determined.
Under this condition, in step S4, screen coordinate C(x, y) is calculated as:
y = yn − (Wmax − w) · (yn − y0) / Wmax
x = x0 + (E − s) · (xn − x0) / E (target scene on the left of the travel direction)
x = xn − (E − s) · (xn − x0) / E (target scene on the right of the travel direction)
where s is the horizontal distance between the shooting subject and the target scene, w = √(s² + h²) is their spatial distance, h is the real-time shooting height of the shooting subject, Wmax is the maximum visual distance of the shooting subject, yn is the maximum Y-axis scale value of the screen, y0 the minimum Y-axis scale value, xn the maximum X-axis scale value, x0 the minimum X-axis scale value, and E is the horizontal width of the shooting range.
In this preferred scheme, the x and y values of the target scene's screen coordinate are expressed as functions of s and h, so the on-screen display coordinate of the target scene is obtained simply by calculating the real-time horizontal distance between the shooting subject and the target scene together with the shooting subject's real-time shooting height; see the specific embodiment section for the derivation.
Preferably, the shooting subject is mounted on an aircraft, and the angle between the lens face of the shooting subject and the flight direction of the aircraft is between 90° and 180°.
Preferably, the aircraft is a manned aircraft or an unmanned aerial vehicle.
The beneficial effects of the present invention are:
1. The present invention obtains the on-screen display coordinate of the target scene by calculating the distance between the shooting subject and the target scene, and thereby completes the positioning of the target scene. Compared with existing video positioning methods, it requires no high-end image processing techniques or equipment, reducing investment cost.
2. Compared with the prior art, since the method does not need to process every frame of the image, processing speed is improved, enabling fast positioning and labeling and making the method suitable for information-intensive city supervision systems.
3. The method matches the screen coordinate of the target scene with the stored geographic information, making the video content richer and helping monitoring personnel become familiar with the monitored environment.
Detailed description of the invention
Fig. 1 is a flow diagram of the steps of the method of the present invention;
Fig. 2 is a schematic diagram of the scene geographic information storage table;
Fig. 3 is a schematic diagram of the positional relationship between the shooting subject and the target scene;
Fig. 4 is a schematic diagram of the screen playing the real-time video;
Fig. 5 is a schematic diagram of the shooting range of the shooting subject.
In the figures: 1: shooting subject; 2: target scene; 3: shooting range; w: spatial distance between the shooting subject and the target scene; h: shooting height; s: horizontal distance between the shooting subject and the target scene; Wmax: maximum visual distance; E: horizontal width of the shooting range.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
Referring to Fig. 1, this embodiment provides a method for matching scene coordinates in real-time video with geographic information, comprising the following steps:
S1: establish a scene geographic information storage table, the table containing the geographic information of multiple scenes, the geographic information including each scene's geographic coordinates (i.e., the longitude and latitude of its geographic location);
S2: read the real-time geographic coordinates and shooting height of the shooting subject;
S3: according to the shooting subject's real-time geographic coordinates and shooting height and each scene's geographic coordinates, filter out the scenes within the shooting range as target scenes;
S4: calculate the screen coordinate C(x, y) of the target scene in the real-time video according to the shooting subject's real-time geographic coordinates, shooting height, and the target scene's geographic coordinates;
S5: retrieve the scene geographic information storage table, obtain the geographic information of the target scene, and match the geographic information of the target scene with screen coordinate C(x, y).
Based on the above method for matching scene coordinates in real-time video with geographic information, this embodiment gives several examples below. Provided they do not contradict one another, these examples may be combined in any way to form a new method for matching scene coordinates in real-time video with geographic information; any method so formed also falls within the protection scope of the present invention.
For example, the scene geographic information storage table may be configured as shown in Fig. 2, with the geographic information of each scene including scene number, scene name, scene area, and scene longitude and latitude. The geographic information may of course also include information such as building area and shooting height.
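As a sketch, the storage table of Fig. 2 can be modelled as a simple keyed record structure. The field names and the sample values below are invented placeholders for illustration, not data from the patent; a real table would be populated from a survey or GIS database.

```python
from dataclasses import dataclass

@dataclass
class SceneRecord:
    number: int        # scene number
    name: str          # scene name
    area: str          # scene area
    longitude: float   # λA in degrees
    latitude: float    # φA in degrees

# Hypothetical entries keyed by scene number.
scene_table = {
    1: SceneRecord(1, "Scene A", "District 1", 104.06, 30.67),
    2: SceneRecord(2, "Scene B", "District 2", 104.08, 30.65),
}

def lookup(number):
    """Step S5: retrieve a scene's geographic information by scene number."""
    return scene_table.get(number)
```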
For example, the scene name may be used as the label with which screen coordinate C(x, y) is annotated, so that the geographic information at screen coordinate C(x, y) in the real-time video is displayed more intuitively.
For example, whether a scene is within the shooting range may be judged by the following steps:
S31: calculate the horizontal distance s between the shooting subject and each scene:
s = 111.12 · arccos[sin φA · sin φB + cos φA · cos φB · cos(λB − λA)]
where λA is the longitude of the scene and φA its latitude; λB is the real-time longitude of the shooting subject and φB its real-time latitude;
S32: referring to Fig. 3, calculate the spatial distance w between shooting subject 1 and each scene 2: w = √(s² + h²), where h is the real-time shooting height of shooting subject 1;
S33: compare the spatial distance w with the maximum visual distance Wmax of the shooting subject; if w ≤ Wmax, the scene is within the shooting range.
For example, the screen playing the real-time video is divided into coordinate regions as shown in Fig. 4, and the travel direction of the shooting subject is kept always forward (if it needs to go back, the shooting subject first turns 180° and then continues forward), so that the image of a target scene always enters the screen at the line y = yn.
Assume that at time t1 the on-screen y coordinate of the target scene is yn. This indicates that the target scene has just entered the shooting range of the shooting subject, i.e., the real spatial distance between the shooting subject and the target scene is Wmax.
Assume that at time t the on-screen y coordinate of the target scene is y, and the real spatial distance between the shooting subject and the target scene is w. While the shooting subject moves from its position at time t1 to its position at time t, the image of the target scene moves on screen from yn to y. Since the displayed image is a proportionally scaled version of the real scene:
Δy / (yn − y0) = Δw / Wmax
that is: (yn − y) / (yn − y0) = (Wmax − w) / Wmax
which gives formula 1: y = yn − (Wmax − w) · (yn − y0) / Wmax
where Δw denotes the change in the spatial distance between the shooting subject and the target scene, and Δy denotes the change in the target scene's screen coordinate y value; h is the shooting height of the shooting subject at time t, and s is the horizontal distance between the shooting subject and the target scene at time t, which can be obtained by reading the shooting subject's geographic coordinates, looking up the scene geographic information storage table, and substituting into the formula s = 111.12 · arccos[sin φA · sin φB + cos φA · cos φB · cos(λB − λA)], where λA is the longitude of the scene and φA its latitude, while λB is the real-time longitude of the shooting subject and φB its real-time latitude.
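Formula 1 can be sanity-checked at its boundary conditions with a few lines of Python (an illustrative check; the function name is invented): a scene just entering the range (w = Wmax) should sit at the top line y = yn, and y should fall toward y0 as w shrinks.

```python
def formula1_y(w, w_max, y_n, y_0):
    """Formula 1: y = yn - (Wmax - w) * (yn - y0) / Wmax."""
    return y_n - (w_max - w) * (y_n - y_0) / w_max

# Boundary behaviour on an assumed 0..1080 screen scale:
print(formula1_y(100.0, 100.0, 1080.0, 0.0))  # w = Wmax: scene at top, 1080.0
print(formula1_y(50.0, 100.0, 1080.0, 0.0))   # halfway in: 540.0
```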
Correspondingly, referring to Fig. 5, let the horizontal width of shooting range 3 be E. Assume that at time t2 the target scene is at the left edge of the screen, i.e., its x value is x0; this indicates that the horizontal distance of the target scene from the shooting subject is E.
Assume that at time t the on-screen x coordinate of the target scene is x, and the real horizontal distance between the shooting subject and the target scene is s. While the shooting subject moves from its position at time t2 to its position at time t, the image of the target scene moves on screen from x0 to x. Since the displayed image is a proportionally scaled version of the real scene:
Δx / (xn − x0) = Δs / E
that is: (x − x0) / (xn − x0) = (E − s) / E
which gives formula 2: x = x0 + (E − s) · (xn − x0) / E
where Δs denotes the change in the horizontal distance between the shooting subject and the target scene, and Δx denotes the change in the target scene's screen coordinate x value; s is the horizontal distance between the shooting subject and the target scene at time t, obtained from the shooting subject's geographic coordinates and the scene geographic information storage table by the horizontal distance formula of step S31.
Similarly, assume that at time t3 the target scene is at the right edge of the screen, i.e., its x value is xn; this indicates that the horizontal distance of the target scene from the shooting subject is E.
Assume that at time t the on-screen x coordinate of the target scene is x, and the real horizontal distance between the shooting subject and the target scene is s. While the shooting subject moves from its position at time t3 to its position at time t, the image of the target scene moves on screen from xn to x. Since the displayed image is a proportionally scaled version of the real scene:
Δx / (xn − x0) = Δs / E
that is: (xn − x) / (xn − x0) = (E − s) / E
which gives formula 3: x = xn − (E − s) · (xn − x0) / E
where s is the horizontal distance between the shooting subject and the target scene at time t, obtained from the shooting subject's geographic coordinates and the scene geographic information storage table by the horizontal distance formula of step S31.
If the target scene is on the left side of the shooting subject's travel direction, its image enters the screen from the left side, and its x value is calculated with formula 2; if it is on the right side, its image enters the screen from the right side, and its x value is calculated with formula 3.
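Formulas 1-3 can be combined into a single screen-coordinate function. The Python below is a hedged sketch: the parameter names and the "side" flag choosing between formulas 2 and 3 are assumptions for illustration, since the patent itself only states the three formulas.

```python
import math

def screen_coordinate(s, h, w_max, e, x_0, x_n, y_0, y_n, side):
    """Compute C(x, y) for a target scene.

    s: horizontal distance to the scene; h: shooting height;
    w_max: maximum visual distance; e: horizontal width of the shooting range;
    side: 'left' or 'right' of the travel direction (formula 2 vs formula 3).
    """
    w = math.hypot(s, h)                                  # step S32
    y = y_n - (w_max - w) * (y_n - y_0) / w_max           # formula 1
    if side == "left":
        x = x_0 + (e - s) * (x_n - x_0) / e               # formula 2
    else:
        x = x_n - (e - s) * (x_n - x_0) / e               # formula 3
    return x, y
```

For example, with s = 3 and h = 4 (so w = 5 = Wmax), a left-side scene on an assumed 800 by 1000 coordinate grid with E = 10 lands at (560.0, 1000.0): it has just entered the shooting range, so y sits at the top line yn.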
Obviously, the above calculation formulas are based on the premise that the screen scale is set as shown in Fig. 4 and that the travel direction of the shooting subject is always forward. If the screen scale setting or the travel direction changes, the calculation formulas for C(x, y) change accordingly; those skilled in the art should understand that such adaptive changes also fall within the protection scope of the present invention.
For example, the shooting subject may be mounted on an aircraft, such as a manned aircraft or an unmanned aerial vehicle, with the angle between the lens face of the shooting subject and the flight direction of the aircraft between 90° and 180°, so that the target scene always enters the screen from the top.
The present invention is not limited to the above optional embodiments; anyone may derive products of various other forms under its inspiration. The above specific embodiments should not be understood as limiting the protection scope of the present invention, which shall be defined by the claims; the specification may be used to interpret the claims.

Claims (8)

1. A method for matching scene coordinates in real-time video with geographic information, characterized by comprising the following steps:
S1: establishing a scene geographic information storage table, the table containing the geographic information of multiple scenes, the geographic information including geographic coordinates;
S2: reading the real-time geographic coordinates and shooting height of the shooting subject;
S3: filtering out, according to the shooting subject's real-time geographic coordinates and shooting height and each scene's geographic coordinates, the scenes within the shooting range as target scenes;
S4: calculating the screen coordinate C(x, y) of the target scene in the real-time video according to the shooting subject's real-time geographic coordinates, shooting height, and the target scene's geographic coordinates;
S5: retrieving the scene geographic information storage table to obtain the geographic information of the target scene, and matching the geographic information of the target scene with screen coordinate C(x, y).
2. The method for matching scene coordinates in real-time video with geographic information according to claim 1, characterized in that: in step S1, the geographic information of each scene further includes one or more of scene name, scene number, and scene area.
3. The method for matching scene coordinates in real-time video with geographic information according to claim 2, characterized by further comprising step S6: when the geographic information of the scene includes the scene name, using the scene name as a label to annotate screen coordinate C(x, y).
4. The method for matching scene coordinates in real-time video with geographic information according to claim 1, characterized in that: in step S2, the geographic coordinates and shooting height of the shooting subject are obtained through a MOD interface.
5. The method for matching scene coordinates in real-time video with geographic information according to claim 1, characterized in that in step S3, whether a scene is within the shooting range is judged by the following steps:
S31: calculating the horizontal distance s between the shooting subject and each scene:
s = 111.12 · arccos[sin φA · sin φB + cos φA · cos φB · cos(λB − λA)]
where λA is the longitude of the scene as stored in the scene geographic information storage table and φA is the latitude of the scene as stored in the scene geographic information storage table; λB is the real-time longitude of the shooting subject and φB its real-time latitude;
S32: calculating the spatial distance w between the shooting subject and each scene: w = √(s² + h²), where h is the real-time shooting height of the shooting subject;
S33: comparing the spatial distance w with the maximum visual distance Wmax of the shooting subject; if w ≤ Wmax, the scene is within the shooting range.
6. The method for matching scene coordinates in real-time video with geographic information according to claim 1, characterized in that: the travel direction of the shooting subject is always forward, and as the shooting subject advances, the screen coordinate y value of the target scene gradually decreases; in step S4, screen coordinate C(x, y) is calculated as:
y = yn − (Wmax − w) · (yn − y0) / Wmax
x = x0 + (E − s) · (xn − x0) / E (target scene on the left of the travel direction)
x = xn − (E − s) · (xn − x0) / E (target scene on the right of the travel direction)
where s is the horizontal distance between the shooting subject and the target scene, w = √(s² + h²) is their spatial distance, h is the real-time shooting height of the shooting subject, Wmax is the maximum visual distance of the shooting subject, yn is the maximum Y-axis scale value of the screen, y0 the minimum Y-axis scale value, xn the maximum X-axis scale value, x0 the minimum X-axis scale value, and E is the horizontal width of the shooting range.
7. The method for matching scene coordinates in real-time video with geographic information according to claim 1, characterized in that: the shooting subject is mounted on an aircraft, and the angle between the lens face of the shooting subject and the flight direction of the aircraft is between 90° and 180°.
8. The method for matching scene coordinates in real-time video with geographic information according to claim 7, characterized in that: the aircraft is a manned aircraft or an unmanned aerial vehicle.
CN201811044511.8A 2018-09-07 2018-09-07 Method for matching scene coordinates in real-time video with geographic information Pending CN109284404A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811044511.8A CN109284404A (en) 2018-09-07 2018-09-07 Method for matching scene coordinates in real-time video with geographic information

Publications (1)

Publication Number Publication Date
CN109284404A true CN109284404A (en) 2019-01-29

Family

ID=65183828


Country Status (1)

Country Link
CN (1) CN109284404A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886201A * 2019-02-22 2019-06-14 四川宏图智慧科技有限公司 Monitoring image annotation method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020097893A1 (en) * 2001-01-20 2002-07-25 Lee Seong-Deok Apparatus and method for generating object-labeled image in video sequence
CN104284155A (en) * 2014-10-16 2015-01-14 浙江宇视科技有限公司 Video image information labeling method and device
CN106251330A * 2016-07-14 2016-12-21 浙江宇视科技有限公司 Point position annotation method and device
CN107317999A (en) * 2017-05-24 2017-11-03 天津市亚安科技有限公司 Method and system for realizing automatic identification of geographic name on turntable
CN107367262A * 2017-06-17 2017-11-21 周超 Real-time positioning, mapping and display interconnected control method for a long-range unmanned aerial vehicle




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190129