CN109186554B - Method for automatically positioning coordinates of scene in real-time video fixed track inspection - Google Patents

Method for automatically positioning coordinates of scene in real-time video fixed track inspection

Info

Publication number
CN109186554B
CN109186554B CN201811045229.1A
Authority
CN
China
Prior art keywords
coordinates
scene
mse
screen
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811045229.1A
Other languages
Chinese (zh)
Other versions
CN109186554A (en)
Inventor
李晓倩
庾农
韩彦
向子荣
侯睿
常乐
邓博杨
董笑宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Chuanjiang Information Technology Co ltd
Original Assignee
Chengdu Chuanjiang Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Chuanjiang Information Technology Co ltd filed Critical Chengdu Chuanjiang Information Technology Co ltd
Priority to CN201811045229.1A priority Critical patent/CN109186554B/en
Publication of CN109186554A publication Critical patent/CN109186554A/en
Application granted granted Critical
Publication of CN109186554B publication Critical patent/CN109186554B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/36: Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a method for automatically positioning the coordinates of a scene during real-time video fixed-track inspection, comprising the following steps: S1, acquiring MSE parameters; S2, determining the coordinates of the video display area, taking the scene to be acquired as a reference object, manually calibrating the reference object to obtain its coordinates, and recording the travel time; S3, according to the obtained MSE parameters, carrying out steps A and B simultaneously: A, calculating the matching relation between the moving MSE of the inspection equipment and the scene coordinates in the screen video, determining the scene coordinates in the screen video, annotating and storing them, and continuously updating them to realize scene-tracking display; B, calculating the matching relation between the moving MSE of the inspection equipment and the coordinates of the screen video display area, calling the running time of the inspection equipment, and determining the real-time screen video display area; S4, annotating and displaying the scene in the screen video. The invention occupies few hardware resources, facilitates the processing of massive volumes of video, is unaffected by image quality and illumination conditions, and has high accuracy.

Description

Method for automatically positioning coordinates of scene in real-time video fixed track inspection
Technical Field
The invention belongs to the technical field of information technology and automation, and particularly relates to a method for automatically positioning coordinates of a scene in real-time video fixed track inspection.
Background
Video identification technology mainly comprises three links: front-end video information acquisition and transmission, intermediate video detection, and back-end analysis and processing. Video identification requires the front-end video acquisition camera to provide a clear and stable video signal, and the quality of the video signal directly influences the identification result. After the video is identified, the position of the scene on the display screen is determined and then annotated.
However, the existing video identification technology has the following defects:
1. The monitoring environment, such as lighting and video quality, strongly influences video positioning accuracy.
2. Machine identification requires repeated learning for different scenes and the building of new models, so a high identification rate is difficult to guarantee within a short time.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention aims to provide a method for automatically positioning coordinates of a scene in real-time video fixed track inspection.
The technical scheme adopted by the invention is as follows:
A method for automatically positioning the coordinates of a scene in real-time video fixed track inspection comprises the following steps:
s1, acquiring MSE parameters;
s2, determining the coordinates of the video display area, taking the scene to be acquired as a reference object, manually calibrating the reference object to obtain the coordinates of the reference object, and recording the travel time;
s3, according to the obtained MSE parameters, simultaneously carrying out the steps A and B;
A. calculating the matching relation between the mobile MSE of the inspection equipment and scene coordinates in the screen video, determining scene coordinates in the screen video, marking and storing the scene coordinates, continuously updating the scene coordinates, and realizing scene tracking display;
B. calculating the coordinate matching relation between the mobile MSE of the inspection equipment and the screen video display area, calling the operation time of the inspection equipment, and determining the real-time screen video display area;
and S4, marking and displaying the scene in the screen video.
One mode of obtaining the MSE parameters is to read them from the fixed-track inspection equipment; the MSE parameters comprise start time, stop time, travel time, turning angle, travel direction, speed, acceleration and gradient.
Alternatively, the MSE parameters are acquired as follows: each time the inspection equipment is restarted, an initial-state MSE is set and stored for it and the MSE parameters are acquired; the MSE parameters comprise start time, stop time, travel time, turning angle, travel direction, speed, acceleration and gradient.
When the inspection equipment moves, the MSE is updated synchronously and recorded as the current MSE value.
Specifically, step S2 is implemented as follows:
S21, establishing coordinates on the display screen and manually taking values to obtain the four-corner positioning of the screen range, the four-corner coordinates being Z0(Xn, Ym), Z1(Xm, Ym), Z2(Xn, Yn) and Z3(Xm, Yn);
S22, taking the scene to be acquired as a reference object and manually calibrating it to obtain the reference-object coordinates A(x4, yi).
Further, step A is implemented as follows:
S31, extracting the MSE parameters: start time T0, stop time Tn, travel time T, travel direction U1, turning angle α, speed v, acceleration a, and gradient p;
S32, carrying out the following operations on the inspection equipment:
a. advancing or reversing in direction U1 at speed v1 with turning angle 0 gives the MSE parameters MSE(u1, v1, 0);
the relation between the MSE(u1, v1, 0) action of the inspection equipment and the screen coordinates is calculated as follows:
matching the screen horizontal coordinate: f(PX) = (xi - x4)/v1, taking v1 as the unit of PX;
matching the screen vertical coordinate: f(PY) = (yi - ym)/v1, taking v1 as the unit of PY;
b. turning by angle α gives the MSE parameter MSE(α);
the relation between the horizontal rotation angle α of the inspection equipment and the screen coordinate is calculated as: f(X) = (xi - x4)/α, taking α as the unit of X;
c. the gradient p at this time is obtained as β, giving the MSE parameter MSE(β);
the relation between the gradient β of the inspection equipment and the screen coordinate is calculated as: f(Y) = (yi - ym)/β, taking β as the unit of Y.
Further, step B is implemented as follows:
Using the relation between the MSE(u1, v1, 0) action of the inspection equipment and the four-corner coordinates of the screen, the relation between the horizontal rotation angle α of the inspection equipment and the four-corner coordinates, and the relation between the gradient β of the inspection equipment and the four-corner coordinates, the following operations are carried out on the inspection equipment:
a1, after a forward or backward movement of k3, the four-corner coordinates of the screen change to:
Z0(Xn+f(PX)*k3, Ym+f(PY)*k3);
Z1(Xm+f(PX)*k3, Ym+f(PY)*k3);
Z2(Xn+f(PX)*k3, Yn+f(PY)*k3);
Z3(Xm+f(PX)*k3, Yn+f(PY)*k3);
b1, after a turn of k1, the four-corner coordinates of the screen change to:
Z0(Xn+f(X)*k1, Ym);
Z1(Xm+f(X)*k1, Ym);
Z2(Xn+f(X)*k1, Yn);
Z3(Xm+f(X)*k1, Yn);
c1, after a gradient change of k2, the four-corner coordinates of the screen change to:
Z0(Xn, Ym+f(Y)*k2);
Z1(Xm, Ym+f(Y)*k2);
Z2(Xn, Yn+f(Y)*k2);
Z3(Xm, Yn+f(Y)*k2).
Further, step S4 is implemented as follows:
S41, starting at T0 and moving along the predetermined track;
S42, after the inspection equipment moves, calling the calculated matching relation between the moving MSE of the inspection equipment and the coordinates of the screen video display area, and determining whether the coordinates of the target scene are within the range bounded by Z0, Z1, Z2 and Z3; if yes, executing step S43; if not, executing step S44;
S43, when the inspection equipment moves to scene point n, judging whether it has reached a time point in T(1, 2, 3 … n); if so, looking up the time T(1, 2, 3 … n), calling the calculated matching relation between the moving MSE of the inspection equipment and the scene coordinates in the screen video, determining the coordinates of the target scene, and positioning and displaying the annotation; if not, displaying neither the scene coordinates nor the annotation on the screen;
S44, displaying no annotation on the screen.
The invention has the beneficial effects that:
(1) The invention realizes structured video of scene positions in the real-time video of fixed-track inspection equipment.
(2) The method facilitates quickly finding scene positions within massive volumes of video recorded during fixed-track inspection.
(3) During fixed-track inspection, the coordinates of scenes are positioned and annotated, enriching the video content and helping monitoring personnel become familiar with the monitored environment.
(4) During fixed-track inspection, coordinates are positioned automatically, realizing visualization of the position information of the monitored area.
(5) During fixed-track inspection, only coordinate data are processed; compared with image-processing-based positioning, the method occupies fewer hardware resources and facilitates the processing of massive volumes of video.
(6) The invention processes only coordinate data, is unaffected by image quality and illumination conditions, and has high accuracy.
Drawings
FIG. 1 is a diagram of four corner coordinates of a screen area according to an embodiment of the present invention.
Fig. 2 is a diagram of the relation between the MSE (u1, v1,0) action of the inspection equipment and the screen coordinates according to the embodiment of the invention.
Fig. 3 is a diagram of the relation between the horizontal rotation angle alpha of the inspection equipment and the screen coordinate according to the embodiment of the invention.
Fig. 4 is a diagram of the relationship between the slope beta of the inspection equipment and the coordinates of the screen according to the embodiment of the invention.
FIG. 5 is a schematic diagram of a predetermined trajectory according to an embodiment of the present invention.
FIG. 6 is a schematic diagram of scene coordinate annotation and display in a screen video according to an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments.
Embodiment:
the method for automatically positioning the coordinates of the scene in the real-time video fixed track inspection comprises the following steps:
the first step is to obtain MSE parameters.
MSE (Motion State of Equipment) denotes the motion state of the equipment, including start time, stop time, travel time, turning angle, travel direction, speed, acceleration and gradient.
The MSE parameters acquired are: start time T0, stop time Tn, travel time T, travel direction (forward: U, backward: D), turning angle α (0–360°), speed v, acceleration a, and gradient p.
One way to obtain the MSE parameters is to read them from the fixed-track inspection equipment.
Another way is: each time the inspection equipment is restarted, an initial-state MSE is set and stored for it and the MSE parameters are acquired; while the inspection equipment moves, the MSE is updated synchronously and recorded as the current MSE value.
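The back-end MSE bookkeeping described above (an initial-state MSE stored at each restart, a current MSE updated synchronously during movement) can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the class and field names are assumptions.

```python
from dataclasses import dataclass, replace

@dataclass
class MSE:
    """Motion State of Equipment: the parameters listed above."""
    start_time: float = 0.0    # T0
    stop_time: float = 0.0     # Tn
    travel_time: float = 0.0   # T
    direction: str = "U"       # travel direction: "U" forward, "D" backward
    turn_angle: float = 0.0    # alpha, in degrees (0-360)
    speed: float = 0.0         # v
    acceleration: float = 0.0  # a
    gradient: float = 0.0      # p

class InspectionDevice:
    """Back-end fallback when the device cannot report its own MSE:
    store an initial-state MSE at each restart, then update a current
    MSE synchronously while the device moves."""
    def __init__(self, start_time: float = 0.0):
        self.initial_mse = MSE(start_time=start_time)
        self.current_mse = replace(self.initial_mse)  # independent copy

    def on_move(self, **changes) -> MSE:
        """Synchronously record the new motion state as the current MSE."""
        self.current_mse = replace(self.current_mse, **changes)
        return self.current_mse

dev = InspectionDevice(start_time=0.0)
dev.on_move(direction="U", speed=1.5, travel_time=10.0)
dev.on_move(turn_angle=30.0)
print(dev.current_mse.speed, dev.current_mse.turn_angle)  # 1.5 30.0
```

Keeping the initial-state MSE separate from the current one mirrors the text's distinction between the state stored at restart and the state updated during motion.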
And secondly, determining the coordinates of the video display area, taking the scene to be acquired as a reference object, manually calibrating the reference object to obtain the coordinates of the reference object, and recording the travel time. As shown in fig. 1.
(1) Dividing the display screen into coordinates, manually taking values, and obtaining four-corner positioning of a screen range, wherein the four-corner coordinates are Z0(Xn, Ym), Z1(Xm, Ym), Z2(Xn, Yn) and Z3(Xm, Yn);
(2) taking a scene to be acquired as a reference object and manually calibrating the scene to obtain a reference object coordinate A (x4, yi);
and thirdly, calculating the matching relation between the mobile MSE of the inspection equipment and scene coordinates in the screen video according to the obtained MSE parameters, determining scene coordinates in the screen video, marking and storing the scene coordinates, and continuously updating the scene coordinates to realize scene tracking display.
(1) Extracting the MSE parameters: start time T0, stop time Tn, travel time T, travel direction U1, turning angle α (0–360°), speed v, acceleration a, and gradient p.
(2) The inspection equipment is operated as follows:
a. advancing or reversing in direction U1 at speed v1 with turning angle 0 gives the MSE parameters MSE(u1, v1, 0);
As shown in fig. 2, the relation between the MSE(u1, v1, 0) action of the inspection equipment and the screen coordinates is calculated as follows:
matching the screen horizontal coordinate: f(PX) = (xi - x4)/v1, taking v1 as the unit of PX;
matching the screen vertical coordinate: f(PY) = (yi - ym)/v1, taking v1 as the unit of PY;
If U1 is forward, f(PX) is positive and f(PY) is negative; if U1 is backward, f(PX) is negative and f(PY) is positive.
b. Turning by angle α gives the MSE parameter MSE(α);
As shown in fig. 3, the relation between the horizontal rotation angle α of the inspection equipment and the screen coordinate is calculated as: f(X) = (xi - x4)/α, taking α as the unit of X;
c. the gradient p at this time is obtained as β, giving the MSE parameter MSE(β);
As shown in fig. 4, the relation between the gradient β of the inspection equipment and the screen coordinate is calculated as: f(Y) = (yi - ym)/β, taking β as the unit of Y.
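Under a unit-factor reading of the matching relations above (each unit of v1, α, or β maps the reference point A(x4, yi) toward the target coordinates by f(PX)/f(PY), f(X), or f(Y) respectively), the relations can be sketched in Python. The function names, the example numbers, and the exact sign convention for travel direction are assumptions for illustration.

```python
def f_px(xi: float, x4: float, v1: float) -> float:
    """Horizontal screen shift per unit of speed v1 (advance/reverse)."""
    return (xi - x4) / v1

def f_py(yi: float, ym: float, v1: float) -> float:
    """Vertical screen shift per unit of speed v1 (advance/reverse)."""
    return (yi - ym) / v1

def f_x(xi: float, x4: float, alpha: float) -> float:
    """Horizontal screen shift per unit of turning angle alpha."""
    return (xi - x4) / alpha

def f_y(yi: float, ym: float, beta: float) -> float:
    """Vertical screen shift per unit of gradient beta."""
    return (yi - ym) / beta

def apply_direction(fpx: float, fpy: float, direction: str) -> tuple:
    """Sign convention stated in the text: forward 'U' makes f(PX)
    positive and f(PY) negative; backward 'D' reverses both signs."""
    if direction == "U":
        return abs(fpx), -abs(fpy)
    return -abs(fpx), abs(fpy)

# Reference object A(x4, yi) = (2, 8); illustrative targets and units.
print(f_px(10, 2, 4))                  # 2.0
print(apply_direction(2.0, 2.0, "U"))  # (2.0, -2.0)
```

With these per-unit factors, multiplying by the number of movement units (k3, k1, k2 below) yields the total screen-coordinate offset.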
And fourthly, calculating the coordinate matching relation between the mobile MSE of the inspection equipment and the screen video display area according to the obtained MSE parameters, calling the running time Tn of the inspection equipment, and determining the real-time screen video display area.
The inspection equipment is operated according to the following relation:
The relation between the MSE(u1, v1, 0) action of the inspection equipment and the screen coordinates:
matching the screen horizontal coordinate: f(PX) = (xi - x4)/v1, taking v1 as the unit of PX;
matching the screen vertical coordinate: f(PY) = (yi - ym)/v1, taking v1 as the unit of PY;
The relation between the horizontal rotation angle α of the inspection equipment and the screen coordinate: f(X) = (xi - x4)/α, taking α as the unit of X;
The relation between the gradient β of the inspection equipment and the screen coordinate: f(Y) = (yi - ym)/β, taking β as the unit of Y;
Size of the screen: when x0 = 0 the horizontal width of the screen is xn, and when x0 ≠ 0 it is xn - x0; when y0 = 0 the vertical height of the screen is yn, and when y0 ≠ 0 it is yn - y0;
a1, after a forward or backward movement of k3, the four-corner coordinates of the screen change to:
Z0(Xn+f(PX)*k3, Ym+f(PY)*k3);
Z1(Xm+f(PX)*k3, Ym+f(PY)*k3);
Z2(Xn+f(PX)*k3, Yn+f(PY)*k3);
Z3(Xm+f(PX)*k3, Yn+f(PY)*k3);
b1, after a turn of k1, the four-corner coordinates of the screen change to:
Z0(Xn+f(X)*k1, Ym);
Z1(Xm+f(X)*k1, Ym);
Z2(Xn+f(X)*k1, Yn);
Z3(Xm+f(X)*k1, Yn);
c1, after a gradient change of k2, the four-corner coordinates of the screen change to:
Z0(Xn, Ym+f(Y)*k2);
Z1(Xm, Ym+f(Y)*k2);
Z2(Xn, Yn+f(Y)*k2);
Z3(Xm, Yn+f(Y)*k2);
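Each of the three operations a1, b1, c1 applies the same offset to all four corners: a forward/backward movement of k3 shifts every corner by (f(PX)·k3, f(PY)·k3), a turn of k1 shifts the x coordinates only, and a gradient change of k2 shifts the y coordinates only. A minimal sketch, with illustrative numbers standing in for the per-unit shifts:

```python
def shift_corners(corners: dict, dx: float = 0.0, dy: float = 0.0) -> dict:
    """Apply the same (dx, dy) offset to all four screen-corner coordinates."""
    return {name: (x + dx, y + dy) for name, (x, y) in corners.items()}

# Initial four-corner coordinates Z0(Xn, Ym), Z1(Xm, Ym), Z2(Xn, Yn), Z3(Xm, Yn)
corners = {"Z0": (0.0, 0.0), "Z1": (16.0, 0.0), "Z2": (0.0, 9.0), "Z3": (16.0, 9.0)}
f_PX, f_PY, f_X, f_Y = 0.5, -0.25, 0.1, 0.2  # illustrative per-unit shifts

corners = shift_corners(corners, dx=f_PX * 2, dy=f_PY * 2)  # a1: k3 = 2
corners = shift_corners(corners, dx=f_X * 10)               # b1: k1 = 10
corners = shift_corners(corners, dy=f_Y * 5)                # c1: k2 = 5
print(corners["Z0"])  # (2.0, 0.5)
```

Because all four corners move together, the display area stays a rigid rectangle in screen coordinates, which is what makes the in-range test in the fifth step a simple bounds check.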
and fifthly, marking and displaying scenes in the screen video.
Starting at T0, the equipment moves along the set track, as shown in fig. 5. After the inspection equipment moves, the image on the screen is transformed and the coordinate position of the scene on the screen changes. The calculated matching relation between the moving MSE of the inspection equipment and the coordinates of the screen video display area is called to determine whether the coordinates of the target scene B(X, Y) lie within the range bounded by Z0, Z1, Z2 and Z3. If they do, and the equipment has at the same time moved to scene point n, the scene point n is looked up by time T(1, 2, 3 … n), the calculated matching relation between the moving MSE of the inspection equipment and the scene coordinates in the screen video is called, and the coordinates of the target scene are determined, positioned, and displayed with an annotation, as shown in fig. 6. If they do not lie within the range bounded by Z0, Z1, Z2 and Z3, the scene to be positioned is outside the monitoring range of the camera and no annotation is displayed on the screen. When the inspection equipment moves to a time point not in T(1, 2, 3 … n), neither the scene coordinates nor the annotation is displayed on the screen.
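The two checks in the fifth step, whether the target scene B(X, Y) lies within the display area bounded by the current Z0..Z3 and whether the current time matches a scene time point, can be sketched as follows. Axis-aligned bounds are assumed for simplicity, and the function names and tolerance parameter are illustrative, not from the patent.

```python
def in_display_area(point: tuple, corners: dict) -> bool:
    """True if point lies in the axis-aligned box spanned by the four corners."""
    xs = [x for x, _ in corners.values()]
    ys = [y for _, y in corners.values()]
    x, y = point
    return min(xs) <= x <= max(xs) and min(ys) <= y <= max(ys)

def annotation(point: tuple, corners: dict, t: float,
               scene_times: list, tolerance: float = 0.5):
    """Return the point to annotate, or None if it should not be shown."""
    if not in_display_area(point, corners):
        return None  # scene is outside the camera's monitoring range
    if not any(abs(t - tn) <= tolerance for tn in scene_times):
        return None  # not at a scene time point T(1, 2, 3 ... n)
    return point

corners = {"Z0": (0, 0), "Z1": (16, 0), "Z2": (0, 9), "Z3": (16, 9)}
print(annotation((8, 4), corners, t=12.0, scene_times=[6.0, 12.0]))   # (8, 4)
print(annotation((20, 4), corners, t=12.0, scene_times=[6.0, 12.0]))  # None
```

Both branches that suppress the annotation (out of range, off-schedule) return None, matching the text's rule that neither the scene coordinates nor the annotation is displayed in those cases.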
The invention establishes the relation between the inspection time axis of the fixed-track inspection equipment and the state of the scene coordinates in the screen video: coordinate positioning starts when a set inspection time point is reached, and the coordinates are updated over time.
The invention establishes the correspondence between the MSE of the fixed-track inspection equipment and the coordinates of the screen video display area. The movement of the equipment changes the display area, in particular through the relation between the gradient and the display-area coordinates; the coordinates of the display area are obtained by formula, which makes it possible to judge automatically whether the scene to be positioned lies within the display area.
The invention establishes the correspondence between the MSE of the fixed-track inspection equipment and the scene coordinates in the screen video. The movement of the equipment changes the scene in the screen video, in particular through the correspondence between the gradient and the scene coordinates; the new scene coordinates in the screen video are obtained by formula, which facilitates automatically repositioning the scene.
For the case where the MSE value of the fixed-track inspection equipment cannot be obtained, an MSE value corresponding to the motion state of the equipment is set at the back end: an initial MSE is set, the motion-state history of the equipment is recorded, and the MSE is updated continuously, thereby obtaining the MSE unit variables used in the correspondence formula for the display-area coordinates and in the correspondence formula matching the MSE with the scene coordinates in the screen video.
The invention is not limited to the above alternative embodiments, and any other various forms of products can be obtained by anyone in the light of the present invention, but any changes in shape or structure thereof, which fall within the scope of the present invention as defined in the claims, fall within the scope of the present invention.

Claims (8)

1. A method for automatically positioning the coordinates of a scene in real-time video fixed track inspection, characterized by comprising the following steps:
s1, acquiring MSE parameters;
s2, determining the coordinates of the video display area, taking the scene to be acquired as a reference object, manually calibrating the reference object to obtain the coordinates of the reference object, and recording the travel time;
s3, according to the obtained MSE parameters, simultaneously carrying out the steps A and B;
A. calculating the matching relation between the mobile MSE of the inspection equipment and scene coordinates in the screen video, determining scene coordinates in the screen video, marking and storing the scene coordinates, continuously updating the scene coordinates, and realizing scene tracking display;
B. calculating the coordinate matching relation between the mobile MSE of the inspection equipment and the screen video display area, calling the operation time of the inspection equipment, and determining the real-time screen video display area;
and S4, marking and displaying the scene in the screen video.
2. The method for automatically positioning the coordinates of a scene in real-time video fixed track inspection according to claim 1, characterized in that: one mode of obtaining the MSE parameters is to read them from the fixed-track inspection equipment; the MSE parameters comprise start time, stop time, travel time, turning angle, travel direction, speed, acceleration and gradient.
3. The method for automatically positioning the coordinates of a scene in real-time video fixed track inspection according to claim 1, characterized in that: the MSE parameters are acquired as follows: each time the inspection equipment is restarted, an initial-state MSE is set and stored for it, and the MSE parameters are acquired; the MSE parameters comprise start time, stop time, travel time, turning angle, travel direction, speed, acceleration and gradient.
4. The method for automatically positioning the coordinates of a scene in real-time video fixed track inspection according to claim 3, characterized in that: when the inspection equipment moves, the MSE is updated synchronously and recorded as the current MSE value.
5. The method for automatically positioning the coordinates of a scene in real-time video fixed track inspection according to claim 1, characterized in that step S2 is implemented as follows:
S21, establishing coordinates on the display screen and manually taking values to obtain the four-corner positioning of the screen range, the four-corner coordinates being Z0(Xn, Ym), Z1(Xm, Ym), Z2(Xn, Yn) and Z3(Xm, Yn);
S22, taking the scene to be acquired as a reference object and manually calibrating it to obtain the reference-object coordinates A(x4, yi).
6. The method for automatically positioning the coordinates of a scene in real-time video fixed track inspection according to claim 5, characterized in that step A is implemented as follows:
S31, extracting the MSE parameters: start time T0, stop time Tn, travel time T, travel direction U1, turning angle α, speed v, acceleration a, and gradient p;
S32, carrying out the following operations on the inspection equipment:
a. advancing or reversing in direction U1 at speed v1 with turning angle 0 gives the MSE parameters MSE(u1, v1, 0);
the relation between the MSE(u1, v1, 0) action of the inspection equipment and the screen coordinates is calculated as follows:
matching the screen horizontal coordinate: f(PX) = (xi - x4)/v1, taking v1 as the unit of PX;
matching the screen vertical coordinate: f(PY) = (yi - ym)/v1, taking v1 as the unit of PY;
b. turning by angle α gives the MSE parameter MSE(α);
the relation between the horizontal rotation angle α of the inspection equipment and the screen coordinate is calculated as: f(X) = (xi - x4)/α, taking α as the unit of X;
c. the gradient p at this time is obtained as β, giving the MSE parameter MSE(β);
the relation between the gradient β of the inspection equipment and the screen coordinate is calculated as: f(Y) = (yi - ym)/β, taking β as the unit of Y.
7. The method for automatically positioning the coordinates of a scene in real-time video fixed track inspection according to claim 6, characterized in that step B is implemented as follows:
Using the relation between the MSE(u1, v1, 0) action of the inspection equipment and the four-corner coordinates of the screen, the relation between the horizontal rotation angle α of the inspection equipment and the four-corner coordinates, and the relation between the gradient β of the inspection equipment and the four-corner coordinates, the following operations are carried out on the inspection equipment:
a1, after a forward or backward movement of k3, the four-corner coordinates of the screen change to:
Z0(Xn+f(PX)*k3, Ym+f(PY)*k3);
Z1(Xm+f(PX)*k3, Ym+f(PY)*k3);
Z2(Xn+f(PX)*k3, Yn+f(PY)*k3);
Z3(Xm+f(PX)*k3, Yn+f(PY)*k3);
b1, after a turn of k1, the four-corner coordinates of the screen change to:
Z0(Xn+f(X)*k1, Ym);
Z1(Xm+f(X)*k1, Ym);
Z2(Xn+f(X)*k1, Yn);
Z3(Xm+f(X)*k1, Yn);
c1, after a gradient change of k2, the four-corner coordinates of the screen change to:
Z0(Xn, Ym+f(Y)*k2);
Z1(Xm, Ym+f(Y)*k2);
Z2(Xn, Yn+f(Y)*k2);
Z3(Xm, Yn+f(Y)*k2).
8. The method for automatically positioning the coordinates of a scene in real-time video fixed track inspection according to claim 7, characterized in that step S4 is implemented as follows:
S41, starting at T0 and moving along the predetermined track;
S42, after the inspection equipment moves, calling the calculated matching relation between the moving MSE of the inspection equipment and the coordinates of the screen video display area, and determining whether the coordinates of the target scene are within the range bounded by Z0, Z1, Z2 and Z3; if yes, executing step S43; if not, executing step S44;
S43, when the inspection equipment moves to scene point n, judging whether it has reached a time point in T(1, 2, 3 … n); if so, looking up the time T(1, 2, 3 … n), calling the calculated matching relation between the moving MSE of the inspection equipment and the scene coordinates in the screen video, determining the coordinates of the target scene, and positioning and displaying the annotation; if not, displaying neither the scene coordinates nor the annotation on the screen;
S44, displaying no annotation on the screen.
CN201811045229.1A 2018-09-07 2018-09-07 Method for automatically positioning coordinates of scene in real-time video fixed track inspection Active CN109186554B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811045229.1A CN109186554B (en) 2018-09-07 2018-09-07 Method for automatically positioning coordinates of scene in real-time video fixed track inspection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811045229.1A CN109186554B (en) 2018-09-07 2018-09-07 Method for automatically positioning coordinates of scene in real-time video fixed track inspection

Publications (2)

Publication Number Publication Date
CN109186554A CN109186554A (en) 2019-01-11
CN109186554B true CN109186554B (en) 2021-05-07

Family

ID=64915554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811045229.1A Active CN109186554B (en) 2018-09-07 2018-09-07 Method for automatically positioning coordinates of scene in real-time video fixed track inspection

Country Status (1)

Country Link
CN (1) CN109186554B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113190040B (en) * 2021-04-29 2021-10-08 集展通航(北京)科技有限公司 Method and system for line inspection based on unmanned aerial vehicle video and railway BIM

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103528571A (en) * 2013-10-12 2014-01-22 上海新跃仪表厂 Monocular stereo vision relative position/pose measuring method
CN103942273A (en) * 2014-03-27 2014-07-23 北京空间机电研究所 Dynamic monitoring system and method for aerial quick response
DE102015101190A1 (en) * 2015-01-28 2016-07-28 Connaught Electronics Ltd. Method for determining an image depth value depending on an image area, camera system and motor vehicle
CN206712972U (en) * 2017-05-25 2017-12-05 成都川江信息技术有限公司 A kind of scenic spot security prevention and control system that AR realtime graphics are provided
CN108415453A (en) * 2018-01-24 2018-08-17 上海大学 Unmanned plane tunnel method for inspecting based on BIM technology

Also Published As

Publication number Publication date
CN109186554A (en) 2019-01-11

Similar Documents

Publication Publication Date Title
CN107992881B (en) Robot dynamic grabbing method and system
CN105931263B (en) A kind of method for tracking target and electronic equipment
CN104282020B (en) A kind of vehicle speed detection method based on target trajectory
CN110580723B (en) Method for carrying out accurate positioning by utilizing deep learning and computer vision
CN104217441A (en) Mechanical arm positioning fetching method based on machine vision
CN103581614A (en) Method and system for tracking targets in video based on PTZ
CN113724193B (en) PCBA part size and clearance high-precision visual measurement method
CN108981670B (en) Method for automatically positioning coordinates of scene in real-time video
CN104168444B (en) A kind of method for tracking target for tracking ball machine and tracking ball machine
CN108364306B (en) Visual real-time detection method for high-speed periodic motion
CN104299246A (en) Production line object part motion detection and tracking method based on videos
CN104760812A (en) Monocular vision based real-time location system and method for products on conveying belt
CN109186554B (en) Method for automatically positioning coordinates of scene in real-time video fixed track inspection
CN112288741A (en) Product surface defect detection method and system based on semantic segmentation
CN110865366B (en) Intelligent driving radar and image fusion man-machine interaction method
CN114972421A (en) Workshop material identification tracking and positioning method and system
US8274597B2 (en) System and method for measuring a border of an image of an object
CN117325170A (en) Method for grabbing hard disk rack based on depth vision guiding mechanical arm
CN114581760A (en) Equipment fault detection method and system for machine room inspection
CN113643206A (en) Cow breathing condition detection method
CN104835156B (en) A kind of non-woven bag automatic positioning method based on computer vision
CN109318235B (en) Quick focusing method of robot vision servo system
CN110415275B (en) Point-to-point-based moving target detection and tracking method
CN111178244A (en) Method for identifying abnormal production scene
CN110675393A (en) Blank specification detection method based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant