CN114998105A - Monitoring method and system based on multi-camera pantograph video image splicing - Google Patents

Monitoring method and system based on multi-camera pantograph video image splicing

Info

Publication number
CN114998105A
CN114998105A (application number CN202210620121.0A)
Authority
CN
China
Prior art keywords
images
image
camera
matching
pantograph
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210620121.0A
Other languages
Chinese (zh)
Inventor
杨杰
黄健煜
范志峰
王春来
李科
向朝富
杜俊宏
曾俊清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Gongwang Technology Co ltd
Original Assignee
Chengdu Gongwang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Gongwang Technology Co ltd filed Critical Chengdu Gongwang Technology Co ltd
Priority to CN202210620121.0A priority Critical patent/CN114998105A/en
Publication of CN114998105A publication Critical patent/CN114998105A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0007 Image acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a monitoring method and system based on multi-camera pantograph video image splicing. The method comprises the following steps: synchronously acquiring images with a hardware synchronous trigger device, and synchronizing the images on a processing host; extracting feature points from each acquired image to obtain the feature point sets to be matched for all images; matching the feature points and transforming two images into the same coordinate system; obtaining the matching coordinates, the matching area and the coordinate-transformed images, and copying and splicing the two images within the projective transformation area; fusing the boundary seams where the two images overlap, fusing the colors of the overlap area, and cropping the invalid areas left after coordinate transformation; and repeating the image matching, image copying and image fusion operations until the multiple images are completely spliced. The invention captures high-definition images of the pantograph in real time, which facilitates monitoring the pantograph for abnormal working states and checking its integrity.

Description

Monitoring method and system based on multi-camera pantograph video image splicing
Technical Field
The invention relates to the field of image processing, in particular to a monitoring method and a monitoring system based on multi-camera pantograph video image splicing.
Background
With the rapid development of urban rail transit in China, the catenary supplies power to trains on electrified urban rail transit lines, and the electric locomotive obtains electric energy through sliding contact between the pantograph slide plate and the catenary contact wire. When the moving pantograph passes along the relatively stationary catenary, the catenary is disturbed by external forces and a dynamic interaction arises between the pantograph and catenary systems. A good pantograph-catenary relationship is a basic prerequisite for ensuring the operational safety of urban rail transit.
In traditional pantograph-catenary monitoring, a video monitoring module is installed on an engineering truck to simulate and monitor the pantograph-catenary matching relationship. The current mainstream approach installs a high-definition camera and a supplementary lighting device on an electric bus to monitor the pantograph-catenary contact in real time. Compared with a traditional engineering-truck monitoring system, the electric-bus system must fit into a small installation space and meet high safety requirements. When traditional mainstream monitoring equipment is transplanted onto an electric bus, the limited installation conditions force a reduced resolution in order to cover the required imaging range, and the resulting pantograph-catenary images are incomplete and seriously distorted, which hampers inspectors in judging whether the working state of the pantograph is abnormal.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a monitoring method and a monitoring system based on multi-camera pantograph video image splicing.
The purpose of the invention is realized by the following technical scheme:
a monitoring method based on multi-camera pantograph video image splicing comprises the following specific steps:
synchronously acquiring images by adopting hardware synchronous trigger equipment, and synchronously processing the images on a processing host;
extracting characteristic points of each acquired image to obtain a characteristic point set to be matched of all the images;
matching the characteristic points, and converting the two images into the same coordinate;
acquiring images after the matching coordinates, the matching area and the coordinate conversion, and copying and splicing the two images in the projection transformation area;
performing fusion processing on the boundary cracks of the two images, performing fusion processing on the colors of the overlapped areas, and cutting the areas with invalid coordinates after conversion;
and repeating the image matching, image copying and image fusion processing operations, and completely splicing multiple images.
The image synchronization comprises the following steps:
Step one: judge whether each camera's buffer queue contains new data;
Step two: if so, take the data at the head of each buffer queue;
Step three: judge whether the exposure start times Ts of the data taken from the camera buffer queues are consistent, i.e. whether they agree within a 1 ms error; if so, perform image splicing;
Step four: if the exposure start times Ts are inconsistent, keep the camera data whose Ts is closest to the current time and discard the heads whose Ts is earlier than that latest exposure start time;
Step five: then take a new head item from each queue whose data was discarded, and return to step three;
Step six: if a queue contains no data, wait 40 ms and return to step one.
Feature points are extracted from each image with the SURF algorithm, using the SURF operator in OpenCV, to obtain the feature point sets to be matched for all images.
The feature points are matched with the FLANN matching method.
The two images are transformed into the same coordinate system using the findHomography function in the OpenCV library.
A monitoring system based on multi-camera pantograph video image splicing comprises a roof acquisition unit and an in-vehicle processing unit. The roof acquisition unit comprises several groups of high-speed high-definition digital cameras, several groups of supplementary lighting devices and a synchronous trigger device; the synchronous trigger device is electrically connected to the camera groups and the lighting groups respectively. The in-vehicle processing unit comprises an electric control unit and a data processing host; one end of the electric control unit is connected to the data processing host and the other end to the roof acquisition unit.
The invention has the following beneficial effects:
1. the system requires little installation space and is suitable for pantograph-catenary video monitoring in a variety of electric passenger vehicle environments;
2. the pantograph imaging coverage is large, with no blind spots;
3. the pantograph image spliced from multiple cameras has a high pixel count and high resolution;
4. the pantograph image has little distortion and high fidelity.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings used in describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a block flow diagram of the present invention;
FIG. 2 is a schematic diagram of the system components of the present invention;
FIG. 3 is a schematic diagram of the apparatus layout of the present invention;
In the drawings: 1 - high-definition imaging camera A; 2 - high-definition imaging camera B; 3 - high-definition imaging camera C; 4 - supplementary lighting device A; 5 - supplementary lighting device B; 6 - monitoring target.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. The described embodiments are only a part of the embodiments of the invention, not all of them; all other embodiments derived by those skilled in the art without creative effort fall within the protection scope of the invention.
A monitoring method based on multi-camera pantograph video image splicing comprises multi-camera image synchronous acquisition and multi-image splicing synthesis.
For synchronous multi-camera image acquisition, a hardware synchronous trigger device makes all industrial cameras expose at the same moment. However, the time needed to transmit the simultaneously triggered camera images to the processing host also depends on each camera's exposure time, image data volume (cameras with the same resolution should be chosen at design time to minimize differences), transmission channel performance, and so on. To guarantee that the images being spliced were captured at the same instant, image synchronization must therefore be performed on the processing host. The main processing flow is as follows:
1. Number all cameras, e.g. Camera1, Camera2, Camera3, ...;
2. Register an exposure-start callback for every camera that returns a timestamp Ts each time the camera starts exposing;
3. Register an image callback for every camera. When exposure ends and image acquisition completes, the callback returns the image data and related parameters such as image length, width, data size Size, exposure time Tb, gain, bit depth and frame number, and the current callback time Te is recorded;
4. From the data size Size returned by the image callback and the transmission bandwidth W between the industrial camera and the industrial PC (generally 1 Gbps), the time Tc required for data transmission can be calculated as Tc = Size / W;
5. From the camera's exposure time Tb and the image transmission time Tc, the total time from the start of exposure until an image reaches the industrial PC can be calculated as Ta = Tb + Tc (the in-camera packetization time, handled by the camera's internal hardware, is very small and stable, and is ignored);
6. Now judge whether the difference Td between the image callback time Te and the latest exposure start time Ts is equal or close to the total acquisition time Ta. If (Td - Ta) <= 5 ms, the image data Pic returned by the callback and the latest exposure start time are judged to form one complete acquisition, and the pair (Ts, Pic) is bound and appended to the tail of that camera's image queue. If (Td - Ta) > 5 ms, the image corresponding to the latest exposure start time Ts is judged lost (frame loss), and Ts and Pic are discarded without being queued.
This concludes the image caching mechanism for a single industrial camera.
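The per-camera caching decision (steps 4 to 6 above) can be sketched in Python as follows. All names, the queue structure and the millisecond units are illustrative assumptions, not the patent's actual code:

```python
from collections import deque

LOSS_THRESHOLD_MS = 5.0  # (Td - Ta) <= 5 ms means the frame is complete

def transfer_time_ms(size_bytes, bandwidth_bps=1_000_000_000):
    """Tc = Size / W, converted to milliseconds (W defaults to 1 Gbps)."""
    return size_bytes * 8 / bandwidth_bps * 1000.0

def on_image_callback(queue, ts_ms, te_ms, tb_ms, size_bytes, pic):
    """Decide whether (Ts, Pic) form one complete acquisition and enqueue them.

    ts_ms: latest exposure-start timestamp Ts, te_ms: callback time Te,
    tb_ms: exposure duration Tb, size_bytes: image payload Size.
    """
    tc = transfer_time_ms(size_bytes)
    ta = tb_ms + tc          # Ta: total time from exposure start to arrival
    td = te_ms - ts_ms       # Td: observed elapsed time
    if td - ta <= LOSS_THRESHOLD_MS:
        queue.append((ts_ms, pic))  # complete frame: bind Ts with Pic at the tail
        return True
    return False                    # frame loss: discard Ts and Pic
```

For example, with a 500 kB image over 1 Gbps, Tc is 4 ms, so a 10 ms exposure arriving 15 ms after Ts is judged complete (Td - Ta = 1 ms), while an arrival 30 ms after Ts is judged a lost frame.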
The following is the synchronization processing mechanism for multiple cameras:
1. First judge whether each camera's buffer queue contains new data;
2. If so, take the data at the head of each buffer queue;
3. Judge whether the exposure start times Ts of the data taken from the camera buffer queues are consistent (within a 1 ms error); if so, perform image stitching;
4. If the exposure start times Ts are inconsistent, keep the camera data whose Ts is closest to the current time and discard the heads whose Ts is earlier than that latest exposure start time;
5. Then take a new head item from each queue whose data was discarded, and return to step 3;
6. If a queue contains no data, wait 40 ms and return to step 1.
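One pass of the multi-camera synchronization loop above can be sketched as follows. The function and variable names, and the dict-of-deques representation of the per-camera queues, are illustrative assumptions:

```python
from collections import deque

def sync_step(queues, tolerance_ms=1.0):
    """One pass of the multi-camera synchronization mechanism.

    queues: dict camera_id -> deque of (Ts, Pic) pairs. Returns a dict of
    synchronized frames when all head timestamps agree within the tolerance
    (step 3); otherwise drops the stale heads (step 4) and returns None so
    the caller retries (step 5). If any queue is empty, returns None and the
    caller should wait and retry (step 6).
    """
    if any(len(q) == 0 for q in queues.values()):
        return None  # step 6: a queue is empty

    heads = {cam: q[0] for cam, q in queues.items()}       # step 2: peek heads
    newest = max(ts for ts, _ in heads.values())           # most recent Ts
    if all(newest - ts <= tolerance_ms for ts, _ in heads.values()):
        # step 3: timestamps consistent, pop all heads for stitching
        return {cam: q.popleft() for cam, q in queues.items()}
    # step 4: keep the newest head, discard heads older than it
    for cam, q in queues.items():
        if newest - q[0][0] > tolerance_ms:
            q.popleft()
    return None  # step 5: caller retries with the new heads
```

On a stale head (e.g. one camera still holding a frame 40 ms older than the others), the first call drops it and the second call returns a matched group.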
The multi-image splicing is divided into 4 main steps: extracting feature points from each image; matching the feature points; image registration and copying; and image fusion.
(1) Image feature point extraction
The SURF algorithm is mainly adopted. SURF (Speeded Up Robust Features) is a robust image detection and description algorithm. It is a variant of the SIFT algorithm: its steps are roughly the same as SIFT's, but the methods used differ and it is more efficient. SURF uses the determinant of the Hessian matrix for feature point detection and accelerates the computation with an integral image.
After feature point extraction is performed on each image with the SURF operator in OpenCV, the feature point sets to be matched for all images are obtained.
(2) Feature point matching
The feature points of the two images are matched with the FLANN matcher to obtain the matched feature point pairs. FLANN (Fast Library for Approximate Nearest Neighbors) enables fast and efficient matching. Feature matching records the feature points (KeyPoint) of the target image and the image to be matched, builds descriptors from the feature point sets, compares and filters the descriptors, and finally obtains a set of matched point correspondences.
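FLANN performs the nearest-neighbour search approximately for speed; the filtering idea behind descriptor matching can be illustrated with an exact brute-force two-nearest-neighbour ratio test. This NumPy sketch, including the function name and the 0.7 ratio, is an illustration of the principle, not the patent's implementation:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.7):
    """Match two descriptor sets with a 2-nearest-neighbour ratio test.

    For each descriptor in desc_a, find its two nearest neighbours in
    desc_b by Euclidean distance and keep the match only when the best
    distance is clearly smaller than the second best (an unambiguous
    match). Returns a list of (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:     # ratio test filter
            matches.append((i, int(best)))
    return matches
```

A FLANN-based matcher applies the same ratio filter to its approximate k-nearest-neighbour results; the brute-force search here simply makes the logic explicit.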
(3) Image registration and copying
Image registration transforms the two images into the same coordinate system and is done with the findHomography function in the OpenCV library. By the homography principle, when two images A and B are related by the same perspective transformation (the process of projecting a three-dimensional object in a spatial coordinate system onto a two-dimensional image), there exists a homography matrix H such that B = A × H. A homography is in essence a projective mapping from one plane to another. Using OpenCV's findHomography function, the homography matrix H is obtained. The projective transformation area of one image relative to the other is then derived from the homography matrix, after which the images are copied and spliced.
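Applying an estimated homography can be sketched as follows, using the common convention that a pixel p maps to p' ~ H p in homogeneous coordinates. The function names are illustrative; in practice findHomography would supply H from the matched point pairs:

```python
import numpy as np

def project_points(H, pts):
    """Apply a 3x3 homography H to Nx2 pixel coordinates: lift to
    homogeneous coordinates, multiply by H, divide by the third component."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # (x, y) -> (x, y, 1)
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

def warped_corners(H, width, height):
    """Project the four corners of an image; the resulting quadrilateral is
    the projective transformation area of that image in the other image's
    frame, i.e. where its pixels land for copying and splicing."""
    corners = [(0, 0), (width, 0), (width, height), (0, height)]
    return project_points(H, corners)
```

For a pure translation homography, the corner quadrilateral is simply the original rectangle shifted by the translation, which makes the mapping easy to verify.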
(4) Image fusion
Because the spliced images are captured by different cameras, slight differences exist in illumination, color and physical position, so image fusion must be performed in the overlapping areas of the splice.
The image fusion step mainly fuses the boundary seams of the image overlap, fuses the colors of the overlap area, and crops the invalid areas left after coordinate transformation. It mainly uses a weighted fusion algorithm: pixels in the image overlap region are combined with certain weights to synthesize a new image (when the images differ little, a simple average weight is used; when they differ more, a linear weight that varies with position along the X direction of the overlap region is used).
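A minimal NumPy sketch of the two weighting schemes just described, simple averaging and a linear weight along the X direction. The function name and the exact shape of the ramp are assumptions for illustration:

```python
import numpy as np

def blend_overlap(region_a, region_b, mode="average"):
    """Fuse the overlap region of two stitched images.

    mode="average": simple mean of the two regions, for images that
    differ only slightly. mode="linear": the weight varies linearly
    along X (columns), so pixels near A's side take A's value and
    pixels near B's side take B's, hiding the boundary seam.
    """
    a = region_a.astype(float)
    b = region_b.astype(float)
    if mode == "average":
        return (a + b) / 2.0
    # linear weighting along the X (column) direction
    w = np.linspace(1.0, 0.0, a.shape[1])        # 1 at A's edge, 0 at B's edge
    w = w.reshape(1, -1, *([1] * (a.ndim - 2)))  # broadcast over rows/channels
    return w * a + (1.0 - w) * b
```

With two constant regions of value 100 and 200, averaging yields 150 everywhere, while the linear mode ramps smoothly from 100 at A's edge to 200 at B's edge.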
And repeating the above 4 steps to completely splice the multiple images.
A monitoring system based on multi-camera pantograph video image splicing is mainly divided into two parts: the vehicle roof collecting unit and the in-vehicle processing unit.
The roof acquisition unit mainly comprises several groups of high-speed high-definition digital cameras, supplementary lighting devices and a synchronous trigger device, and monitors the running state of the pantograph in real time.
The in-vehicle processing unit mainly comprises an electric control unit and a data processing host. The electric control unit provides electrical control of each device; the data processing host acquires and stores the image data from the roof acquisition unit, and the image data are transmitted over gigabit Ethernet to the detection computer for analysis, processing and storage.
Each high-definition camera on the roof, triggered synchronously together with a supplementary lighting device, acquires high-definition pantograph images. The video image data are transmitted over gigabit network cables to the in-vehicle data processing host, which runs monitoring software that receives the pantograph images from each camera and splices them with a dedicated algorithm into a complete pantograph image. The video is then stored as a recording file using an image compression algorithm.
The foregoing shows and describes the general principles, main features and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and the description only illustrate the principle of the invention, and various changes and modifications may be made without departing from its spirit and scope, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (6)

1. A monitoring method based on multi-camera pantograph video image splicing, characterized by comprising the following steps:
synchronously acquiring images with a hardware synchronous trigger device, and synchronizing the images on a processing host;
extracting feature points from each acquired image to obtain the feature point sets to be matched for all images;
matching the feature points, and transforming the two images into the same coordinate system;
obtaining the matching coordinates, the matching area and the coordinate-transformed images, and copying and splicing the two images within the projective transformation area;
fusing the boundary seams where the two images overlap, fusing the colors of the overlap area, and cropping the invalid areas left after coordinate transformation;
and repeating the image matching, image copying and image fusion operations until the multiple images are completely spliced.
2. The monitoring method based on multi-camera pantograph video image splicing of claim 1, wherein the image synchronization comprises the following steps:
Step one: judge whether each camera's buffer queue contains new data;
Step two: if so, take the data at the head of each buffer queue;
Step three: judge whether the exposure start times Ts of the data taken from the camera buffer queues are consistent, i.e. whether they agree within a 1 ms error; if so, perform image splicing;
Step four: if the exposure start times Ts are inconsistent, keep the camera data whose Ts is closest to the current time and discard the heads whose Ts is earlier than that latest exposure start time;
Step five: then take a new head item from each queue whose data was discarded, and return to step three;
Step six: if a queue contains no data, wait 40 ms and return to step one.
3. The monitoring method based on multi-camera pantograph video image splicing of claim 1, wherein the feature point extraction adopts the SURF algorithm: feature points are extracted from each image with the SURF operator in OpenCV to obtain the feature point sets to be matched for all images.
4. The monitoring method based on multi-camera pantograph video image splicing of claim 1, wherein the feature points are matched with the FLANN matching method.
5. The monitoring method based on multi-camera pantograph video image splicing of claim 1, wherein the two images are transformed into the same coordinate system using the findHomography function in the OpenCV library.
6. A monitoring system based on multi-camera pantograph video image splicing according to the method of any one of claims 1-5, characterized by comprising a roof acquisition unit and an in-vehicle processing unit; the roof acquisition unit comprises several groups of high-speed high-definition digital cameras, several groups of supplementary lighting devices and a synchronous trigger device, the synchronous trigger device being electrically connected to the camera groups and the lighting groups respectively; the in-vehicle processing unit comprises an electric control unit and a data processing host, one end of the electric control unit being connected to the data processing host and the other end to the roof acquisition unit.
CN202210620121.0A 2022-06-02 2022-06-02 Monitoring method and system based on multi-camera pantograph video image splicing Pending CN114998105A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210620121.0A CN114998105A (en) 2022-06-02 2022-06-02 Monitoring method and system based on multi-camera pantograph video image splicing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210620121.0A CN114998105A (en) 2022-06-02 2022-06-02 Monitoring method and system based on multi-camera pantograph video image splicing

Publications (1)

Publication Number Publication Date
CN114998105A true CN114998105A (en) 2022-09-02

Family

ID=83031072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210620121.0A Pending CN114998105A (en) 2022-06-02 2022-06-02 Monitoring method and system based on multi-camera pantograph video image splicing

Country Status (1)

Country Link
CN (1) CN114998105A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597551A (en) * 2023-06-21 2023-08-15 厦门万安智能有限公司 Intelligent building access management system based on private cloud

Citations (3)

Publication number Priority date Publication date Assignee Title
CN103856727A (en) * 2014-03-24 2014-06-11 北京工业大学 Multichannel real-time video splicing processing system
CN106851130A (en) * 2016-12-13 2017-06-13 北京搜狐新媒体信息技术有限公司 A kind of video-splicing method and device
CN110782394A (en) * 2019-10-21 2020-02-11 中国人民解放军63861部队 Panoramic video rapid splicing method and system

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN103856727A (en) * 2014-03-24 2014-06-11 北京工业大学 Multichannel real-time video splicing processing system
CN106851130A (en) * 2016-12-13 2017-06-13 北京搜狐新媒体信息技术有限公司 A kind of video-splicing method and device
CN110782394A (en) * 2019-10-21 2020-02-11 中国人民解放军63861部队 Panoramic video rapid splicing method and system

Non-Patent Citations (1)

Title
曹蔡旻 (Cao Caimin): "Locomotive pantograph and roof condition detection ***" (机车受电弓及其车顶状态检测***), China Master's Theses Full-text Database, Engineering Science and Technology II, 2019 No. 02, 15 February 2019 (2019-02-15), page 4 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116597551A (en) * 2023-06-21 2023-08-15 厦门万安智能有限公司 Intelligent building access management system based on private cloud
CN116597551B (en) * 2023-06-21 2024-06-11 厦门万安智能有限公司 Intelligent building access management system based on private cloud

Similar Documents

Publication Publication Date Title
CN111023970B (en) Multi-mode three-dimensional scanning method and system
CN108257161B (en) Multi-camera-based vehicle environment three-dimensional reconstruction and motion estimation system and method
JP6548690B2 (en) Simulation system, simulation program and simulation method
WO2011074721A1 (en) Image processing device and method for matching images obtained from a plurality of wide-angle cameras
WO2021203883A1 (en) Three-dimensional scanning method, three-dimensional scanning system, and computer readable storage medium
CN112950785A (en) Point cloud labeling method, device and system
WO2019139234A1 (en) Apparatus and method for removing distortion of fisheye lens and omni-directional images
CN114998105A (en) Monitoring method and system based on multi-camera pantograph video image splicing
CN114666564A (en) Method for synthesizing virtual viewpoint image based on implicit neural scene representation
CN108460721A (en) A kind of panoramic video splicing system and method based on hardware accelerator card
JP6260174B2 (en) Surveillance image presentation system
JPH10269362A (en) Object recognition method and device therefor
CN116452573A (en) Defect detection method, model training method, device and equipment for substation equipment
CN113724335B (en) Three-dimensional target positioning method and system based on monocular camera
CN107493460A (en) A kind of image-pickup method and system
CN117422858A (en) Dual-light image target detection method, system, equipment and medium
CN114697528A (en) Image processor, electronic device and focusing control method
CN107094230A (en) A kind of method that image and video are obtained using many airspace data integration technologies
CN112116703A (en) 3D camera and infrared light scanning algorithm for aligning point cloud and color texture
CN116645416A (en) Three-dimensional target mechanical arm grabbing pose estimation method
CN109801339B (en) Image processing method, apparatus and storage medium
CN111024045A (en) Stereo measurement self-rotating camera system and prediction and information combination method thereof
JPWO2020039897A1 (en) Station monitoring system and station monitoring method
JP2002247585A (en) Method for transmitting moving image, method for receiving moving image, program for moving image transmitting processing, recording medium for the program, program for moving image receiving processing, recording medium for the program
CN115880591A (en) Real-time target detection method based on unmanned aerial vehicle video stream

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination