CN115423812A - Panoramic monitoring planarization display method - Google Patents

Panoramic monitoring planarization display method

Info

Publication number
CN115423812A
Authority
CN
China
Prior art keywords
projection
network
planarization
panoramic
monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211380122.9A
Other languages
Chinese (zh)
Other versions
CN115423812B (en)
Inventor
刘寒松
王国强
王永
刘瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonli Holdings Group Co Ltd
Original Assignee
Sonli Holdings Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sonli Holdings Group Co Ltd
Priority to CN202211380122.9A
Publication of CN115423812A
Application granted
Publication of CN115423812B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/08 Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A30/00 Adapting or protecting infrastructure or their operation
    • Y02A30/60 Planning or developing urban green infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention belongs to the technical field of video planarization display, and relates to a panoramic monitoring planarization display method.

Description

Panoramic monitoring planarization display method
Technical Field
The invention belongs to the technical field of video planarization display and relates to a panoramic monitoring planarization display method for smart-city road monitoring, realizing omnidirectional, blind-spot-free video monitoring of the smart city.
Background
At present, smart cities are developing rapidly and intelligent equipment is being applied across all industries. Urban surveillance coverage is essentially complete, which greatly aids evidence collection for urban crime and also deters it. Because monitoring equipment now covers nearly the whole city, higher demands are placed on security personnel: surveillance video must be reviewed promptly, yet watching it consumes a large amount of time.
At present, city monitoring relies mainly on ordinary single-angle cameras, which have many shortcomings: the monitoring range is narrow, large blind areas exist, and the equipment cannot perform intelligent video processing. A dome (PTZ) camera can compensate for blind areas but shows only a local region; the viewer must adjust it continuously to see the panorama, and blind areas still arise during rotation.
Disclosure of Invention
The invention aims to overcome the defects of the prior art. Addressing the problems that the traditional city safety monitoring mode cannot achieve omnidirectional, blind-spot-free coverage, that ordinary monitoring offers no intelligence, and that dome-camera monitoring leaves blind areas, it provides a panoramic monitoring planarization display method based on inter-frame consistency.
In order to achieve this purpose, the specific process of the invention for realizing the panoramic monitoring planarization display comprises the following steps:
(1) Constructing a panoramic monitoring planarization display data set: collecting panoramic-camera monitoring video segments of urban roads to construct a panoramic monitoring planarization display data set;
(2) City panorama monitoring projection: based on the data set constructed in step (1), projecting each video frame with a combination of equirectangular projection (ERP) and cube-map, i.e. hexahedral, projection (CMP);
(3) Object detection based on the combination of ERP and CMP: constructing one branch taking the ERP as input and one branch taking the CMP as input, and performing object detection with each branch;
(4) Object detection result correction: back-projecting the center points of the detection results obtained in step (3) to obtain the spherical coordinates of the object center points, alleviating the stretching deformation of objects;
(5) Adaptive range search based on object size: for the two projection modes, searching an adaptive range determined by the object size; within the range around the current center point, finding the center-point coordinates that belong to the same object as the current coordinates by a neighborhood-mean approach, and merging them into a single coordinate point to obtain the spherical coordinates;
(6) Local area projection based on the object detection center point: performing a secondary projection on the spherical coordinates (θ, φ) obtained in step (5) to obtain an image block of the object;
(7) Object segmentation based on image blocks: feeding the image block obtained in step (6) into an object segmentation network to obtain the cut-out of the object and its position information;
(8) ERP mapping based on image-block object segmentation: pasting the object cut-out and position information obtained in step (7) onto the ERP to obtain an ERP map;
(9) Repairing misplaced map areas based on a GAN (Generative Adversarial Network): filling the missing areas of the ERP map obtained in step (8) with a GAN to obtain the planarization display result of the panoramic content;
(10) Object detection refinement based on panoramic content planarization: using the planarization display result to train the panoramic video object detection network, thereby improving its performance.
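The ten steps above can be summarized as a data-flow sketch. Every stage below is a stub (dummy detection boxes, an identity GAN, a fixed patch crop), so only the composition of the steps is illustrated, not the patent's actual networks; all function names and placeholder behaviors are assumptions for illustration.

```python
import numpy as np

def erp_project(frame):       return frame                      # step (2): ERP frame
def cmp_project(frame):       return [frame] * 6                # step (2): six cube faces (stub)
def detect(img):              return [(4, 4, 2, 2)]             # step (3): dummy boxes (x, y, w, h)
def to_sphere(box):           x, y, w, h = box; return (x + w / 2, y + h / 2)  # step (4): center
def merge(points):            return list(set(points))          # step (5): merge duplicate centers
def local_patch(frame, c):    return frame[:4, :4]              # step (6): secondary projection (stub)
def segment(patch):           return patch > 0                  # step (7): dummy foreground mask
def paste(erp, patch, mask):                                    # step (8): ERP mapping
    out = erp.copy(); out[:4, :4][mask] = patch[mask]; return out
def gan_fill(erp):            return erp                        # step (9): GAN inpainting placeholder

def planarize(frame):
    """Compose steps (2)-(9) on one frame; step (10) would reuse the output for training."""
    erp, faces = erp_project(frame), cmp_project(frame)
    boxes = detect(erp) + [b for f in faces for b in detect(f)]
    centers = merge([to_sphere(b) for b in boxes])
    out = erp
    for c in centers:
        patch = local_patch(frame, c)
        out = paste(out, patch, segment(patch))
    return gan_fill(out)
```

Running `planarize` on a dummy frame exercises the whole chain end to end.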
As a further technical scheme of the invention, the projection result of step (2) is:

E_i = P_ERP(V_i),  C_{i,j} = P_CMP(V_i, j)

where E_i and C_{i,j} respectively denote the ERP and CMP projection results, P_ERP and P_CMP denote the two projection modes, V_i denotes the i-th frame of video V, and j denotes the index of the hexahedral projection face.
As a further technical scheme of the invention, the two branches of step (3) are defined as:

B_E,i = D_ERP(E_i),  B_C,i,j = D_CMP(C_{i,j})

where D_ERP denotes the equirectangular-projection object detection network and D_CMP the hexahedral-projection object detection network; j denotes the index of the face, and since the hexahedral projection has 6 faces, j = 1, ..., 6; D_CMP thus comprises 6 networks in total with shared parameters, and both D_ERP and D_CMP adopt YOLO-v5 as the backbone network.
As a further technical scheme of the invention, the specific process of step (4) is as follows: from the ERP detection-box coordinates (x, y, w, h), the anchor-box center is obtained as

c_x = x + w/2,  c_y = y + h/2,

and correspondingly, from each CMP detection box, the center (c'_x, c'_y) of its anchor frame is obtained in the same way; based on the center coordinates, the spherical coordinates are obtained:

θ = 2π c_x / W - π,
φ = π c_y / H - π/2,

where W and H denote the width and height of the ERP image; the position of the object center point of the hexahedral projection in the panoramic video is obtained by the same method.
As a further technical scheme of the invention, the projection of step (6) adopts the adaptive-range mode.
As a further technical solution of the present invention, the object segmentation network of step (7) is trained on the DAVIS and YouTube-Objects data sets; the object segmentation network (SegmentNet) comprises a VGGNet encoder and a segmentation output module.
Compared with the prior art, the invention addresses the problems that the traditional city safety monitoring mode cannot achieve omnidirectional, blind-spot-free coverage, that existing panoramic monitoring requires continuous adjustment of the viewing angle, and that blind areas arise during that adjustment. First, coarse object positioning information is generated by combining the global projection (ERP) and the local projection (CMP), and accurate positioning information is generated by back-projecting the coarse positions. After the positions are refined, an object segmentation network is introduced to cut the object out of the background area, so that the segmented foreground object can be pasted onto the global projection. To compensate for the dislocation and missing content caused by pasting, a detail-supplement network based on a spatio-temporal GAN is introduced. The method improves the practical applicability of planarized panoramic monitoring and achieves full coverage of smart-city safety monitoring.
Drawings
FIG. 1 is a flow diagram of the panoramic monitoring planarization display implemented in accordance with the present invention.
FIG. 2 is a network block diagram of the panoramic monitoring planarization display implemented in accordance with the present invention.
Detailed Description
The invention will be further described by way of an example with reference to the accompanying drawings, without in any way limiting the scope of the invention.
The embodiment is as follows:
in this embodiment, the flow shown in fig. 1 and the network shown in fig. 2 are used to implement panoramic monitoring and planarization display, and the specific process is as follows:
(1) Constructing a city panorama monitoring planarization display dataset
A city panorama monitoring planarization display data set V is constructed from the collected panoramic-camera video segments; since surveillance video captured by panoramic cameras is currently scarce, DAVIS and the panoramic video data set YT-ALL serve as the underlying processing data sets;
(2) City panorama monitoring projection mode analysis (ERP and CMP, global and local)
Based on the collected high-order panoramic monitoring video data set V (frame V_i ∈ V), a combination of equirectangular projection (ERP) and cube-map projection (CMP) is adopted to keep information loss after projection as small as possible. The equirectangular projection provides the relative positions of objects and object-semantic collaborative information, ensuring that global information of the panoramic video is preserved, while the cube-map projection provides object detail information, ensuring that local-area information is preserved and largely avoiding the stretching and distortion of objects caused by panoramic projection; conversely, the problems of large objects being cut apart and global information being lost in the cube-map projection are resolved by combining it with the global projection. The detailed definition is:

E_i = P_ERP(V_i),  C_{i,j} = P_CMP(V_i, j)

where E_i and C_{i,j} respectively denote the ERP and CMP projection results, P_ERP and P_CMP denote the two projection modes, V_i denotes the i-th frame of video V, and j denotes the index of the hexahedral projection face;
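The ERP-to-CMP relationship above can be sketched numerically. The following is a minimal cube-face sampler built on nearest-neighbour lookup; the face naming and axis conventions are assumptions for illustration, not the patent's exact projection.

```python
import numpy as np

def erp_to_cube_face(erp: np.ndarray, face: str, face_size: int) -> np.ndarray:
    """Sample one face of the cube-map (CMP) from an equirectangular (ERP) frame.

    Face names follow a common convention (+x front, -x back, +y right,
    -y left, +z up, -z down); this ordering is an assumption, as the patent
    does not fix one.
    """
    H, W = erp.shape[:2]
    a = np.linspace(-1, 1, face_size)          # normalized face coordinates
    u, v = np.meshgrid(a, a)
    ones = np.ones_like(u)
    # Ray direction through each face pixel
    x, y, z = {
        '+x': ( ones,  u, -v), '-x': (-ones, -u, -v),
        '+y': (-u,  ones, -v), '-y': ( u, -ones, -v),
        '+z': ( v,  u,  ones), '-z': (-v,  u, -ones),
    }[face]
    theta = np.arctan2(y, x)                                  # longitude in [-pi, pi]
    phi = np.arcsin(z / np.sqrt(x * x + y * y + z * z))       # latitude in [-pi/2, pi/2]
    # Map sphere coordinates to ERP pixel coordinates (nearest neighbour)
    px = ((theta + np.pi) / (2 * np.pi) * (W - 1)).round().astype(int)
    py = ((np.pi / 2 - phi) / np.pi * (H - 1)).round().astype(int)
    return erp[py, px]
```

Sampling all six faces of a frame yields the C_{i,j} inputs for the CMP detection branch.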
(3) Object detection based on a combination of equirectangular projection (ERP) and cube-map projection (CMP)
To fully exploit the complementary characteristics of the ERP and the CMP, fusing the two so that objects are not locally stretched while global information is still perceived, this embodiment builds two branches, one taking the ERP as input and one taking the CMP as input, and performs object detection with both. The detailed definition is:

B_E,i = D_ERP(E_i),  B_C,i,j = D_CMP(C_{i,j})

where D_ERP denotes the equirectangular-projection object detection network and D_CMP the cube-map-projection object detection network; j denotes the index of the face, and since the hexahedral projection has 6 faces in total, j = 1, ..., 6; D_CMP comprises 6 networks in total with shared parameters. To improve detection precision, YOLO-v5 is adopted as the backbone of both D_ERP and D_CMP;
(4) Object detection result correction
The anchor frame of the object can be obtained through the step (3), however, the detection result is the result of regional stretching, the distortion caused by stretching of the object exists, in order to solve the problem of the object distortion caused by stretching of the object, the problem of stretching deformation of the object is relieved by means of back projection of the central point of the detection result, and according to the detection results output by the step (3) in isometric projection and hexahedron projection, the coordinate is used for outputting
Figure 686823DEST_PATH_IMAGE013
The coordinates of the center point of the anchor frame can be obtained
Figure 277204DEST_PATH_IMAGE014
And
Figure 115847DEST_PATH_IMAGE015
and correspondingly
Figure 272022DEST_PATH_IMAGE016
The coordinates of the center point of the corresponding anchor frame can be obtained
Figure 272339DEST_PATH_IMAGE017
And
Figure 350017DEST_PATH_IMAGE018
based on
Figure 789088DEST_PATH_IMAGE017
And
Figure 3032DEST_PATH_IMAGE018
the spherical coordinates of the center point can be obtained,
Figure 439829DEST_PATH_IMAGE019
Figure 801541DEST_PATH_IMAGE020
Figure 214942DEST_PATH_IMAGE021
Figure 345709DEST_PATH_IMAGE022
the position of the central point of the object projected by the hexahedron in the panoramic video can be obtained by the same method
Figure 953408DEST_PATH_IMAGE023
(5) Adaptive range search based on object size
After the spherical coordinates of the object center points are obtained in step (4), the center points deviate and misalign because they come from different projection forms (CMP and ERP). This embodiment adopts a neighborhood-mean approach: within the range R1 around the current center point, the center-point coordinates that belong to the same object as the current coordinates are found first, then merged into a single coordinate point. Because the required search range differs when object sizes differ, an adaptive range search based on the object size S is adopted; S depends on the position of the object, since even objects of the same size differ at different positions of the ERP due to stretching. The detection box (x, y, w, h) is therefore projected to spherical coordinates by means of back-projection:

c_x = x + w/2,  c_y = y + h/2,
θ = 2π c_x / W - π,  φ = π c_y / H - π/2;
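The neighborhood-mean merging step can be sketched as a greedy clustering of the detected centers. The specific radius law R1 = alpha * S is hypothetical; the patent only states that the search range adapts to the object size S.

```python
import math

def merge_centers(points, sizes, alpha=0.5):
    """Greedy neighbourhood-mean merge of detected center points.

    points: list of (theta, phi) spherical centers from the ERP and CMP
    branches; sizes: matching object sizes S.  The search radius R1 grows
    with the object size (R1 = alpha * S, a hypothetical form).  Centers
    closer than R1 to the current center are averaged into one point.
    """
    merged, used = [], [False] * len(points)
    for i, (p, s) in enumerate(zip(points, sizes)):
        if used[i]:
            continue
        cluster, r1 = [p], alpha * s
        for k in range(i + 1, len(points)):
            if not used[k] and math.dist(p, points[k]) <= r1:
                used[k] = True
                cluster.append(points[k])
        used[i] = True
        merged.append((sum(c[0] for c in cluster) / len(cluster),
                       sum(c[1] for c in cluster) / len(cluster)))
    return merged
```

Two detections of the same object from different projections thus collapse to one spherical coordinate, while distant objects stay separate.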
(6) Local area projection based on object detection center point
Although the object detection result refined in step (5) can locate the position of the object, the detected object still suffers stretching deformation (ERP) and splitting across cube faces (CMP). A secondary positioning mode is therefore adopted: the spherical coordinates (θ, φ) obtained in step (5) undergo a secondary projection. The projection uses an adaptive range R2, whose size is computed the same way as R1; to ensure segmentation of the salient foreground object, this embodiment enlarges R2 by a factor of 1.3. After the secondary projection, the image block of the object is obtained:

Patch = LocalProjection(θ, φ, R2);
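The secondary projection can be sketched as a tangent-plane (gnomonic) crop around the detected center. Gnomonic sampling is one plausible reading of the patent's "local area projection", not its literal formula, and the angle conventions match the earlier assumed ERP layout.

```python
import numpy as np

def local_projection(erp, theta0, phi0, r2, patch_size=9):
    """Cut an image block around a detected center (theta0, phi0) by
    sampling the ERP frame through a gnomonic (tangent-plane) projection.
    The angular half-extent r2 stands in for the adaptive range R2."""
    H, W = erp.shape[:2]
    t = np.tan(r2)
    xx, yy = np.meshgrid(np.linspace(-t, t, patch_size),
                         np.linspace(-t, t, patch_size))
    rho = np.hypot(xx, yy)
    c = np.arctan(rho)                 # angular distance from the patch center
    # Inverse gnomonic projection: tangent-plane point -> (latitude, longitude)
    phi = np.arcsin(np.cos(c) * np.sin(phi0)
                    + yy * np.sin(c) * np.cos(phi0) / np.maximum(rho, 1e-12))
    theta = theta0 + np.arctan2(xx * np.sin(c),
                                rho * np.cos(phi0) * np.cos(c)
                                - yy * np.sin(phi0) * np.sin(c))
    # Sphere -> ERP pixel grid (nearest neighbour)
    px = (((theta + np.pi) % (2 * np.pi)) / (2 * np.pi) * (W - 1)).round().astype(int)
    py = ((phi + np.pi / 2) / np.pi * (H - 1)).round().astype(int)
    return erp[np.clip(py, 0, H - 1), np.clip(px, 0, W - 1)]
```

Unlike a plain rectangular crop of the ERP, this keeps the object's local geometry close to undistorted, which is the point of the secondary projection.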
(7) Object segmentation based on image blocks
An object segmentation result is obtained from each image block so that it can be pasted onto the ERP, realizing the panoramic video planarization display. The object segmentation network is trained on the DAVIS and YouTube-Objects data sets; the network (SegmentNet) mainly comprises a VGGNet encoder and a segmentation output module. To preserve the output size, the last two MaxPool layers of VGGNet are removed and the remaining layers load the original weights. The network input is 474 x 474, and the output is Seg = SegmentNet(Patch);
(8) ERP mapping based on image block object segmentation
Step (7) yields the cut-out of the object and its position information; pasting this detection result onto the ERP displays the complete panoramic content while leaving the foreground objects unstretched;
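The paste step can be sketched as a masked composite of the segmented patch onto the ERP frame. Treating the object's position information as plain pixel offsets (top, left) is a simplification for illustration; the patent derives the position from step (7).

```python
import numpy as np

def paste_to_erp(erp, patch, mask, top, left):
    """Paste a segmented foreground patch back onto the ERP frame.

    mask is the binary segmentation output (nonzero = foreground); only
    foreground pixels overwrite the ERP, so the background of the patch
    does not damage the surrounding panorama.
    """
    out = erp.copy()
    h, w = patch.shape[:2]
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = np.where(mask.astype(bool), patch, region)
    return out
```

The original frame is left untouched; the composite is returned as a new array, which keeps repeated pastes for multiple objects independent.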
(9) Map misplaced region repair based on a GAN (Generative Adversarial Network)
Although step (8) can display the panoramic content and global information, local areas still suffer from information dislocation and from local information loss caused by object cutting. A GAN is therefore adopted to fill the lost areas; since foreground objects need no filling, only background areas are filled, which greatly reduces the filling complexity. In this embodiment the lost areas are generated in a covering (inpainting) manner; because few GANs exist for panoramic video, the GAN is trained on the panoramic video data set YT-ALL and perceives temporal information through 3D convolutions;
(10) Object detection refinement based on panoramic content planarization
The planarized display result of the panoramic content generated in step (9) can be used to train the panoramic video object detection network, thereby improving its performance.
Network structures, modules, and computing processes not described in detail herein are all common in the art.
It is noted that the disclosed embodiments are intended to aid in further understanding of the invention, but those skilled in the art will appreciate that: various substitutions and modifications are possible without departing from the spirit and scope of the invention and appended claims. Therefore, the invention should not be limited to the embodiments disclosed, but the scope of the invention is defined by the appended claims.

Claims (6)

1. A panoramic monitoring planarization display method, characterized by comprising the following specific process:
(1) Constructing a panoramic monitoring planarization display data set: collecting panoramic-camera monitoring video segments of urban roads to construct a panoramic monitoring planarization display data set;
(2) City panorama monitoring projection: based on the data set constructed in step (1), projecting each video frame with a combination of equirectangular projection and hexahedral projection;
(3) Object detection based on the combination of equirectangular projection and hexahedral projection: constructing one branch taking the equirectangular projection as input and one branch taking the hexahedral projection as input, and performing object detection with each branch;
(4) Object detection result correction: back-projecting the center points of the detection results obtained in step (3) to obtain the spherical coordinates of the object center points, alleviating the stretching deformation of objects;
(5) Adaptive range search based on object size: for the two projection modes, searching an adaptive range determined by the object size; within the range around the current center point, finding the center-point coordinates that belong to the same object as the current coordinates by a neighborhood-mean approach, and merging them into a single coordinate point to obtain the spherical coordinates;
(6) Local area projection based on the object detection center point: performing a secondary projection on the spherical coordinates obtained in step (5) to obtain an image block of the object;
(7) Object segmentation based on image blocks: feeding the image block obtained in step (6) into an object segmentation network to obtain the cut-out of the object and its position information;
(8) ERP mapping based on image-block object segmentation: pasting the object cut-out and position information obtained in step (7) onto the ERP to obtain an ERP map;
(9) Repairing misplaced map areas based on the GAN network: filling the missing areas of the ERP map obtained in step (8) with a GAN to obtain the planarization display result of the panoramic content;
(10) Object detection refinement based on panoramic content planarization: using the planarization display result to train the panoramic video object detection network, thereby improving its performance.
2. The panoramic monitoring planarization display method of claim 1, wherein the projection result of step (2) is:

E_i = P_ERP(V_i),  C_{i,j} = P_CMP(V_i, j)

where E_i and C_{i,j} respectively denote the ERP and CMP projection results, P_ERP and P_CMP denote the two projection modes, V_i denotes the i-th frame of video V, and j denotes the index of the hexahedral projection face.
3. The panoramic monitoring planarization display method of claim 2, wherein the two branches of step (3) are defined as:

B_E,i = D_ERP(E_i),  B_C,i,j = D_CMP(C_{i,j})

where D_ERP denotes the equidistant-projection object detection network and D_CMP the hexahedral-projection object detection network; j denotes the index of the face, the hexahedral projection has 6 faces in total, j = 1, ..., 6; D_CMP comprises 6 networks in total with shared parameters, and both D_ERP and D_CMP adopt YOLO-v5 as the backbone network.
4. The panoramic monitoring planarization display method of claim 3, wherein the specific process of step (4) is: from the detection-box coordinates (x, y, w, h), the anchor-box center is obtained as

c_x = x + w/2,  c_y = y + h/2,

and correspondingly the center (c'_x, c'_y) of the corresponding anchor frame is obtained; based on the center coordinates, the spherical coordinates are obtained:

θ = 2π c_x / W - π,
φ = π c_y / H - π/2,

where W and H denote the width and height of the ERP image; the position of the object center point of the hexahedral projection in the panoramic video is obtained by the same method.
5. The panoramic monitoring planarization display method of claim 4, wherein the projection of step (6) adopts the adaptive-range mode.
6. The panoramic monitoring planarization display method of claim 5, wherein in step (7) the object segmentation network is trained on the DAVIS and YouTube-Objects data sets, and the object segmentation network comprises a VGGNet encoder network and a segmentation output module.
CN202211380122.9A 2022-11-05 2022-11-05 Panoramic monitoring planarization display method Active CN115423812B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211380122.9A CN115423812B (en) 2022-11-05 2022-11-05 Panoramic monitoring planarization display method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211380122.9A CN115423812B (en) 2022-11-05 2022-11-05 Panoramic monitoring planarization display method

Publications (2)

Publication Number Publication Date
CN115423812A true CN115423812A (en) 2022-12-02
CN115423812B CN115423812B (en) 2023-04-18

Family

ID=84207475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211380122.9A Active CN115423812B (en) 2022-11-05 2022-11-05 Panoramic monitoring planarization display method

Country Status (1)

Country Link
CN (1) CN115423812B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115767040A (en) * 2023-01-06 2023-03-07 松立控股集团股份有限公司 360-degree panoramic monitoring automatic cruise method based on interactive continuous learning
CN117319610A (en) * 2023-11-28 2023-12-29 松立控股集团股份有限公司 Smart city road monitoring method based on high-order panoramic camera region enhancement

Citations (7)

Publication number Priority date Publication date Assignee Title
RU2654127C1 (en) * 2016-12-20 2018-05-16 Федеральное государственное автономное образовательное учреждение высшего образования "Белгородский государственный национальный исследовательский университет" (НИУ "БелГУ") Method for generating a digital panoramic image
CN109429561A (en) * 2017-06-23 2019-03-05 联发科技股份有限公司 The method and device that motion vector in immersion coding and decoding video derives
CN112529006A (en) * 2020-12-18 2021-03-19 平安科技(深圳)有限公司 Panoramic picture detection method and device, terminal and storage medium
CN113038123A (en) * 2021-03-22 2021-06-25 上海大学 No-reference panoramic video quality evaluation method, system, terminal and medium
CN113206992A (en) * 2021-04-20 2021-08-03 聚好看科技股份有限公司 Method for converting projection format of panoramic video and display equipment
CN113947671A (en) * 2021-09-23 2022-01-18 广东科学技术职业学院 Panoramic 360-degree image segmentation and synthesis method, system and medium
CN115049935A (en) * 2022-08-12 2022-09-13 松立控股集团股份有限公司 Urban illegal building division detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAOYUN YAN ET AL.: "SALIENT REGION DETECTION VIA COLOR SPATIAL DISTRIBUTION DETERMINED GLOBAL CONTRASTS", IEEE *
QIU SENSEN ET AL.: "A Saliency Detection Model for Stereoscopic Panoramic Images", Laser & Optoelectronics Progress *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115767040A (en) * 2023-01-06 2023-03-07 Sonli Holdings Group Co Ltd 360-degree panoramic monitoring automatic cruise method based on interactive continuous learning
CN117319610A (en) * 2023-11-28 2023-12-29 Sonli Holdings Group Co Ltd Smart city road monitoring method based on high-order panoramic camera region enhancement
CN117319610B (en) * 2023-11-28 2024-01-30 Sonli Holdings Group Co Ltd Smart city road monitoring method based on high-order panoramic camera region enhancement

Also Published As

Publication number Publication date
CN115423812B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN115423812B (en) Panoramic monitoring planarization display method
CN111047510B (en) Large-field-angle image real-time splicing method based on calibration
CN102447925B (en) Method and device for synthesizing virtual viewpoint image
US9299152B2 (en) Systems and methods for image depth map generation
US8644596B1 (en) Conversion of monoscopic visual content using image-depth database
CN101859433B (en) Image mosaic device and method
WO2018176926A1 (en) Real-time correction method and system for self-learning multi-channel image fusion
CN110660125B (en) Three-dimensional modeling device for power distribution network system
CN103763479A (en) Splicing device for real-time high speed high definition panoramic video and method thereof
US10154242B1 (en) Conversion of 2D image to 3D video
CN110992484B (en) Display method of traffic dynamic video in real scene three-dimensional platform
CN103607568A (en) Stereo street scene video projection method and system
CN112651881B (en) Image synthesizing method, apparatus, device, storage medium, and program product
CN102637293A (en) Moving image processing device and moving image processing method
CN110706151B (en) Video-oriented non-uniform style migration method
CN101510304B (en) Method, device and pick-up head for dividing and obtaining foreground image
CN107451952A (en) Panoramic video stitching and fusion method, device and system
CN111047709A (en) Binocular vision naked eye 3D image generation method
CN102609950A (en) Two-dimensional video depth map generation process
CN103533313A (en) Geographical position based panoramic electronic map video synthesis display method and system
CN103902730A (en) Thumbnail generation method and system
CN112712487A (en) Scene video fusion method and system, electronic equipment and storage medium
CN115393192A (en) Multi-point multi-view video fusion method and system based on general plane diagram
CN102750694B (en) Local optimum belief propagation algorithm-based binocular video depth map solution method
CN115376028A (en) Target detection method based on dense feature point splicing and improved YOLOV5

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant