CN115423812A - Panoramic monitoring planarization display method - Google Patents
- Publication number
- CN115423812A (application CN202211380122.9A)
- Authority
- CN
- China
- Prior art keywords
- projection
- network
- planarization
- panoramic
- monitoring
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/08—Projecting images onto non-planar surfaces, e.g. geodetic screens
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A30/00—Adapting or protecting infrastructure or their operation
- Y02A30/60—Planning or developing urban green infrastructure
Abstract
The invention belongs to the technical field of video planarization display, and relates to a panoramic monitoring planarization display method.
Description
Technical Field
The invention belongs to the technical field of video planarization display, and relates to a panoramic monitoring planarization display method which is used for road monitoring in smart cities and realizes omnidirectional, blind-spot-free video monitoring of the smart city.
Background
At present, smart cities are developing rapidly and intelligent devices are being deployed across all industries. Urban video surveillance coverage is now largely complete, which has greatly aided evidence collection for urban crime and has also deterred crime. Because monitoring equipment has achieved near-full coverage of cities, higher demands are placed on surveillance personnel: footage must be reviewed promptly, yet watching surveillance video takes a great deal of time.
At present, city monitoring relies mainly on ordinary single-angle cameras, which have many shortcomings: the monitoring range is narrow, leaving very large blind areas, and the equipment cannot perform intelligent video processing. Dome (PTZ) cameras can compensate for blind areas, but they show only a local region; the viewer must continually adjust the camera to see the whole scene, and blind areas still occur during rotation.
Disclosure of Invention
The invention aims to overcome the defects of the prior art. Aiming at the problems that the traditional city safety monitoring mode cannot achieve omnidirectional, blind-spot-free monitoring, that common monitoring modes cannot provide intelligent monitoring, and that dome-camera monitoring leaves blind areas, the invention provides a panoramic monitoring planarization display method based on inter-frame consistency.
In order to achieve the purpose, the specific process of the invention for realizing the panoramic monitoring planarization display comprises the following steps:
(1) Constructing a panoramic monitoring planarization display data set: collecting a panoramic camera monitoring video frequency segment of an urban road to construct a panoramic monitoring planarization display data set;
(2) City panorama monitoring projection: based on the panoramic monitoring planarization display data set constructed in step (1), projecting each video frame by combining equirectangular projection (ERP) and cubemap/hexahedral projection (CMP);
(3) Object detection based on a combination of equirectangular projection (ERP) and cubemap projection (CMP): constructing one branch taking the ERP as input and one branch taking the CMP as input, and performing object detection with each of the two branches;
(4) And (3) correcting an object detection result: correcting the central point of the detection result obtained in the step (3) in an inverse projection mode to obtain the spherical coordinates of the central point of the object, and relieving the problem of stretching deformation of the object;
(5) Adaptive range search based on object size: for the two projection modes, equirectangular and cubemap, searching an adaptive range based on the size of the object; within the range around the current center point, using a neighborhood-mean approach to find the center-point coordinates that belong to the same object as the current coordinates, and merging them into a single coordinate point to obtain the spherical coordinates;
(6) Local area projection based on the object detection center point: performing a secondary projection on the spherical coordinates obtained in step (5) to obtain an image block of the object;
(7) Object segmentation based on image blocks: inputting the image block obtained in the step (6) into an object segmentation network for object segmentation to obtain a cutting result of the object and position information of the object;
(8) ERP mapping based on image block object segmentation: pasting the cutting result of the object obtained in the step (7) and the position information of the object to an ERP to obtain an ERP map;
(9) Map misplaced area repair based on a GAN (Generative Adversarial Network): filling the missing areas in the ERP map obtained in step (8) by adopting a GAN, to obtain a planarization display result of the generated panoramic content;
(10) Object detection refinement based on panoramic content planarization: and the planarization display result is used for training the panoramic video object detection network, so that the performance of the object detection network is improved.
As a further technical scheme of the invention, the projection result of the step (2) is as follows:
E_i = P_ERP(V_i) and C_{i,j} = P_CMP(V_i, j), where E_i and C_{i,j} respectively represent the ERP and CMP projection results, P_ERP and P_CMP respectively represent the two projection operators, V_i represents the picture of the i-th frame from the video V, and j represents the index of the hexahedral (cubemap) projection face.
As a further technical scheme of the invention, the detailed definitions of the two branches in the step (3) are as follows:
D_E and D_C respectively represent the equirectangular-projection object detection network and the cubemap-projection object detection network, with B_E = D_E(E_i) and B_{C,j} = D_C(C_{i,j}); j represents the face index, and since the hexahedral (cubemap) projection has 6 faces, j = 1, ..., 6. The cubemap-projection object detection network D_C comprises 6 networks in total with shared parameters, and both D_E and D_C adopt YOLO-V5 as the backbone network.
As a further technical scheme of the invention, the specific process of the step (4) is as follows:
from the anchor-box coordinates (x1, y1, x2, y2) output on the ERP, the center-point coordinates (c_x, c_y) are obtained and, correspondingly, the center-point coordinates (c_x^j, c_y^j) of the corresponding anchor boxes on the CMP faces are obtained; based on (c_x, c_y) and (c_x^j, c_y^j), the spherical coordinates (theta, phi) are obtained by inverse projection.
The position of the object center point of the cubemap projection in the panoramic video is obtained by the same method.
As a further technical scheme of the invention, the projection mode in the step (6) adopts a self-adaptive range mode.
As a further technical solution of the present invention, the object segmentation network in step (7) is trained by using a DAVIS dataset and a Youtube-Objects dataset, and the object segmentation network (SegmentNet network) includes an encoding network (encoder) VggNet and a network segmentation output module.
Compared with the prior art, the invention addresses the problems that traditional city safety monitoring cannot achieve omnidirectional, blind-spot-free coverage, that existing panoramic monitoring requires the viewing angle to be adjusted continuously, and that blind areas arise during this adjustment. First, coarse object localization information is generated by combining a global projection (ERP) and a local projection (CMP), and accurate localization information is generated by back-projecting the coarse localization. After the positions are refined, an object segmentation network is introduced to cut objects out of the background region, so that segmented foreground objects can be pasted onto the global projection. To compensate for the dislocation and missing detail caused by pasting, a detail-completion network based on a spatio-temporal GAN is introduced. This improves the practical applicability of planarized panoramic monitoring and achieves full coverage of smart-city safety monitoring.
Drawings
FIG. 1 is a block diagram of a process for implementing a panoramic monitoring planarization show in accordance with the present invention.
FIG. 2 is a network block diagram of a panoramic monitoring planarization display implemented in accordance with the present invention.
Detailed Description
The invention will be further described by way of examples, without in any way limiting the scope of the invention, with reference to the accompanying drawings.
The embodiment is as follows:
in this embodiment, the flow shown in fig. 1 and the network shown in fig. 2 are used to implement panoramic monitoring and planarization display, and the specific process is as follows:
(1) Constructing a city panorama monitoring planarization display dataset
A city panoramic monitoring planarization display data set V is constructed from the collected panoramic camera video segments; because surveillance video captured by panoramic cameras is currently scarce, the DAVIS data set and the panoramic video data set YT-ALL are used as the underlying processing data sets;
(2) City panorama monitoring projection mode analysis (ERP and CMP, global and local)
Based on the collected high-mounted panoramic monitoring video data set V (frame V_i in V), equirectangular projection (ERP) and cubemap projection (CMP) are combined to minimize information loss after projection. The equirectangular projection provides the relative positions of objects and object semantic co-occurrence information, ensuring that little global information of the panoramic video is lost, while the cubemap projection provides object detail information, ensuring that little local-area information is lost; this largely avoids the stretching and distortion of objects caused by panoramic projection. The problems of large objects being cut apart and global information being lost under the cubemap projection are likewise solved by combining it with the global projection. The detailed definition is as follows:
E_i = P_ERP(V_i) and C_{i,j} = P_CMP(V_i, j), where E_i and C_{i,j} respectively represent the ERP and CMP projection results, P_ERP and P_CMP respectively represent the two projection operators, V_i represents the i-th frame picture from the video V, and j represents the index of the cubemap projection face;
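The projection operators themselves are not specified in the text. A minimal sketch of the two mappings involved — an ERP pixel to a sphere direction, and a sphere direction to one of the six CMP faces — might look as follows (the face ordering and symbol names are our assumptions, not the patent's):

```python
import math

def erp_pixel_to_sphere(u, v, width, height):
    """Map an ERP pixel (u, v) to spherical longitude/latitude in radians."""
    lon = (u / width - 0.5) * 2.0 * math.pi   # longitude in [-pi, pi]
    lat = (0.5 - v / height) * math.pi        # latitude  in [-pi/2, pi/2]
    return lon, lat

def sphere_to_cube_face(lon, lat):
    """Map a sphere direction to a CMP face index and local coords in [-1, 1]."""
    x = math.cos(lat) * math.cos(lon)         # unit direction vector
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                 # front/back faces
        face, m, a, b = (0 if x > 0 else 1), ax, y, z
    elif ay >= az:                            # left/right faces
        face, m, a, b = (2 if y > 0 else 3), ay, x, z
    else:                                     # top/bottom faces
        face, m, a, b = (4 if z > 0 else 5), az, x, y
    return face, a / m, b / m
```

Combining the two functions gives the ERP-to-CMP resampling direction for any pixel; the inverse composition gives the CMP-to-ERP direction.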
(3) Object detection based on a combination of equirectangular projection (ERP) and cubemap projection (CMP)
To fully exploit the complementary characteristics of the equirectangular projection (ERP) and the cubemap projection (CMP), and to fuse the two so that objects are neither locally stretched nor deprived of global information, this embodiment uses two branches: one taking the ERP as input and one taking the CMP as input. Object detection is performed with both branches; the detailed definitions are as follows:
D_E and D_C respectively represent the equirectangular-projection object detection network and the cubemap-projection object detection network, with B_E = D_E(E_i) and B_{C,j} = D_C(C_{i,j}); j denotes the index of the face, and since the cubemap projection has 6 faces in total, j = 1, ..., 6. D_C comprises 6 networks in total with shared parameters. To improve the detection precision of the object detection networks, YOLO-V5 is adopted as the backbone network of both D_E and D_C;
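The fan-out structure — one detector on the ERP frame plus six weight-shared instances on the CMP faces — can be sketched generically, with a stand-in `detector` callable in place of YOLO-V5 (which we do not reproduce here); weight sharing is modeled by reusing the same callable for all six faces:

```python
def detect_two_branch(detector, erp_frame, cmp_faces):
    """Run one detector on the ERP branch and, with shared weights,
    the same detector on each of the 6 CMP faces; tag each detection
    with its source branch and face index."""
    results = [("erp", None, det) for det in detector(erp_frame)]
    for j, face in enumerate(cmp_faces):      # j = 0..5, one per cube face
        results += [("cmp", j, det) for det in detector(face)]
    return results
```

A later fusion step can then group detections by branch tag before the center-point back-projection of step (4).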
(4) Object detection result correction
The anchor boxes of objects are obtained through step (3); however, these detection results come from regionally stretched images and therefore carry stretch-induced distortion. To alleviate the stretching deformation of objects, the center point of each detection result is back-projected. According to the detection results output by step (3) under the equirectangular and cubemap projections, the anchor-box coordinates (x1, y1, x2, y2) on the ERP yield the center-point coordinates (c_x, c_y) and, correspondingly, the anchor boxes on the CMP faces yield the center-point coordinates (c_x^j, c_y^j); based on (c_x, c_y) and (c_x^j, c_y^j), the spherical coordinates (theta, phi) of the center point are obtained.
The position of the object center point of the cubemap projection in the panoramic video is obtained by the same method;
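The inverse-projection formula is not given in the text; assuming a standard equirectangular parameterization, the back-projection of an ERP anchor-box center to spherical coordinates could be sketched as:

```python
import math

def anchor_center_to_sphere(x1, y1, x2, y2, width, height):
    """Back-project the center of an ERP anchor box (x1, y1, x2, y2)
    to spherical coordinates (theta, phi) in radians."""
    cx = (x1 + x2) / 2.0                        # anchor-box center, pixels
    cy = (y1 + y2) / 2.0
    theta = (cx / width - 0.5) * 2.0 * math.pi  # longitude, [-pi, pi]
    phi = (0.5 - cy / height) * math.pi         # latitude,  [-pi/2, pi/2]
    return theta, phi
```

The CMP branch is handled analogously, first converting the face-local pixel to a 3D direction vector for that face and then to longitude/latitude.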
(5) Adaptive range search based on object size
After the spherical coordinates of the object center points are obtained in step (4), the center points deviate and are misaligned because they come from different projection forms (CMP and ERP). This embodiment adopts a neighborhood-mean approach: within the range R1 around the current center point, the center-point coordinates belonging to the same object as the current coordinates are found and then merged into a single coordinate point. Because the required search range differs with object size, an adaptive range search based on the object size S is adopted. S is derived from the object position: objects of the same size located at different positions on the ERP still differ because of stretching, so S is projected to spherical coordinates by means of inverse projection, which overcomes this deficiency.
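The neighborhood-mean merging with an adaptive radius can be sketched as follows. The radius rule `R1 = k * S` and the greedy grouping order are our assumptions; the patent states only that the search range adapts to the object size:

```python
import math

def merge_centers(centers, sizes, k=0.5):
    """Greedily merge spherical center points that fall within an adaptive
    radius R1 = k * size; each group collapses to its neighborhood mean.
    centers: list of (theta, phi); sizes: matching per-object sizes."""
    merged, used = [], [False] * len(centers)
    for i, (ti, pi_) in enumerate(centers):
        if used[i]:
            continue
        group = [(ti, pi_)]
        used[i] = True
        r = k * sizes[i]                        # adaptive search radius R1
        for j in range(i + 1, len(centers)):
            tj, pj = centers[j]
            if not used[j] and math.hypot(tj - ti, pj - pi_) <= r:
                group.append((tj, pj))
                used[j] = True
        # Neighborhood mean: one coordinate point per merged group.
        merged.append((sum(t for t, _ in group) / len(group),
                       sum(p for _, p in group) / len(group)))
    return merged
```

In practice the distance would be measured on the sphere (great-circle distance) rather than in (theta, phi) space; the Euclidean form above keeps the sketch short.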
(6) Local area projection based on object detection center point
Although the object detection result obtained in step (5) locates the object position, the detected objects still suffer stretch deformation (ERP) or are split and cut across cube faces (CMP). A secondary localization is therefore adopted: the spherical coordinates (theta, phi) obtained in steps (3)-(5) undergo a secondary projection. The projection uses an adaptive range R2, whose size is computed in the same way as R1; to ensure segmentation of the salient foreground object, this embodiment enlarges R2 by a factor of 1.3 over its original value. The secondary projection yields the image block Patch of the object.
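A tangent-plane projection about the refined center is the usual way to extract an undistorted local patch; the gnomonic projection is one common choice (the patent does not name the projection, so this choice is our assumption). The forward mapping of a sphere point about center (theta0, phi0):

```python
import math

def gnomonic_point(theta, phi, theta0, phi0):
    """Gnomonic (tangent-plane) projection of sphere point (theta, phi)
    about the tangent center (theta0, phi0); returns plane coords (x, y)."""
    cos_c = (math.sin(phi0) * math.sin(phi)
             + math.cos(phi0) * math.cos(phi) * math.cos(theta - theta0))
    x = math.cos(phi) * math.sin(theta - theta0) / cos_c
    y = (math.cos(phi0) * math.sin(phi)
         - math.sin(phi0) * math.cos(phi) * math.cos(theta - theta0)) / cos_c
    return x, y
```

Sampling a grid of (x, y) values within the range R2 and inverting this mapping back to ERP pixels produces the image block Patch.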
(7) Object segmentation based on image blocks
The object segmentation result is obtained from the image block so that it can be pasted onto the ERP, realizing the planarized display of the panoramic video. The object segmentation network is trained on the DAVIS data set and the Youtube-Objects data set. The object segmentation network (SegmentNet) mainly comprises a VggNet coding network (encoder) and a segmentation output module; to preserve the output size, the last two MaxPool layers of VggNet are removed and the remaining layers load the original weights. The network input is 474 x 474 and the output is Seg = SegmentNet(Patch);
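Removing the last two MaxPool layers changes the encoder's downsampling from roughly 32x to 8x; at the stated 474 x 474 input this is the difference between about a 15 x 15 and a 60 x 60 feature map, which is what preserves segmentation detail. The arithmetic (assuming VGG-style stride-2 ceil-mode pooling, which is our assumption) is:

```python
import math

def feature_map_side(input_side, num_pools):
    """Spatial side length after num_pools stride-2 max-pool layers (ceil mode)."""
    side = input_side
    for _ in range(num_pools):
        side = math.ceil(side / 2)
    return side
```

Comparing `feature_map_side(474, 5)` with `feature_map_side(474, 3)` shows the resolution retained by dropping the last two pools.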
(8) ERP mapping based on image block object segmentation
The cutting result of the object and the position information of the object are obtained through step (7); the detection result is pasted onto the ERP, so that the panoramic content is displayed completely while the foreground objects are not stretched;
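Pasting a segmented object back onto the ERP amounts to copying only mask-covered pixels at the object's position; a minimal sketch, with nested lists standing in for image arrays:

```python
def paste_object(erp, patch, mask, top, left):
    """Paste a segmented object patch onto the ERP canvas at (top, left),
    copying only pixels where the segmentation mask is set.
    erp: H x W grid; patch and mask: matching h x w grids."""
    for dy, (prow, mrow) in enumerate(zip(patch, mask)):
        for dx, (pval, mval) in enumerate(zip(prow, mrow)):
            if mval:
                erp[top + dy][left + dx] = pval
    return erp
```

Masked copying is what keeps background pixels of the patch from overwriting the ERP, so only the cut-out foreground object lands on the map.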
(9) Map misplaced region repair based on a GAN (Generative Adversarial Network)
Although step (8) displays the panoramic content and the panoramic global information, local areas suffer from information dislocation and from local information loss caused by object cutting. A GAN (Generative Adversarial Network) is therefore adopted to fill the missing areas. Since foreground objects need no filling, only background areas are filled, which greatly reduces the filling complexity; in this embodiment the missing areas are generated in a masked (covering) manner. Because few GANs exist for panoramic video, the GAN is trained on the panoramic video data set YT-ALL, using 3D convolutions to perceive temporal information;
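Restricting the GAN fill to the background is a mask subtraction: fill where there is a hole or dislocation and no pasted foreground object. Sketched with boolean grids (the mask representation is our assumption):

```python
def inpaint_mask(hole_mask, foreground_mask):
    """Regions the GAN must fill: holes/dislocations that are NOT covered
    by a pasted foreground object (foreground needs no filling)."""
    return [[h and not f for h, f in zip(hrow, frow)]
            for hrow, frow in zip(hole_mask, foreground_mask)]
```

The resulting mask, stacked over consecutive frames, is what a 3D-convolutional generator would condition on to fill the background consistently over time.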
(10) Object detection refinement based on panoramic content planarization
The flattened display result of the panoramic content can be generated through the step (9), and the flattened display result can be used for training a panoramic video object detection network, so that the performance of the object detection network is improved.
Network structures, modules, and computing processes not described in detail herein are all common in the art.
It is noted that the disclosed embodiments are intended to aid in further understanding of the invention, but those skilled in the art will appreciate that: various substitutions and modifications are possible without departing from the spirit and scope of the invention and appended claims. Therefore, the invention should not be limited to the embodiments disclosed, but the scope of the invention is defined by the appended claims.
Claims (6)
1. A panoramic monitoring planarization display method is characterized by comprising the following specific processes:
(1) Constructing a panoramic monitoring planarization display data set: collecting a panoramic camera monitoring video frequency segment of an urban road to construct a panoramic monitoring planarization display data set;
(2) City panorama monitoring projection: based on the panoramic monitoring planarization display data set constructed in the step (1), projecting a video frame in a mode of combining equirectangular projection and hexahedral projection;
(3) Object detection based on combination of equirectangular projection and hexahedral projection: constructing a branch taking the equirectangular projection as input and a branch taking the hexahedral projection as input, and respectively carrying out object detection by using the two branches;
(4) And (3) correcting an object detection result: correcting the central point of the detection result obtained in the step (3) in an inverse projection mode to obtain the spherical coordinates of the central point of the object, and relieving the problem of stretching deformation of the object;
(5) Adaptive range search based on object size: aiming at the two projection modes of equirectangular projection and hexahedral projection, searching an adaptive range based on the size of the object; within the range of the current central point, adopting a neighborhood-mean mode to search the coordinates of central points which belong to the same object as the current coordinates, and combining them into the same coordinate point to obtain the spherical coordinates;
(6) Local area projection based on object detection center point: carrying out secondary projection on the spherical coordinates obtained in the step (5) to obtain image blocks of the object;
(7) Object segmentation based on image blocks: inputting the image block obtained in the step (6) into an object segmentation network for object segmentation to obtain a cutting result of an object and position information of the object;
(8) ERP mapping based on image block object segmentation: pasting the cutting result of the object obtained in the step (7) and the position information of the object to an ERP to obtain an ERP map;
(9) Repairing map misplaced areas based on the GAN network: filling the missing area in the ERP map obtained in the step (8) by adopting a GAN network to obtain a planarization display result of the generated panoramic content;
(10) Object detection refinement based on panoramic content planarization: and the planarization display result is used for training the panoramic video object detection network, so that the performance of the object detection network is improved.
2. The panoramic monitoring planarization display method as set forth in claim 1, wherein the projection result of the step (2) is: E_i = P_ERP(V_i) and C_{i,j} = P_CMP(V_i, j), where E_i and C_{i,j} represent the ERP and CMP projection results, P_ERP and P_CMP represent the two projection operators, V_i represents the i-th frame picture from the video V, and j represents the index of the hexahedral projection face.
3. The panoramic monitoring planarization display method as claimed in claim 2, wherein the detailed definitions of the two branches in step (3) are:
D_E and D_C respectively represent the equirectangular-projection object detection network and the hexahedral-projection object detection network, with B_E = D_E(E_i) and B_{C,j} = D_C(C_{i,j}); j represents the index of the face, the hexahedral projection has 6 faces in total, j = 1, ..., 6; the hexahedral-projection object detection network D_C comprises 6 networks in total with shared parameters; and both D_E and D_C adopt YOLO-V5 as the backbone network.
4. The panoramic monitoring planarization display method as recited in claim 3, wherein the specific process of step (4) is:
from the detection coordinates (x1, y1, x2, y2), the center-point coordinates (c_x, c_y) of the anchor box are obtained and, correspondingly, the center-point coordinates (c_x^j, c_y^j) of the corresponding anchor boxes are obtained; based on (c_x, c_y) and (c_x^j, c_y^j), the spherical coordinates (theta, phi) are obtained.
5. The panoramic monitoring planarization display method as claimed in claim 4, wherein the projection manner of step (6) is the adaptive-range manner.
6. The panoramic monitoring planarization display method of claim 5, wherein in step (7) the object segmentation network is trained by using a DAVIS data set and a Youtube-Objects data set, and the object segmentation network comprises a VggNet coding network and a network segmentation output module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211380122.9A CN115423812B (en) | 2022-11-05 | 2022-11-05 | Panoramic monitoring planarization display method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211380122.9A CN115423812B (en) | 2022-11-05 | 2022-11-05 | Panoramic monitoring planarization display method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115423812A true CN115423812A (en) | 2022-12-02 |
CN115423812B CN115423812B (en) | 2023-04-18 |
Family
ID=84207475
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211380122.9A Active CN115423812B (en) | 2022-11-05 | 2022-11-05 | Panoramic monitoring planarization display method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115423812B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115767040A (en) * | 2023-01-06 | 2023-03-07 | 松立控股集团股份有限公司 | 360-degree panoramic monitoring automatic cruise method based on interactive continuous learning |
CN117319610A (en) * | 2023-11-28 | 2023-12-29 | 松立控股集团股份有限公司 | Smart city road monitoring method based on high-order panoramic camera region enhancement |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2654127C1 (en) * | 2016-12-20 | 2018-05-16 | Federal State Autonomous Educational Institution of Higher Education "Belgorod State National Research University" (NRU "BelGU") | Method for generating a digital panoramic image
CN109429561A (en) * | 2017-06-23 | 2019-03-05 | 联发科技股份有限公司 | The method and device that motion vector in immersion coding and decoding video derives |
CN112529006A (en) * | 2020-12-18 | 2021-03-19 | 平安科技(深圳)有限公司 | Panoramic picture detection method and device, terminal and storage medium |
CN113038123A (en) * | 2021-03-22 | 2021-06-25 | 上海大学 | No-reference panoramic video quality evaluation method, system, terminal and medium |
CN113206992A (en) * | 2021-04-20 | 2021-08-03 | 聚好看科技股份有限公司 | Method for converting projection format of panoramic video and display equipment |
CN113947671A (en) * | 2021-09-23 | 2022-01-18 | 广东科学技术职业学院 | Panoramic 360-degree image segmentation and synthesis method, system and medium |
CN115049935A (en) * | 2022-08-12 | 2022-09-13 | 松立控股集团股份有限公司 | Urban illegal building division detection method |
- 2022-11-05 CN CN202211380122.9A patent/CN115423812B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2654127C1 (en) * | 2016-12-20 | 2018-05-16 | Federal State Autonomous Educational Institution of Higher Education "Belgorod State National Research University" (NRU "BelGU") | Method for generating a digital panoramic image
CN109429561A (en) * | 2017-06-23 | 2019-03-05 | 联发科技股份有限公司 | The method and device that motion vector in immersion coding and decoding video derives |
CN112529006A (en) * | 2020-12-18 | 2021-03-19 | 平安科技(深圳)有限公司 | Panoramic picture detection method and device, terminal and storage medium |
CN113038123A (en) * | 2021-03-22 | 2021-06-25 | 上海大学 | No-reference panoramic video quality evaluation method, system, terminal and medium |
CN113206992A (en) * | 2021-04-20 | 2021-08-03 | 聚好看科技股份有限公司 | Method for converting projection format of panoramic video and display equipment |
CN113947671A (en) * | 2021-09-23 | 2022-01-18 | 广东科学技术职业学院 | Panoramic 360-degree image segmentation and synthesis method, system and medium |
CN115049935A (en) * | 2022-08-12 | 2022-09-13 | 松立控股集团股份有限公司 | Urban illegal building division detection method |
Non-Patent Citations (2)
Title |
---|
XIAOYUN YAN ET AL.: "SALIENT REGION DETECTION VIA COLOR SPATIAL DISTRIBUTION DETERMINED GLOBAL CONTRASTS", IEEE *
QIU Sensen et al.: "A Saliency Detection Model for Stereoscopic Panoramic Images", Laser & Optoelectronics Progress *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115767040A (en) * | 2023-01-06 | 2023-03-07 | 松立控股集团股份有限公司 | 360-degree panoramic monitoring automatic cruise method based on interactive continuous learning |
CN117319610A (en) * | 2023-11-28 | 2023-12-29 | 松立控股集团股份有限公司 | Smart city road monitoring method based on high-order panoramic camera region enhancement |
CN117319610B (en) * | 2023-11-28 | 2024-01-30 | 松立控股集团股份有限公司 | Smart city road monitoring method based on high-order panoramic camera region enhancement |
Also Published As
Publication number | Publication date |
---|---|
CN115423812B (en) | 2023-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115423812B (en) | Panoramic monitoring planarization display method | |
CN111047510B (en) | Large-field-angle image real-time splicing method based on calibration | |
CN102447925B (en) | Method and device for synthesizing virtual viewpoint image | |
US9299152B2 (en) | Systems and methods for image depth map generation | |
US8644596B1 (en) | Conversion of monoscopic visual content using image-depth database | |
CN101859433B (en) | Image mosaic device and method | |
WO2018176926A1 (en) | Real-time correction method and system for self-learning multi-channel image fusion | |
CN110660125B (en) | Three-dimensional modeling device for power distribution network system | |
CN103763479A (en) | Splicing device for real-time high speed high definition panoramic video and method thereof | |
US10154242B1 (en) | Conversion of 2D image to 3D video | |
CN110992484B (en) | Display method of traffic dynamic video in real scene three-dimensional platform | |
CN103607568A (en) | Stereo street scene video projection method and system | |
CN112651881B (en) | Image synthesizing method, apparatus, device, storage medium, and program product | |
CN102637293A (en) | Moving image processing device and moving image processing method | |
CN110706151B (en) | Video-oriented non-uniform style migration method | |
CN101510304B (en) | Method, device and pick-up head for dividing and obtaining foreground image | |
CN107451952A (en) | A kind of splicing and amalgamation method of panoramic video, equipment and system | |
CN111047709A (en) | Binocular vision naked eye 3D image generation method | |
CN102609950A (en) | Two-dimensional video depth map generation process | |
CN103533313A (en) | Geographical position based panoramic electronic map video synthesis display method and system | |
CN103902730A (en) | Thumbnail generation method and system | |
CN112712487A (en) | Scene video fusion method and system, electronic equipment and storage medium | |
CN115393192A (en) | Multi-point multi-view video fusion method and system based on general plane diagram | |
CN102750694B (en) | Local optimum belief propagation algorithm-based binocular video depth map solution method | |
CN115376028A (en) | Target detection method based on dense feature point splicing and improved YOLOV5 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |