CN111652973A - Monitoring method and system based on mixed reality and related equipment - Google Patents

Monitoring method and system based on mixed reality and related equipment

Info

Publication number
CN111652973A
CN111652973A (application number CN202010537055.1A)
Authority
CN
China
Prior art keywords
dimensional
image
target space
data
mixed reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010537055.1A
Other languages
Chinese (zh)
Inventor
刘聪 (Liu Cong)
田第鸿 (Tian Dihong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Artificial Intelligence and Robotics
Original Assignee
Shenzhen Institute of Artificial Intelligence and Robotics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Artificial Intelligence and Robotics filed Critical Shenzhen Institute of Artificial Intelligence and Robotics
Priority to CN202010537055.1A
Publication of CN111652973A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/04 - Indexing scheme for image data processing or generation, in general involving 3D image data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10048 - Infrared image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10052 - Images from lightfield camera
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the invention provide a monitoring method, a monitoring system, and related equipment based on mixed reality, which are used to realize three-dimensional dynamic monitoring of a target space, improve monitoring efficiency, quickly locate people or objects with abnormal temperature, and safeguard the safety of public spaces. The method provided by the embodiments of the invention comprises the following steps: receiving at least two types of image information corresponding to a target space; computing the at least two types of image information with a preset data fusion algorithm to generate three-dimensional fusion image data; and sending the three-dimensional fusion image data to an external mixed reality device for display.

Description

Monitoring method and system based on mixed reality and related equipment
Technical Field
The invention relates to the technical field of monitoring based on mixed reality, in particular to a monitoring method and system based on mixed reality and related equipment.
Background
Target monitoring over a large spatial range plays a crucial role in the field of public safety. At present, existing monitoring systems mainly detect targets in the monitored space through fixedly mounted monitoring devices (such as infrared thermometric cameras and optical cameras) and transmit the images and detection results to an observation device for two-dimensional display.
In such schemes, the monitoring picture is displayed as a two-dimensional image whose observation angle is fixed by the mounting angle of the equipment; an observer cannot view the monitored object in the picture stereoscopically, and the monitoring effect is poor.
Disclosure of Invention
The embodiments of the invention provide a monitoring method, a monitoring system, and related equipment based on mixed reality, which are used to realize three-dimensional dynamic monitoring of a target space, improve monitoring efficiency, quickly locate people or objects with abnormal temperature, and safeguard the safety of public spaces.
A first aspect of an embodiment of the present invention provides a monitoring method based on mixed reality, which may include:
receiving at least two types of image information corresponding to a target space;
calculating the at least two types of image information by adopting a preset data fusion algorithm to generate three-dimensional fusion image data;
and sending the three-dimensional fusion image data to external mixed reality equipment for display.
Optionally, as a possible implementation manner, in the monitoring method based on mixed reality in the embodiment of the present invention, the at least two types of image information include a depth image, an infrared light image, and a visible light image, and the calculating the at least two types of image information by using a preset data fusion algorithm to generate three-dimensional fusion image data may include:
constructing a three-dimensional point cloud model of the target space according to the depth image;
and superposing the infrared light image and the visible light image to the three-dimensional point cloud model of the target space, and synthesizing three-dimensional fusion image data with infrared temperature information.
Optionally, as a possible implementation manner, in the monitoring method based on mixed reality in the embodiment of the present invention, constructing the three-dimensional point cloud model of the target space according to the depth image may include:
importing a three-dimensional object model of a fixed object in the target space;
constructing a three-dimensional object model of the movable object in the target space according to the depth image;
and synthesizing the three-dimensional point cloud model of the target space according to the three-dimensional object model of the movable object and the three-dimensional object model of the fixed object.
Optionally, as a possible implementation manner, in the monitoring method based on mixed reality in the embodiment of the present invention, the superimposing the infrared light image and the visible light image on the three-dimensional point cloud model of the target space may include:
inputting the depth image and the infrared light image into a preset deep learning model, and calculating to generate a three-dimensional object model in the target space;
and superposing the three-dimensional object model in the target space and the three-dimensional point cloud model in the target space.
Optionally, as a possible implementation manner, in the monitoring method based on mixed reality in the embodiment of the present invention, before sending the three-dimensional fused image data to an external mixed reality device, the method may further include:
receiving a data request message, wherein the data request message comprises the position information and the view angle information of the target space;
and performing visual angle conversion on the three-dimensional fusion image data according to the visual angle information, and sending the three-dimensional fusion image data after the visual angle conversion to external mixed reality equipment.
Optionally, as a possible implementation manner, in the monitoring method based on mixed reality in the embodiment of the present invention, before superimposing the infrared light image and the visible light image on the three-dimensional point cloud model of the target space, the method may further include:
and compressing the three-dimensional point cloud model by adopting a preset image compression algorithm so as to reduce the data volume of the three-dimensional fusion image data.
Optionally, as a possible implementation manner, the monitoring method based on mixed reality in the embodiment of the present invention may further include:
rendering the three-dimensional fusion image data in a cloud rendering mode, and sending the rendered three-dimensional fusion image data to external mixed reality equipment.
A second aspect of an embodiment of the present invention provides a monitoring system based on mixed reality, which may include:
the system comprises a data collection module, a data fusion module and mixed reality equipment;
the data collection module is used for receiving at least two types of image information corresponding to the target space and sent by at least two types of image sensors;
the data fusion module is used for calculating the at least two types of image information by adopting a preset data fusion algorithm to generate three-dimensional fusion image data;
and the mixed reality equipment is used for receiving and displaying the three-dimensional fusion image data.
Optionally, as a possible implementation, the at least two types of image information include a depth image, an infrared light image, and a visible light image, and the data fusion module may include:
the construction unit is used for constructing a three-dimensional point cloud model of the target space according to the depth image;
and the superposition unit is used for superposing the infrared light image and the visible light image to the three-dimensional point cloud model of the target space and synthesizing three-dimensional fusion image data with infrared temperature information.
Optionally, as a possible implementation manner, the building unit in the embodiment of the present invention may include:
the importing subunit is used for importing a three-dimensional object model of the fixed object in the target space;
a construction subunit, configured to construct, according to the depth image, a three-dimensional object model of the movable object in the target space;
and the combination subunit is used for synthesizing the three-dimensional point cloud model of the target space according to the three-dimensional object model of the movable object and the three-dimensional object model of the fixed object.
Optionally, as a possible implementation manner, the superimposing unit in the embodiment of the present invention may include:
the generating subunit is used for inputting the depth image and the infrared light image into a preset deep learning model, and calculating and generating a three-dimensional object model in the target space;
and the superposition subunit is used for superposing the three-dimensional object model in the target space and the three-dimensional point cloud model in the target space.
Optionally, as a possible implementation manner, the data fusion module in the embodiment of the present invention may further include:
a receiving unit, configured to receive a data request message, where the data request message includes the position information and viewing-angle information of the target space;
and performing visual angle conversion on the three-dimensional fusion image data according to the visual angle information, and sending the three-dimensional fusion image data after the visual angle conversion to external mixed reality equipment.
Optionally, as a possible implementation manner, the data fusion module in the embodiment of the present invention may further include:
and the compression unit is used for compressing the three-dimensional point cloud model by adopting a preset image compression algorithm so as to reduce the data volume of the three-dimensional fusion image data.
Optionally, as a possible implementation manner, the data fusion module in the embodiment of the present invention may further include:
rendering the three-dimensional fusion image data in a cloud rendering mode, and sending the rendered three-dimensional fusion image data to external mixed reality equipment.
A third aspect of embodiments of the present invention provides a computer apparatus, which includes a processor, and the processor is configured to implement the steps in any one of the possible implementation manners of the first aspect and the first aspect when executing a computer program stored in a memory.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in any one of the possible implementations of the first aspect and the first aspect.
According to the technical scheme, the embodiment of the invention has the following advantages:
In the embodiment of the invention, the monitoring system can receive at least two types of image information corresponding to the target space, compute the at least two types of image information with a preset data fusion algorithm to generate three-dimensional fusion image data, and finally send the three-dimensional fusion image data to the external mixed reality device for display. Compared with existing schemes, the embodiment of the invention realizes three-dimensional dynamic monitoring of the target space based on a mixed reality device that displays three-dimensional images, provides a wide observation angle of the monitoring picture, and improves monitoring efficiency. In addition, infrared temperature information can be contained in the three-dimensional fusion image, so that people or objects with abnormal temperature can be quickly located, safeguarding the safety of public spaces.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a monitoring method based on mixed reality according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another embodiment of a monitoring method based on mixed reality according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an application scenario of a target monitoring system in an embodiment of the present invention;
FIG. 4 is a diagram illustrating an architecture of a target monitoring system in an embodiment of the invention;
FIG. 5 is a flowchart illustrating an embodiment of a specific application of a monitoring method based on mixed reality according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an embodiment of a monitoring system based on mixed reality according to an embodiment of the present invention;
FIG. 7 is a diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The embodiments of the invention provide a monitoring method, a monitoring system, and related equipment based on mixed reality, which are used to realize three-dimensional dynamic monitoring of a target space, improve monitoring efficiency, quickly locate people or objects with abnormal temperature, and safeguard the safety of public spaces.
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the embodiment of the invention, various types of image information are acquired by various types of monitoring equipment erected in a target space range, the image information of each monitoring equipment is fused to generate three-dimensional fused image data, and finally, the three-dimensional fused image data is clearly displayed to a user by mixed reality equipment to provide multi-view three-dimensional imaging of the current scene for the user.
For ease of understanding, the specific flow in the embodiment of the present invention is described below. Referring to fig. 1, an embodiment of a monitoring method based on mixed reality in the embodiment of the present invention may include:
s101, receiving at least two types of image information corresponding to a target space;
To realize monitoring within the target space range, at least two types of monitoring devices can be deployed in advance in the embodiment of the invention to acquire at least two types of image information.
The specific monitoring device may be any device that generates image information, for example one or more of a laser imaging device, a depth image sensor, an infrared image sensor, and a visible light image sensor; the devices may be configured according to requirements, which is not limited herein.
S102, calculating at least two types of image information by adopting a preset data fusion algorithm to generate three-dimensional fusion image data;
the monitoring system based on mixed reality can determine a target space corresponding to the uploaded image information according to the GPS position information of the monitoring equipment. Furthermore, a preset data fusion algorithm can be adopted to calculate the image information corresponding to the acquired target space to generate three-dimensional fusion image data.
Optionally, as a possible implementation manner, when the acquired at least two types of image information include a depth image, an infrared light image, and a visible light image, the step of calculating the at least two types of image information by using a preset data fusion algorithm to generate three-dimensional fused image data may include: constructing a three-dimensional point cloud model of a target space according to the depth image; and superposing the infrared light image and the visible light image to the three-dimensional point cloud model of the target space, and synthesizing three-dimensional fusion image data with infrared temperature information.
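For illustration, the point cloud construction step can be sketched in a few lines of Python, assuming a pinhole depth camera with known intrinsics (the function name and the parameters fx, fy, cx, cy are illustrative assumptions, not values from this disclosure):

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        # Back-project each pixel of a depth image (in meters) into a 3D point
        # in the camera frame using the pinhole model.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]  # keep only pixels with a valid depth reading

Registering the infrared and visible light images onto the resulting points then amounts to projecting each point into those cameras and sampling the corresponding pixel values.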
As a possible implementation, the process of superimposing the infrared light image and the visible light image onto the three-dimensional point cloud model of the target space may include: inputting the depth image and the infrared light image into a preset deep learning model, and calculating to generate a three-dimensional object model in a target space; and superposing the three-dimensional object model in the target space and the three-dimensional point cloud model in the target space.
For example, the preset deep learning model may be implemented based on a binocular stereo matching algorithm such as GC-Net (Geometry and Context Network), whose basic principle is to use a three-dimensional convolutional neural network (CNN) to aggregate matching costs and obtain a three-dimensional object model. The preset deep learning model may also extract high-level and global features of the input images through a 2D convolutional neural network and regularize the matching cost through a 3D CNN. The specific construction of the preset deep learning model belongs to the prior art and is not described here.
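For illustration, a minimal PyTorch sketch of the GC-Net-style pipeline described above, assuming left and right feature maps from a shared 2D CNN are already available (the function names are illustrative; only the cost-volume and soft-argmin structure follows the published GC-Net design):

    import torch

    def build_cost_volume(feat_l, feat_r, max_disp):
        # feat_l, feat_r: [B, C, H, W] features from a shared 2D CNN.
        # Returns a [B, 2C, max_disp, H, W] volume that a stack of 3D
        # convolutions would then regularize (the cost aggregation step).
        B, C, H, W = feat_l.shape
        cost = feat_l.new_zeros(B, 2 * C, max_disp, H, W)
        for d in range(max_disp):
            if d == 0:
                cost[:, :C, d], cost[:, C:, d] = feat_l, feat_r
            else:
                cost[:, :C, d, :, d:] = feat_l[:, :, :, d:]
                cost[:, C:, d, :, d:] = feat_r[:, :, :, :-d]
        return cost

    def soft_argmin(costs):
        # costs: [B, D, H, W] aggregated matching costs; differentiable
        # disparity regression via softmax over negated costs, as in GC-Net.
        prob = torch.softmax(-costs, dim=1)
        disp = torch.arange(costs.shape[1], device=costs.device, dtype=prob.dtype)
        return (prob * disp.view(1, -1, 1, 1)).sum(dim=1)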
It is understood that the above possible embodiments merely take the fusion of a visible light image, a depth image, and an infrared light image as an example of generating the three-dimensional fusion image data. In practical applications, other types of monitoring devices may also be used to generate corresponding types of image data; for example, a laser imaging device may be used to obtain a laser image, and the three-dimensional fusion image data may then be generated by fusing the laser image, the infrared light image, and the visible light image. The types of image data generated by the monitoring equipment can be set according to requirements, provided that three-dimensional fusion image data with infrared temperature information can be generated.
And S103, sending the three-dimensional fusion image data to external mixed reality equipment for display.
In practical application, three-dimensional fusion image data in a large-scale space is stored in the monitoring system based on mixed reality, and a user can request the monitoring system for the three-dimensional fusion image data corresponding to a target space to be monitored according to requirements.
When a monitoring request corresponding to a target space is received, the monitoring system based on mixed reality can send three-dimensional fused image data to external mixed reality equipment for display, for example, send the fused image data to mixed reality helmet equipment worn by monitoring personnel for three-dimensional dynamic display.
Mixed reality (MR), which encompasses augmented reality and augmented virtuality, refers to a new visualization environment created by merging the real and virtual worlds. Correspondingly, a mixed reality device is a device that can dynamically display the three-dimensional fusion image within such a visualization environment.
In the embodiment of the invention, the monitoring system can receive at least two types of image information corresponding to the target space, compute the at least two types of image information with a preset data fusion algorithm to generate three-dimensional fusion image data, and finally send the three-dimensional fusion image data to the external mixed reality device for display. Compared with existing schemes, the embodiment of the invention realizes three-dimensional dynamic monitoring of the target space based on a mixed reality device that displays three-dimensional images, provides a wide observation angle of the monitoring picture, and improves monitoring efficiency. In addition, infrared temperature information can be contained in the three-dimensional fusion image, so that people or objects with abnormal temperature can be quickly located, safeguarding the safety of public spaces.
On the basis of the embodiment shown in fig. 1, the embodiment of the present invention may also pre-establish a three-dimensional object model of a fixed object in a target space as an offline data model, thereby reducing the generation time of the three-dimensional point cloud model. Specifically, referring to fig. 2, another embodiment of a monitoring method based on mixed reality according to an embodiment of the present invention may include:
s201, receiving at least two types of image information corresponding to a target space;
referring to step S101 in the embodiment shown in fig. 1, the received at least two types of image information may include a depth image, an infrared light image, and a visible light image.
S202, receiving a data request message;
in practical application, three-dimensional fusion image data in a large-scale space is stored in the monitoring system based on mixed reality, and a user can send a data request message to the monitoring system according to requirements for requesting the three-dimensional fusion image data corresponding to a target space to be monitored. Specifically, the data request message may include position information of the target space and may also include view angle information.
S203, importing a three-dimensional object model of a fixed object in a target space;
in practical application, when the target space contains a fixed object, a three-dimensional object model of the fixed object in the target space may be established in advance as an offline data model. When a three-dimensional point cloud model of a target space needs to be constructed, a three-dimensional object model of a fixed object in the target space can be directly imported, so that the generation time of the three-dimensional point cloud model is saved.
S204, constructing a three-dimensional object model of the movable object in the target space according to the depth image;
After the three-dimensional object model of the fixed object in the target space is imported, markers in the model can be used to assist in locating the movable objects, and the three-dimensional object model of the movable objects in the target space is constructed from the acquired depth image.
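For illustration, one simple way to isolate the movable points, assuming the imported offline model can be rendered into a reference depth map from the sensor's viewpoint (the threshold and names are illustrative assumptions):

    import numpy as np

    def movable_mask(live_depth, static_depth, threshold=0.05):
        # Pixels whose live depth deviates from the offline static-scene depth
        # by more than `threshold` meters are treated as movable-object pixels;
        # the mask can then be back-projected into a point cloud as shown above.
        valid = (live_depth > 0) & (static_depth > 0)
        return valid & (np.abs(live_depth - static_depth) > threshold)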
S205, synthesizing a three-dimensional point cloud model of a target space according to the three-dimensional object model of the movable object and the three-dimensional object model of the fixed object;
After the three-dimensional object model of the fixed object and the three-dimensional object model of the movable object in the target space are obtained, the two models can be synthesized into a complete three-dimensional point cloud model of the real-time scene of the target space.
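For illustration, a sketch of this synthesis step using Open3D (the file names are hypothetical placeholders):

    import open3d as o3d

    fixed = o3d.io.read_point_cloud("fixed_scene.ply")        # imported offline model
    movable = o3d.io.read_point_cloud("movable_objects.ply")  # built from the depth image
    scene = fixed + movable  # Open3D concatenates two point clouds with '+'
    o3d.io.write_point_cloud("scene_realtime.ply", scene)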
S206, compressing the three-dimensional point cloud model by adopting a preset image compression algorithm;
In actual applications, point cloud model data occupies a huge amount of storage space, while the storage space of the mixed reality device is limited, so the mixed reality device may be unable to complete the loading and rendering of a fine model.
Optionally, as a possible implementation manner, after the three-dimensional point cloud model is obtained, a preset image compression algorithm may be adopted to compress it. For example, the three-dimensional point cloud model may be compressed based on standard algorithms such as MPEG-2, MPEG-4, H.264, and H.265; the specific compression algorithm is not limited here.
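The standards named above are video codec standards; how they would be applied to a point cloud stream is not detailed in this disclosure. As a complementary illustration, a minimal sketch of one common way to reduce point cloud data volume before transmission, voxel downsampling with Open3D (the voxel size and file names are assumptions):

    import open3d as o3d

    pcd = o3d.io.read_point_cloud("scene_realtime.ply")  # hypothetical fused frame
    compact = pcd.voxel_down_sample(voxel_size=0.02)     # keep one point per 2 cm voxel
    o3d.io.write_point_cloud("scene_compact.ply", compact, compressed=True)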
S207, overlapping the infrared light image and the visible light image to a three-dimensional point cloud model of a target space, and synthesizing three-dimensional fusion image data with infrared temperature information;
the content described in step S207 in this embodiment is similar to that in step S102 in the embodiment shown in fig. 1, and the process of superimposing the infrared light image and the visible light image on the three-dimensional point cloud model of the target space may refer to the content in step S102, and is not described herein again.
And S208, carrying out visual angle conversion on the three-dimensional fusion image data according to the visual angle information, and sending the three-dimensional fusion image data subjected to the visual angle conversion to external mixed reality equipment.
When the received data request message contains the viewing-angle information selected by the user, viewing-angle conversion is performed on the three-dimensional fusion image data according to that information, and the converted data is sent to the external mixed reality device, so that the user can flexibly adjust the monitoring viewing angle. The viewing-angle information may specify any viewing angle set by the user; for example, the user may choose to observe the target person from 45 degrees on the left or from 60 degrees on the right, and the specific angle can be set reasonably according to requirements.
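For illustration, a minimal sketch of such a viewing-angle conversion: a virtual camera is placed at the requested azimuth around the target, and the fused points are re-expressed in that camera's frame (the distance, height, and function name are illustrative assumptions):

    import numpy as np

    def view_from_azimuth(points, target, azimuth_deg, distance=3.0, height=1.6):
        # Place a virtual camera at `azimuth_deg` around `target` (e.g. -45 for
        # "45 degrees on the left", +60 for "60 degrees on the right") and return
        # the points expressed in that camera's frame via a look-at construction.
        a = np.radians(azimuth_deg)
        eye = target + np.array([distance * np.sin(a), height, -distance * np.cos(a)])
        f = (target - eye) / np.linalg.norm(target - eye)   # camera forward axis
        r = np.cross(f, np.array([0.0, 1.0, 0.0]))
        r = r / np.linalg.norm(r)                           # camera right axis
        u = np.cross(r, f)                                  # camera up axis
        R = np.stack([r, u, -f])                            # world-to-camera rotation
        return (points - eye) @ R.T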
Optionally, as a possible implementation manner, the monitoring system based on mixed reality may render the three-dimensional fusion image data in a cloud rendering manner, and send the rendered three-dimensional fusion image data to an external mixed reality device, so as to reduce consumption of computing resources and storage resources of the mixed reality device.
For ease of understanding, the monitoring method based on mixed reality in the embodiment of the present invention is described below through a specific application embodiment, with reference to fig. 3, fig. 4, and fig. 5. The method can be applied to a target monitoring system. Fig. 3 is a schematic view of an application scenario of the target monitoring system: the data fusion device collects the image information uploaded by various types of monitoring devices (such as a visible light sensor, a depth image sensor, an infrared light sensor, and the like), performs fusion according to requests from the mixed reality device to generate a multi-view three-dimensional fusion image, and returns the fused image to the mixed reality device for dynamic display. Fig. 4 is a schematic diagram of the target monitoring system, which includes a data collection module, a data fusion module, and a computational imaging module (a component of the mixed reality device).
Taking an infrared temperature measurement scene in a target space range as an example, a monitoring person wearing the mixed reality device can move freely. The monitoring method based on mixed reality comprises the following flows:
First, the computational imaging module in the mixed reality device sends an information acquisition request to the data fusion module; this request mainly conveys to the data fusion device the monitoring range of the mixed reality terminal and its position information (used to indicate the image information within the target space range).
Second, after receiving the request, the data fusion device reads the monitoring data of the multiple monitoring devices mounted in the building, fuses the images within the monitoring range of the mixed reality device using this data, and completes the mapping of the infrared temperature measurement information and the visible light images onto the target position image; it finally sends the fused image data to the mixed reality device, where computational imaging is performed on the mixed reality terminal. At this point, the monitoring personnel can also call up a model diagram of the whole scene at any time and remotely observe every corner of the scene.
The process of fusing images within the monitoring range of the mixed reality device using the monitoring data is shown in fig. 5. The specific flow is as follows. After the monitoring data are acquired over the large spatial range, the data acquisition module sends the data to the data fusion module. Upon receiving the data, the data fusion module first establishes a three-dimensional point cloud model of the large scene according to the position information of the data acquisition module and the depth images it sent. Second, the data fusion module attaches the temperature information acquired from the infrared images to the point cloud, so that parts of the model at different temperatures are shown in different colors. Third, according to the positioning information and field-of-view request sent by the computational imaging module, the data fusion module superimposes the point cloud model of the corresponding field of view with the visible light image, and synthesizes the image requested by the computational imaging module, fusing the visible light, infrared light, and depth images. Fourth, the data fusion module sends the synthesized image data to the computational imaging module, which finally completes the imaging.
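For illustration, a sketch of the temperature-to-color step, assuming each point has already been assigned a temperature sampled from the registered infrared image (the temperature range and colormap are assumptions):

    import numpy as np
    from matplotlib import cm

    def colorize_by_temperature(temps, t_min=20.0, t_max=40.0):
        # Normalize per-point temperatures (deg C) into [0, 1] and map them
        # through a perceptual colormap so abnormally hot regions stand out.
        norm = np.clip((temps - t_min) / (t_max - t_min), 0.0, 1.0)
        return cm.inferno(norm)[:, :3]  # per-point RGB, alpha channel dropped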
Referring to fig. 6, an embodiment of the present invention further provides a monitoring system based on mixed reality, which may include: a data collection module 601, a data fusion module 602, and a mixed reality device 603;
the data collection module 601 is configured to receive at least two types of image information corresponding to a target space sent by at least two types of image sensors;
the data fusion module 602 is configured to calculate at least two types of image information by using a preset data fusion algorithm to generate three-dimensional fusion image data;
and a mixed reality device 603 for receiving and displaying three-dimensional fused image data.
In the embodiment of the invention, the monitoring system can receive at least two types of image information corresponding to the target space, compute the at least two types of image information with a preset data fusion algorithm to generate three-dimensional fusion image data, and finally send the three-dimensional fusion image data to the external mixed reality device for display. Compared with existing schemes, the embodiment of the invention realizes three-dimensional dynamic monitoring of the target space based on a mixed reality device that displays three-dimensional images, provides a wide observation angle of the monitoring picture, and improves monitoring efficiency. In addition, infrared temperature information can be contained in the three-dimensional fusion image, so that people or objects with abnormal temperature can be quickly located, safeguarding the safety of public spaces.
Optionally, as a possible implementation, the at least two types of image information include a depth image, an infrared light image, and a visible light image, and the data fusion module may include:
the construction unit is used for constructing a three-dimensional point cloud model of the target space according to the depth image;
and the superposition unit is used for superposing the infrared light image and the visible light image to the three-dimensional point cloud model of the target space and synthesizing three-dimensional fusion image data with infrared temperature information.
Optionally, as a possible implementation manner, the building unit in the embodiment of the present invention may include:
the importing subunit is used for importing a three-dimensional object model of the fixed object in the target space;
a construction subunit, configured to construct, according to the depth image, a three-dimensional object model of the movable object in the target space;
and the combination subunit is used for synthesizing a three-dimensional point cloud model of the target space according to the three-dimensional object model of the movable object and the three-dimensional object model of the fixed object.
Optionally, as a possible implementation manner, the superimposing unit in the embodiment of the present invention may include:
the generating subunit is used for inputting the depth image and the infrared light image into a preset deep learning model and calculating and generating a three-dimensional object model in the target space;
and the superposition subunit is used for superposing the three-dimensional object model in the target space and the three-dimensional point cloud model in the target space.
Optionally, as a possible implementation manner, the data fusion module in the embodiment of the present invention may further include:
the receiving unit is used for receiving a data request message, and the data request message comprises position information and visual angle information of a target space;
and carrying out visual angle conversion on the three-dimensional fusion image data according to the visual angle information, and sending the three-dimensional fusion image data after the visual angle conversion to external mixed reality equipment.
Optionally, as a possible implementation manner, the data fusion module in the embodiment of the present invention may further include:
and the compression unit is used for compressing the three-dimensional point cloud model by adopting a preset image compression algorithm so as to reduce the data volume of the three-dimensional fusion image data.
Optionally, as a possible implementation manner, the data fusion module in the embodiment of the present invention may further include:
rendering the three-dimensional fusion image data in a cloud rendering mode, and sending the rendered three-dimensional fusion image data to external mixed reality equipment.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The mixed reality-based monitoring system in the embodiment of the present invention is described above from the perspective of the modular functional entity, please refer to fig. 7, and the computer apparatus in the embodiment of the present invention is described below from the perspective of hardware processing:
the computer device 1 may include a memory 11, a processor 12 and an input output bus 13. The processor 11, when executing the computer program, implements the steps in the embodiment of the monitoring method based on mixed reality shown in fig. 1, such as the steps 101 to 103 shown in fig. 1. Alternatively, the processor, when executing the computer program, implements the functions of each module or unit in the above-described device embodiments.
In some embodiments of the present invention, the processor is specifically configured to implement the following steps:
receiving at least two types of image information corresponding to a target space;
calculating at least two types of image information by adopting a preset data fusion algorithm to generate three-dimensional fusion image data;
and sending the three-dimensional fused image data to external mixed reality equipment for display.
Optionally, as a possible implementation manner, the at least two types of image information include a depth image, an infrared light image, and a visible light image, and the processor may be further configured to implement the following steps:
constructing a three-dimensional point cloud model of a target space according to the depth image;
and superposing the infrared light image and the visible light image to the three-dimensional point cloud model of the target space, and synthesizing three-dimensional fusion image data with infrared temperature information.
Optionally, as a possible implementation manner, the processor may be further configured to implement the following steps:
importing a three-dimensional object model of a fixed object in the target space;
constructing a three-dimensional object model of the movable object in the target space according to the depth image;
and synthesizing a three-dimensional point cloud model of the target space according to the three-dimensional object model of the movable object and the three-dimensional object model of the fixed object.
Optionally, as a possible implementation manner, the processor may be further configured to implement the following steps:
inputting the depth image and the infrared light image into a preset deep learning model, and calculating to generate a three-dimensional object model in a target space;
and superposing the three-dimensional object model in the target space and the three-dimensional point cloud model in the target space.
Optionally, as a possible implementation manner, the processor may be further configured to implement the following steps:
receiving a data request message, wherein the data request message comprises position information and visual angle information of a target space;
and carrying out visual angle conversion on the three-dimensional fusion image data according to the visual angle information, and sending the three-dimensional fusion image data after the visual angle conversion to external mixed reality equipment.
Optionally, as a possible implementation manner, the processor may be further configured to implement the following steps:
and compressing the three-dimensional point cloud model by adopting a preset image compression algorithm so as to reduce the data volume of the three-dimensional fusion image data.
Optionally, as a possible implementation manner, the processor may be further configured to implement the following steps:
rendering the three-dimensional fusion image data in a cloud rendering mode, and sending the rendered three-dimensional fusion image data to external mixed reality equipment.
The memory 11 includes at least one type of readable storage medium, and the readable storage medium includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may in some embodiments be an internal storage unit of the computer device 1, for example a hard disk of the computer device 1. The memory 11 may also be an external storage device of the computer apparatus 1 in other embodiments, such as a plug-in hard disk provided on the computer apparatus 1, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 11 may also include both an internal storage unit and an external storage device of the computer apparatus 1. The memory 11 may be used not only to store application software installed in the computer apparatus 1 and various types of data, such as codes of the computer program 01, but also to temporarily store data that has been output or is to be output.
The processor 12 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor or other data Processing chip in some embodiments, and is used for executing program codes stored in the memory 11 or Processing data, such as executing the computer program 01.
The input/output bus 13 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc.
Further, the computer apparatus may further include a wired or wireless network interface 14, and the network interface 14 may optionally include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the computer apparatus 1 and other electronic devices.
Optionally, the computer device 1 may further include a user interface, the user interface may include a Display (Display), an input unit such as a Keyboard (Keyboard), and optionally, the user interface may further include a standard wired interface and a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the computer device 1 and for displaying a visualized user interface.
Fig. 7 shows only the computer device 1 with the components 11-14 and the computer program 01, it being understood by a person skilled in the art that the structure shown in fig. 7 does not constitute a limitation of the computer device 1, but may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
The present invention also provides a computer-readable storage medium having a computer program stored thereon, which when executed by a processor, performs the steps of:
receiving at least two types of image information corresponding to a target space;
calculating at least two types of image information by adopting a preset data fusion algorithm to generate three-dimensional fusion image data;
and sending the three-dimensional fused image data to external mixed reality equipment for display.
Optionally, as a possible implementation manner, the at least two types of image information include a depth image, an infrared light image, and a visible light image, and the processor may be further configured to implement the following steps:
constructing a three-dimensional point cloud model of a target space according to the depth image;
and superposing the infrared light image and the visible light image to the three-dimensional point cloud model of the target space, and synthesizing three-dimensional fusion image data with infrared temperature information.
Optionally, as a possible implementation manner, the processor may be further configured to implement the following steps:
importing a three-dimensional object model of a fixed object in the target space;
constructing a three-dimensional object model of the movable object in the target space according to the depth image;
and synthesizing a three-dimensional point cloud model of the target space according to the three-dimensional object model of the movable object and the three-dimensional object model of the fixed object.
Optionally, as a possible implementation manner, the processor may be further configured to implement the following steps:
inputting the depth image and the infrared light image into a preset deep learning model, and calculating to generate a three-dimensional object model in a target space;
and superposing the three-dimensional object model in the target space and the three-dimensional point cloud model in the target space.
Optionally, as a possible implementation manner, the processor may be further configured to implement the following steps:
receiving a data request message, wherein the data request message comprises position information and visual angle information of a target space;
and carrying out visual angle conversion on the three-dimensional fusion image data according to the visual angle information, and sending the three-dimensional fusion image data after the visual angle conversion to external mixed reality equipment.
Optionally, as a possible implementation manner, the processor may be further configured to implement the following steps:
and compressing the three-dimensional point cloud model by adopting a preset image compression algorithm so as to reduce the data volume of the three-dimensional fusion image data.
Optionally, as a possible implementation manner, the processor may be further configured to implement the following steps:
rendering the three-dimensional fusion image data in a cloud rendering mode, and sending the rendered three-dimensional fusion image data to external mixed reality equipment.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A monitoring method based on mixed reality is characterized by comprising the following steps:
receiving at least two types of image information corresponding to a target space;
calculating the at least two types of image information by adopting a preset data fusion algorithm to generate three-dimensional fusion image data;
and sending the three-dimensional fusion image data to external mixed reality equipment for display.
2. The method according to claim 1, wherein the at least two types of image information include a depth image, an infrared light image, and a visible light image, and the calculating the at least two types of image information by using a preset data fusion algorithm generates three-dimensional fused image data, including:
constructing a three-dimensional point cloud model of the target space according to the depth image;
and superposing the infrared light image and the visible light image to the three-dimensional point cloud model of the target space, and synthesizing three-dimensional fusion image data with infrared temperature information.
3. The method of claim 2, wherein constructing a three-dimensional point cloud model of the target space from the depth image comprises:
importing a three-dimensional object model of a fixed object in the target space;
constructing a three-dimensional object model of the movable object in the target space according to the depth image;
and synthesizing the three-dimensional point cloud model of the target space according to the three-dimensional object model of the movable object and the three-dimensional object model of the fixed object.
4. The method of claim 2 or 3, wherein superimposing the infrared light image and the visible light image onto the three-dimensional point cloud model of the target space comprises:
inputting the depth image and the infrared light image into a preset deep learning model, and calculating to generate a three-dimensional object model in the target space;
and superposing the three-dimensional object model in the target space and the three-dimensional point cloud model in the target space.
5. The method of claim 4, wherein prior to sending the three-dimensional fused image data to an external mixed reality device, the method further comprises:
receiving a data request message, wherein the data request message comprises the position information and the view angle information of the target space;
and performing visual angle conversion on the three-dimensional fusion image data according to the visual angle information, and sending the three-dimensional fusion image data after the visual angle conversion to external mixed reality equipment.
6. The method of claim 5, wherein prior to superimposing the infrared light image and the visible light image onto the three-dimensional point cloud model of the target space, the method further comprises:
and compressing the three-dimensional point cloud model by adopting a preset image compression algorithm so as to reduce the data volume of the three-dimensional fusion image data.
7. The method of claim 6, wherein the three-dimensional fused image data is transmitted to an external mixed reality device, comprising:
rendering the three-dimensional fusion image data in a cloud rendering mode, and sending the rendered three-dimensional fusion image data to external mixed reality equipment.
8. A mixed reality based monitoring system, comprising: the system comprises a data collection module, a data fusion module and mixed reality equipment;
the data collection module is used for receiving at least two types of image information corresponding to the target space and sent by at least two types of image sensors;
the data fusion module is used for calculating the at least two types of image information by adopting a preset data fusion algorithm to generate three-dimensional fusion image data;
and the mixed reality equipment is used for receiving and displaying the three-dimensional fusion image data.
9. A computer arrangement, characterized in that the computer arrangement comprises a processor for implementing the steps of the method according to any one of claims 1 to 7 when executing a computer program stored in a memory.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program when executed by a processor implementing the steps of the method according to any one of claims 1 to 7.
CN202010537055.1A 2020-06-12 2020-06-12 Monitoring method and system based on mixed reality and related equipment Pending CN111652973A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010537055.1A CN111652973A (en) 2020-06-12 2020-06-12 Monitoring method and system based on mixed reality and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010537055.1A CN111652973A (en) 2020-06-12 2020-06-12 Monitoring method and system based on mixed reality and related equipment

Publications (1)

Publication Number Publication Date
CN111652973A (en)

Family

ID=72347651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010537055.1A Pending CN111652973A (en) 2020-06-12 2020-06-12 Monitoring method and system based on mixed reality and related equipment

Country Status (1)

Country Link
CN (1) CN111652973A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180197341A1 (en) * 2016-06-10 2018-07-12 Dirtt Environmental Solutions, Ltd. Mixed-reality architectural design environment
CN107067470A (en) * 2017-04-05 2017-08-18 东北大学 Portable three-dimensional reconstruction of temperature field system based on thermal infrared imager and depth camera
CN108010085A (en) * 2017-11-30 2018-05-08 西南科技大学 Target identification method based on binocular Visible Light Camera Yu thermal infrared camera
CN110992298A (en) * 2019-12-02 2020-04-10 深圳市唯特视科技有限公司 Genetic algorithm-based radiation source target identification and information analysis method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310213A (en) * 2023-02-23 2023-06-23 北京百度网讯科技有限公司 Processing method and device of three-dimensional object model, electronic equipment and readable storage medium
CN116310213B (en) * 2023-02-23 2023-10-24 北京百度网讯科技有限公司 Processing method and device of three-dimensional object model, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN107223269B (en) Three-dimensional scene positioning method and device
CN109829981B (en) Three-dimensional scene presentation method, device, equipment and storage medium
US20170186219A1 (en) Method for 360-degree panoramic display, display module and mobile terminal
CN104484033A (en) BIM based virtual reality displaying method and system
JP6619871B2 (en) Shared reality content sharing
CN113574863A (en) Method and system for rendering 3D image using depth information
Jia et al. 3D image reconstruction and human body tracking using stereo vision and Kinect technology
CN110288692B (en) Illumination rendering method and device, storage medium and electronic device
CN107396069A (en) Monitor methods of exhibiting, apparatus and system
CN112039937B (en) Display method, position determination method and device
CN106683163B (en) Imaging method and system for video monitoring
JP2019008623A (en) Information processing apparatus, information processing apparatus control method, computer program, and storage medium
CN106980378B (en) Virtual display method and system
CN111696215A (en) Image processing method, device and equipment
EP3229482A1 (en) Master device, slave device, and control method therefor
US20170330384A1 (en) Product Image Processing Method, and Apparatus and System Thereof
KR20180120456A (en) Apparatus for providing virtual reality contents based on panoramic image and method for the same
JP2016012168A (en) Information sharing system
CN108932055B (en) Method and equipment for enhancing reality content
CN111652973A (en) Monitoring method and system based on mixed reality and related equipment
CN109002162B (en) Scene switching method, device, terminal and computer storage medium
CN113870439A (en) Method, apparatus, device and storage medium for processing image
CN109660508A (en) Data visualization method, electronic device, computer equipment and storage medium
CN112995491B (en) Video generation method and device, electronic equipment and computer storage medium
CN109949396A (en) A kind of rendering method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination