CN111127822B - Augmented reality-based fire-fighting auxiliary method and intelligent wearable equipment - Google Patents


Info

Publication number
CN111127822B
CN111127822B (application number CN202010226489.XA)
Authority
CN
China
Prior art keywords
image
fire
intelligent wearable
wearable device
information
Prior art date
Legal status
Active
Application number
CN202010226489.XA
Other languages
Chinese (zh)
Other versions
CN111127822A
Inventor
钟张翼
Current Assignee
Hangzhou Rongmeng Intelligent Technology Co ltd
Original Assignee
Hangzhou Rongmeng Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Rongmeng Intelligent Technology Co ltd
Priority to CN202010226489.XA
Publication of CN111127822A
Application granted
Publication of CN111127822B

Classifications

    • G08B 17/125: Fire alarms actuated by the presence of radiation or particles, using a video camera to detect fire or smoke
    • G01C 21/20: Navigation; instruments for performing navigational calculations
    • G02B 27/017: Head-up displays, head mounted
    • G02B 27/0172: Head mounted, characterised by optical features
    • G06T 19/006: Manipulating 3D models or images for computer graphics; mixed reality
    • G02B 2027/0138: Head-up displays comprising image capture systems, e.g. a camera
    • G02B 2027/014: Head-up displays comprising information/image processing systems

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Optics & Photonics (AREA)
  • Computer Graphics (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Emergency Management (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses an augmented reality-based fire-fighting assistance method and an intelligent wearable device. The method comprises: acquiring a target fire image; extracting, from the target fire image, an object contour that satisfies a preset shape condition; acquiring a reference fire image sent by an electronic device communicating with the intelligent wearable device, and determining the position information of the intelligent wearable device and of the electronic device; calculating an optimal escape path according to the target fire image at the position of the intelligent wearable device and the reference fire image at the position of the electronic device; emitting a first light ray capable of forming a virtual image, the virtual image containing the optimal escape path and the object contour; receiving a second light ray capable of forming a live-action image; and combining the first light ray and the second light ray to present a combined image. By wearing the intelligent wearable device the user keeps both hands free, and the object contours in front of the user are extracted from the target fire image and presented, so that fire rescue work can be carried out more efficiently.

Description

Augmented reality-based fire-fighting auxiliary method and intelligent wearable equipment
Technical Field
The invention relates to the technical field of augmented reality, and in particular to an augmented reality-based fire-fighting assistance method and an intelligent wearable device.
Background
Firefighters play a vital role in dealing with fire hazards. Generally, when dealing with a fire, a firefighter needs to wear a thermally insulated fireproof suit and a head mask before going to the front line of the fire to carry out rescue.
However, a firefighter is usually unfamiliar with the terrain of the fire scene, and even with the necessary protective measures in place, the scene is often filled with dense smoke that blocks the firefighter's field of vision. As a result the firefighter cannot carry out rescue effectively; for example, trapped people cannot be recognised with the naked eye through the dense smoke, so they cannot be rescued in time.
To make fire rescue work more effective, the conventional art provides a thermal imager that detects the temperature distribution of each region of a fire scene and thereby identifies the trapped people located in each region. In practice, however, the firefighter has to hold the thermal imager while surveying the scene, which greatly restricts the firefighter's working flexibility during rescue.
Disclosure of Invention
An object of an embodiment of the present invention is to provide a fire-fighting assistance method and an intelligent wearable device based on augmented reality, which can improve fire-fighting rescue efficiency.
In order to solve the above technical problems, embodiments of the present invention provide the following technical solutions:
in a first aspect, an embodiment of the present invention provides an augmented reality-based fire fighting assistance method, which is applied to an intelligent wearable device, and the method includes:
acquiring a target fire image;
extracting an object contour meeting a preset shape condition from the target fire image;
acquiring a reference fire image sent by electronic equipment communicating with the intelligent wearable equipment;
determining position information of the intelligent wearable device and the electronic device, wherein the intelligent wearable device and the electronic device are located at different positions in a fire scene;
calculating an optimal escape path according to a target fire image under the position information of the intelligent wearable device and a reference fire image under the position information of the electronic device, wherein each escape path takes the position information of the intelligent wearable device as a starting point and takes the position of a target place as an end point;
emitting a first light ray, wherein the first light ray can form a virtual image, and the virtual image comprises the optimal escape path and the object outline;
receiving a second light ray, wherein the second light ray can form a live-action image, and the live-action image comprises the fire scene picture;
and synthesizing the first light ray and the second light ray to present a synthesized image.
Optionally, the extracting, from the target fire image, an object contour satisfying a preset shape condition includes:
processing the target fire image by using an image edge detection algorithm, and extracting an object contour meeting a preset shape condition;
rendering the object contour, wherein the virtual image contains the rendered object contour.
Optionally, before emitting the first light, the method further comprises:
and determining object information corresponding to the object contour, wherein the virtual image further comprises the object information.
Optionally, the method further comprises:
judging whether the object information of the object is the object information of the active object;
if yes, calculating the distance between the intelligent wearable device and the object, and generating prompt information according to the distance, wherein the virtual image further comprises the prompt information.
Optionally, before emitting the first light, the method further comprises:
acquiring a reference fire image sent by electronic equipment communicating with the intelligent wearable equipment;
determining position information of the intelligent wearable device and the electronic device, wherein the intelligent wearable device and the electronic device are located at different positions in the fire scene;
and calculating an optimal escape path according to the target fire image under the position information of the intelligent wearable device and the reference fire image under the position information of the electronic device, wherein each escape path takes the position information of the intelligent wearable device as a starting point and takes the position of a target place as an end point, and the virtual image further comprises the optimal escape path.
Optionally, the calculating an optimal escape path according to the target fire image under the location information of the intelligent wearable device and the reference fire image under the location information of the electronic device includes:
processing the target fire image and the reference fire image by using a fire model to obtain a flame area and a smoke area of each position in the fire scene;
calculating the danger value of each flame area and the danger value of each smoke area;
accumulating the total danger value of a flame area and/or a smoke area passed by each escape path according to the position information of the intelligent wearable device, the position information of the electronic device and the position information of a target place;
and determining the escape path with the lowest total danger value as the optimal escape path.
Optionally, the calculating the danger value of each flame area comprises:
calculating the pixel gray value of each flame area in the target fire image and the reference fire image;
and calculating the danger value of each flame area according to the pixel gray value of each flame area.
Optionally, the calculating the risk value of each flame region according to the pixel gray value of each flame region includes:
determining a pixel gray scale range corresponding to the pixel average gray scale value of each flame area;
and determining a danger value corresponding to the pixel gray scale range.
Optionally, the acquiring the target fire image includes:
tracking a head rotation angle and/or an eyeball rotation angle of a user wearing the intelligent wearable device;
and acquiring a target fire image in a visual field range corresponding to the head rotation angle and/or the eyeball rotation angle.
In a second aspect, an embodiment of the present invention provides an intelligent wearable device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the augmented reality based fire aid methods.
Compared with the prior art, the augmented reality-based fire-fighting assistance method provided by the embodiments of the invention is applied to an intelligent wearable device. First, a target fire image is acquired and an object contour satisfying a preset shape condition is extracted from it. Second, a reference fire image sent by an electronic device communicating with the intelligent wearable device is acquired, and the position information of the intelligent wearable device and of the electronic device is determined, the two devices being located at different positions in the fire scene; an optimal escape path is then calculated from the target fire image at the position of the intelligent wearable device and the reference fire image at the position of the electronic device, each candidate escape path taking the position of the intelligent wearable device as its starting point and the position of a target place as its end point. Third, a first light ray is emitted which can form a virtual image containing the object contour. Fourth, a second light ray is received which can form a live-action image of the fire scene. Finally, the first light ray and the second light ray are combined to present a combined image. Thus, on the one hand, the user keeps both hands free in the fire scene by wearing the intelligent wearable device, so that fire rescue work can be carried out more efficiently and effectively. On the other hand, even when dense smoke blocks the user's view, the method extracts object contours from the target fire image and displays them in front of the user, so that the user perceives the fire scene more clearly and rescue work can be implemented efficiently.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not to scale unless otherwise specified.
Fig. 1a is a schematic structural diagram of an intelligent wearable device according to an embodiment of the present invention;
FIG. 1b is a schematic view of the see-through light guide element of FIG. 1a disposed on a head-mount frame;
FIG. 1c is a schematic diagram of an embodiment of the present invention, which uses an image edge detection algorithm to process a target fire image and extract an object profile satisfying a predetermined shape condition;
FIG. 1d is a schematic view of two see-through light guide elements of FIG. 1a disposed on a head-mount frame;
FIG. 1e is a first graph of the side view angle and the display brightness of the display module shown in FIG. 1 a;
FIG. 1f is a second graph of the side view angle and the display brightness of the display module shown in FIG. 1 a;
FIG. 1g is a third plot of the side view angle and display brightness of the display module shown in FIG. 1 a;
fig. 2a is a schematic diagram of the position relationship between the display module and the face of the user when the intelligent wearable device shown in fig. 1a is worn;
FIG. 2b is a schematic view of the display module shown in FIG. 1a being rotated;
FIG. 3a is a schematic imaging diagram of the smart wearable device shown in FIG. 1 a;
fig. 3b is a schematic diagram of the smart wearable device displaying object information according to the embodiment of the present invention;
fig. 3c is a schematic diagram illustrating that the intelligent wearable device broadcasts object information using a voice broadcast function according to an embodiment of the present invention;
fig. 3d is a schematic diagram of the intelligent wearable device implementing navigation using a navigation function according to the embodiment of the present invention;
fig. 3e is a schematic diagram of the smart wearable device according to the embodiment of the present invention displaying environmental information in a fire scene;
FIG. 4 is a schematic view of the smart wearable device shown in FIG. 1a when connected to an external device for operation;
FIG. 5 is a schematic view of an escape path provided by an embodiment of the present invention;
fig. 6 is a flowchart illustrating a fire fighting assistance method based on augmented reality according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1a, an embodiment of the present invention provides an intelligent wearable device, where a total weight of the intelligent wearable device is less than 350 g, and the intelligent wearable device includes: a head-mounted frame 11, two display modules 12, two see-through light guide elements 13. The see-through light guide element 13 is an optical composite device capable of displaying a part of an actual image and a part of a generated virtual image.
The display module 12 and the see-through light guide element 13 are both disposed on the head-mount frame 11, and the head-mount frame 11 fixes the display module 12 and the see-through light guide element 13. The display module 12 is disposed on the upper side of the see-through light guide element 13, and light emitted from the display module 12 can pass through the see-through light guide element 13 and then be transmitted to human eyes. Optionally, the display module 12 may also be located at the side of the see-through light guide element 13.
The intelligent wearable device further comprises a main board 17, which is arranged on the head-mounted frame 11 and located between the two display modules 12. The main board 17 carries a processor for processing the virtual image signal and displaying the virtual image information on the display modules 12.
Referring to fig. 1b, the head-mounted frame 11 is further provided with a monocular camera 111, a binocular/multi-view camera 112, an eye tracking camera 113, a gyroscope 114, an accelerometer 115, a magnetometer 116, a depth-of-field sensor 117, an ambient light sensor 118 and/or a distance sensor 119.
The monocular camera 111, the binocular/multi-view camera 112, the eye tracking camera 113, the gyroscope 114, the accelerometer 115, the magnetometer 116, the depth-of-field sensor 117, the ambient light sensor 118 and/or the distance sensor 119 are electrically connected to the main board 17.
Specifically, the monocular camera 111 is a color monocular camera placed at the front of the head-mounted frame 11. When the user wears the intelligent wearable device, the monocular camera 111 faces away from the user's face and can be used to take photographs.
In some embodiments, the head-mounted frame 11 is further provided with a bone conduction earphone for transmitting sound, and in a fire scene with a noisy and severe environment, the bone conduction earphone can clearly transmit the sound to a fireman, so that the fireman can effectively perform rescue work.
In the embodiment of the present invention, the head-mounted frame 11 is adapted to be worn on the head of the user, and each of the see-through light guide elements 13 has an inward surface facing towards the eyes of the user. The camera 111 captures an image in a fire scene to obtain a fire image, and transmits the fire image to the main board 17, a processor in the main board 17 processes the fire image, and an object contour meeting a preset shape condition is extracted from a target fire image, wherein the object contour can be a human body contour, an animal contour, a gas bottle contour, other explosive substance contours, and the like.
For example, referring to fig. 1c, the processor processes the target fire image using an image edge detection algorithm, extracts an object contour 1c1 satisfying a preset shape condition, and renders the object contour 1c1, the virtual image containing the rendered object contour 1c1. Even when dense smoke blocks the firefighter's eyes, the firefighter can clearly see each object ahead by wearing the intelligent wearable device, for example a trapped person hidden in the smoke, and can carry out the rescue. Conversely, if no target object contour is detected in the smoke, the firefighter can ignore that smoke-filled area.
There are various rendering manners. In some embodiments the processor transforms the style of the object contour into a preset style, which may be defined by the user, for example an easily recognisable style such as green or yellow; or the background of the object contour is rendered into an easily recognisable preset style; or the background and the lines of the object contour are rendered simultaneously, for example a black background with yellow lines or a black background with green lines. The processor detects the object contour in the target fire image through the image edge detection algorithm and then renders each connected line component of the contour, transforming its original style into the preset style. For example, if a contour line is originally black, the processor may render it yellow; human eyes are more sensitive to yellow, so the rendered lines are easier to recognise.
In some embodiments, the image edge detection algorithm includes image filtering, image enhancement, edge detection, edge localization and other processing steps; common image edge detection methods include differential edge detection and the Roberts, Sobel and Prewitt operators.
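As an illustration of the contour-extraction and rendering step described above, the following minimal sketch uses OpenCV; the Canny thresholds, the minimum-area shape condition and the yellow highlight colour are assumptions made for the example, not values specified by the patent.

```python
# Minimal sketch of contour extraction and rendering from a fire image.
# Thresholds, area filter and colours are illustrative assumptions.
import cv2
import numpy as np

def extract_and_render_contours(fire_image_bgr, min_area=500.0):
    """Extract object contours from a fire image and render them in a
    highly visible style (yellow lines on a separate virtual-image layer)."""
    gray = cv2.cvtColor(fire_image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # image filtering
    edges = cv2.Canny(blurred, 50, 150)                  # edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
    # Keep only contours that satisfy a simple "preset shape condition":
    # here, a minimum enclosed area (assumption for illustration).
    kept = [c for c in contours if cv2.contourArea(c) >= min_area]
    overlay = np.zeros_like(fire_image_bgr)              # virtual-image layer
    cv2.drawContours(overlay, kept, -1, (0, 255, 255), 2)  # yellow lines (BGR)
    return kept, overlay
```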
After the processor renders the object contour of the target fire image, the intelligent wearable device can also acquire a reference fire image sent by an electronic device communicating with it. Next, the intelligent wearable device determines its own position information and the position information of the electronic device, the two devices being located at different positions in the fire scene. The intelligent wearable device then calculates an optimal escape path from the target fire image at its own position and the reference fire image at the position of the electronic device, each candidate escape path taking the position of the intelligent wearable device as its starting point and the target place as its end point. The rendered target fire image and the optimal escape path are transmitted to the display module 12 and displayed by it: the display module 12 emits a first light ray to the see-through light guide element 13, the first light ray forming a virtual image containing the object contour and the optimal escape path. Meanwhile the external scene emits a second light ray, which is also received by the see-through light guide element 13 and forms a live-action image of the fire scene. The see-through light guide element 13 combines the first and second light rays; one combined light ray is transmitted through the inward surface of one see-through light guide element 13 into the user's left eye, and another through the inward surface of the other see-through light guide element 13 into the user's right eye, so that the user perceives a combined image of the virtual image and the real scene.
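The processor-side flow in the preceding paragraph can be summarised as a short skeleton. Every name below (camera, peers, display, locate_self, plan_escape_path) is a hypothetical placeholder that mirrors the sequence of steps in the text; it is not an API defined by the patent.

```python
# Hypothetical skeleton of the processor-side flow described above.
# All helpers are placeholders; only the sequence mirrors the text.
def assist_firefighter(camera, peers, display, locate_self, plan_escape_path,
                       target_place):
    target_image = camera.capture()                                  # target fire image
    contours, overlay = extract_and_render_contours(target_image)    # sketch above
    reference_images = {p.device_id: p.reference_image() for p in peers}
    peer_positions = {p.device_id: p.position() for p in peers}
    own_position = locate_self()
    path = plan_escape_path(target_image, reference_images,
                            own_position, peer_positions, target_place)
    # The display module emits the "first light" (virtual image: contours and
    # escape path); the see-through light guide element combines it with the
    # "second light" coming from the real scene in front of the user.
    display.show_virtual_layer(overlay, path)
```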
Referring to fig. 1d, the two see-through light guide elements 13 are disposed on the head-mounted frame 11 and are embedded in it independently of each other. Alternatively, two regions corresponding to the user's left and right eyes may be formed on a single piece of the raw material used for making the see-through light guide elements, each region having the same shape and size as one of the independently mounted see-through light guide elements 13; the final effect is one large see-through light guide element with two regions corresponding to the left and right eyes. In other words, the two see-through light guide elements 13 are integrally formed on one large piece of material, and this integral element, with its regions corresponding to the user's left and right eyes, is embedded in the head-mounted frame 11.
It should be noted that the display module 12 is detachably mounted on the head-mounted frame 11, for example, the display module is an intelligent display terminal such as a mobile phone and a tablet computer; alternatively, the display module is fixedly mounted on the head-mounted frame, for example, the display module is integrally designed with the head-mounted frame.
Two display modules 12 may be mounted on the head-mounted frame 11, and one display module 12 is correspondingly disposed for the left eye and the right eye of the user, for example, one display module 12 is used for emitting a first light ray containing left-eye virtual image information, and the other display module 12 is used for emitting another first light ray containing right-eye virtual image information. The two display modules 12 may be respectively located above the two perspective light guide elements 13 in a one-to-one correspondence manner, and when the intelligent wearable device is worn on the head of a user, the two display modules 12 are respectively located above the left eye and the right eye of the user in a one-to-one correspondence manner; the display module 12 may also be located at a side of the perspective type light guide element, that is, two perspective type light guide elements are located between two display modules, and when the intelligent wearable device is worn on the head of the user, the two display modules are located at sides of the left eye and the right eye of the user in a one-to-one correspondence manner.
A single display module 12 may also be mounted on the head-mounted frame 11, and the single display module 12 has two display regions, one display region is used for emitting a first light ray containing left-eye virtual image information, and the other display region is used for emitting another first light ray containing right-eye virtual image information.
The Display module includes, but is not limited to, LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), LCOS (Liquid Crystal On Silicon), and other types of displays.
Referring to FIG. 1e, the horizontal axis represents the side viewing angle and the vertical axis represents the display brightness. When the display module 12 is an LCD, the brightness of the display module 12 varies with the viewing angle of the viewer. For a general LCD, the side viewing angle θ at which the display brightness has fallen to 50% is usually large.
When the LCD is used in an augmented reality display system, a small side viewing angle is suitable, and the brightness of the display module 12 is concentrated in an angular region near the center. Since the augmented reality display system mainly uses this angular region, the brightness of the first light ray and the second light ray projected to the user's eyes is higher. Referring to fig. 1f, for an LCD used in an augmented reality display system the side viewing angle θ at 50% display brightness is generally smaller. Moreover, the brightness distribution of the first and second light rays emitted by such an LCD is bilaterally symmetrical about the 0-degree side viewing angle and confined within a side viewing angle of 60 degrees. That is, when the user's viewing direction is perpendicular to the display module 12, the display brightness of the first and second light rays is at its maximum; as the viewing direction shifts to either side the brightness gradually decreases, and when the side viewing angle reaches 60 degrees the brightness falls to zero.
Alternatively, referring to fig. 1g, the luminance distributions of the first and second light rays emitted from the LCD applied to the augmented reality display system may not be symmetrical about the 0-degree side view angle, and the side view angle when the display luminance is brightest may not be 0 degrees.
Referring to fig. 2a, the two display modules 12 are located above the two see-through light guide elements 13 in one-to-one correspondence. When the user wears the intelligent wearable device, the display module 12 forms an included angle a with the frontal plane of the user's head, the included angle a being 0 to 180 degrees, preferably an obtuse angle. Meanwhile, the projection of the display module 12 on the horizontal plane is perpendicular to the frontal plane.
Referring to fig. 2b, in some examples, the position of the see-through light guiding element 13 can be rotated by an angle b around a rotation axis perpendicular to the horizontal plane, wherein the angle b is 0 to 180 degrees, preferably 0 to 90 degrees. Meanwhile, the distance between the perspective light guide elements 13 corresponding to the left eye and the right eye can be adjusted through a mechanical structure on the head-mounted frame 11 to adapt to the interpupillary distance of different users, so that the comfort level and the imaging quality during use are ensured. The farthest distance between the edges of the two see-through light guiding elements 13 is less than 150 mm, i.e. the distance from the left edge of the see-through light guiding element 13 arranged corresponding to the left eye to the right edge of the see-through light guiding element 13 arranged corresponding to the right eye is less than 150 mm. Correspondingly, the display modules 12 are connected through a mechanical structure, and the distance between the display modules 12 can be adjusted, or the same effect can be achieved by adjusting the positions of the display contents on the display modules 12.
The head-mounted frame 11 may be a glasses-type frame structure that rests on the user's ears and nose bridge. A nose pad 1110 and temples 1111 are disposed on it, and the frame is fixed on the user's head through the nose pad 1110 and the temples 1111; the temples 1111 are foldable, the nose pad 1110 rests on the bridge of the user's nose, and the temples 1111 rest on the user's ears. Furthermore, the temples 1111 can be connected by an elastic band that is tightened when the glasses are worn, helping to fix the frame on the head.
Optionally, the nose pad 1110 and the temples 1111 are retractable mechanisms, and the height of the nose pad 1110 and the extended length of the temples 1111 can be adjusted separately. Similarly, the nose pad 1110 and the temples 1111 can be detachable, so that either can be replaced after removal.
Alternatively, the head-mounted frame 11 may include a nose pad and a flexible rubber band, and the nose pad and the flexible rubber band are fixed on the head of the user; or only comprises a telescopic rubber band which is fixed on the head of the user. Alternatively, the head-mounted frame 11 may be a helmet-type frame structure for wearing on the top of the head and the bridge of the nose of the user. In the embodiment of the present invention, the main function of the head-mounted frame 11 is to be worn on the head of the user and to provide support for the optical and electrical components such as the display module 12 and the see-through light guide element 13, and the head-mounted frame includes but is not limited to the above-mentioned modes.
Referring to fig. 1a and fig. 3a together, the rendered target fire image is transmitted to the display module 12. The display module 12 emits a first light ray 121, which may form a first virtual image for the left eye containing the object contour; the first light ray 121 is conducted by the inward surface 131 of one see-through light guide element 13 and enters the user's left eye 14. Similarly, the display module 12 emits another first light ray containing the rendered target fire image, which may form a first virtual image for the right eye; this light ray is conducted by the inward surface of the other see-through light guide element and enters the user's right eye, so that the user perceives the virtual image. In this way, the intelligent wearable device can take an image of a distant scene, or one the user cannot see clearly, and form a virtual image containing its content in the preset style, so that the user can clearly identify the distant scene.
In the embodiment of the present invention, when the intelligent wearable device realizes the function of augmented reality, each of the see-through light guide elements 13 further has an outward surface opposite to the inward surface; the second light rays containing the live-view image information of the external scene transmitted through the outward and inward facing surfaces of the see-through light guide element 13 enter both eyes of the user to form a visual sense of a mixed virtual image and real live view. Referring to fig. 1a again, one of the see-through light guide elements 13 further has an outward surface 132 opposite to the inward surface 131, and the second light ray 151 containing the live-view image information of the external scene transmitted through the outward surface 132 and the inward surface 131 of the see-through light guide element 13 enters the left eye 14 of the user.
The monocular camera 111 may also be a high-resolution camera for taking pictures or shooting videos; the virtual objects seen by the user can be superimposed on the captured video by software, so that what the user sees through the intelligent wearable device can be reproduced.
The binocular/multi-view camera 112 may be a monochrome or color camera, which is disposed in front of or at a side of the head mount frame 11, and is located at one side, both sides, or the periphery of the monocular camera 111. Further, the binocular/multi-view camera 112 may be provided with an infrared filter. By using the binocular camera, the depth of field information on the image can be further obtained on the basis of obtaining the environment image. By using the multi-view camera, the visual angle of the camera can be further expanded, and more environment images and depth information can be obtained.
Alternatively, each of the monocular cameras or binocular/multi-view cameras may be an RGB camera, a monochrome camera or an infrared camera.
The eye tracking camera 113 is disposed on one side of the see-through light guide element 13 and faces the user's face when the intelligent wearable device is worn. It is used to track the focus of the human eye, so that the virtual object or the specific part of the virtual screen the eye is gazing at can be tracked and specially processed. For example, specific information about an object can be displayed automatically beside the object the eyes are watching. In addition, the region the eyes are gazing at can be displayed as a high-definition virtual image while other regions are displayed at low definition, which effectively reduces the image-rendering workload without degrading the user experience.
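The gaze-dependent rendering mentioned above (full detail around the fixation point, low detail elsewhere) could look roughly like the following sketch; the window size, the downscale factor and the use of simple pixel repetition are assumptions for illustration.

```python
# Sketch of gaze-dependent rendering: full resolution around the fixation
# point, a coarsened version elsewhere. Window size and scale are assumptions.
import numpy as np

def foveated_frame(frame, gaze_xy, window=200, scale=4):
    """Return a frame that is sharp inside the gazed window and coarse outside."""
    h, w = frame.shape[:2]
    coarse = frame[::scale, ::scale]                      # low-detail background
    out = np.repeat(np.repeat(coarse, scale, axis=0), scale, axis=1)[:h, :w]
    x, y = gaze_xy
    x0, x1 = max(0, x - window // 2), min(w, x + window // 2)
    y0, y1 = max(0, y - window // 2), min(h, y + window // 2)
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]               # keep full detail at the gaze
    return out
```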
In some embodiments, the smart wearable device is further capable of determining the object information corresponding to the object contour, the object information including object attribute information, model information, size information, quantity information, distance information and the like; the object attribute information includes, for example, whether the object is a natural person, the height of a person, whether it is an animal, an important item or an explosive item, and the item's volume. Before emitting the first light ray, the smart wearable device generates the first light ray according to the object information, so that the virtual image formed by the first light ray further contains the object information. A user (such as a firefighter) wearing the smart wearable device can therefore not only find objects through dense smoke but also learn their object information, which facilitates rescue and disaster relief work.
In some embodiments, in order to help the user carry out rescue work more effectively, the intelligent wearable device may further judge whether the object information of an object is the object information of a living object. If so, it calculates the distance between itself and the object and generates prompt information according to the distance, the virtual image further containing the prompt information. For example, referring to fig. 3b, the intelligent wearable device scans the fire scene in real time through a ToF (time of flight) depth camera combined with a thermal imaging camera and displays the object contours; it also judges whether each object is a living object, and if so, automatically generates the object attribute information, calculates the distance between the object and the intelligent wearable device, and prompts the attribute information and distance in the intelligent wearable device, for example "open fire 20 meters directly ahead, trapped person found".
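A minimal sketch of the prompt-generation step for living objects might look as follows; the object attributes, the distance source and the message format are illustrative assumptions rather than details taken from the patent.

```python
# Sketch of prompt generation for living objects (assumed logic; the
# distance source and the message format are illustrative only).
from dataclasses import dataclass

@dataclass
class DetectedObject:
    attribute: str        # e.g. "trapped person", "gas bottle", "open fire"
    is_living: bool       # result of the thermal / ToF classification
    distance_m: float     # distance from the wearable, e.g. from a ToF depth camera

def make_prompt(obj: DetectedObject):
    """Return a prompt string for the virtual image, or None for inert objects."""
    if not obj.is_living:
        return None
    return f"{obj.attribute} about {obj.distance_m:.0f} m directly ahead"

print(make_prompt(DetectedObject("trapped person", True, 20.0)))
```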
Although visual object information and prompt information can help the firefighter carry out rescue work effectively, a fire scene is a harsh environment in which danger can arise at any moment, and the time the firefighter can spend looking at a static image is limited. Therefore, in some embodiments, the intelligent wearable device can also generate audio information and use it to prompt the firefighter. For example, referring to fig. 3c, besides generating the object attribute information and distance "trapped person found 20 meters directly ahead", the device can also broadcast the audio message "trapped person found 20 meters directly ahead".
In some embodiments, the audio information may be converted from the object attribute information and the distance by the smart wearable device, and may also be transmitted to the smart wearable device by a headset or a player connected to the smart wearable device.
Therefore, by the method, in a fire scene with a severe environment, object information, prompt information or audio information of effective active objects is provided for firefighters in time, and rescue efficiency is greatly improved.
In some embodiments, the smart wearable device may also provide a navigation function. For example: first, the position information of the smart wearable device is obtained; second, the position information of the trapped people is determined according to the target fire image; third, a terrain image corresponding to the position information is retrieved; fourth, an optimal rescue path is calculated from the terrain image, with the position of the smart wearable device as the starting point and the position of the trapped people as the end point; finally, navigation prompt information is generated and voice navigation is played in the smart wearable device according to the optimal rescue path, so that the user can reach the trapped people by following the navigation information, the virtual image containing the navigation prompt information. Referring to fig. 3d, a firefighter wearing the smart wearable device is on the second floor of a factory building; when scanning the current environment through the device, the firefighter finds from the target fire image that a life-saving signal appears at a window on the third floor, for example a trapped person waving in front of the window. The smart wearable device then retrieves a terrain image of the factory building and calculates an optimal rescue path with its own position as the starting point and the position of the trapped person as the end point, for example taking the rescue path that reaches the trapped person in the least time as the optimal one. The device then generates navigation prompt information and plays voice navigation according to the optimal rescue path, for example the prompt "go straight ahead 50 steps along the third corridor of the factory building, then turn right", with the same content played as voice navigation.
By the method, the intelligent wearable equipment can effectively improve the rescue efficiency.
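As a rough sketch of the rescue-path computation, the terrain image can be reduced to a passability grid and searched for a path from the firefighter's position to the trapped person; the grid encoding and the breadth-first search used here are stand-ins for whatever planner the device actually uses, and the example floor layout is made up.

```python
# Shortest rescue path on a simplified floor-plan grid (assumed encoding:
# 0 = passable, 1 = blocked). Breadth-first search is the simplest stand-in
# for the path computation described in the text.
from collections import deque

def rescue_path(grid, start, goal):
    """Return a list of grid cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

floor = [[0, 0, 1],
         [1, 0, 0],
         [0, 0, 0]]
print(rescue_path(floor, (0, 0), (2, 2)))
```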
In some embodiments, referring to fig. 3e, the intelligent wearable device is fitted with an air detector used to detect sampling information in the fire scene such as ambient temperature, oxygen content and harmful gases. The air detector transmits the sampling information to the intelligent wearable device, which generates the first light ray according to it, the virtual image containing the sampling information, for example: temperature 120 °C, oxygen content 5%, carbon monoxide 60%, sulfur content 4%.
In some embodiments, the environment images and distance information captured by the binocular/multi-view camera 112 can be fused with the data of the gyroscope 114, accelerometer 115 and magnetometer 116 to obtain the vital sign information of the user wearing the smart wearable device, together with pictures and/or video of the surrounding environment. Vital signs indicate the condition and criticality of a patient and mainly include heart rate, pulse, blood pressure, respiration, pain, blood oxygen, pupil changes and corneal conduction. The four major signs are respiration, body temperature, pulse and blood pressure; they are the pillars that maintain the normal activity of the body, any abnormality in them can lead to serious or fatal illness, and some diseases can in turn cause these four signs to change or deteriorate. The processor therefore obtains the user's vital sign information from the fused data and judges whether it is normal, taking the normal human vital sign ranges as the basis of judgement. The user can preset an alarm condition; when the user's vital sign information reaches the preset alarm condition, that is, when it is abnormal or an illness or accident occurs, the smart wearable device automatically raises an alarm to a hospital or a pre-stored contact. At the same time the processor decodes the video of the surrounding environment, obtains the user's geographical position, and packages and sends the user's position and the surrounding video to the target party, which may be the hospital or the contact. Thus, when the user has an accident or sudden illness, the processor judges that the vital sign information is abnormal, the device automatically alarms and sends the video captured by the camera to the target party, so that the target party can learn of the incident in time, quickly locate the user from the video, understand what happened and carry out a timely rescue.
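A simple sketch of the vital-sign check and automatic alarm could be structured as below; the normal ranges for the four major signs and the fields of the alarm payload are assumptions for illustration.

```python
# Sketch of the vital-sign check and automatic alarm described above.
# Normal ranges and the alarm payload are illustrative assumptions.
NORMAL_RANGES = {
    "respiration_per_min": (12, 20),
    "body_temperature_c": (36.0, 37.5),
    "pulse_per_min": (60, 100),
    "systolic_mmHg": (90, 140),
}

def abnormal_signs(vitals):
    """Return the names of the four major signs outside their preset range."""
    out = []
    for name, (low, high) in NORMAL_RANGES.items():
        value = vitals.get(name)
        if value is None or not (low <= value <= high):
            out.append(name)
    return out

def check_and_alarm(vitals, location, video_clip, send_alarm):
    """If any sign is abnormal, package location and surrounding video and alarm."""
    bad = abnormal_signs(vitals)
    if bad:
        send_alarm({"abnormal": bad, "location": location, "video": video_clip})
    return bad

# Example: an abnormal pulse and temperature trigger an alarm to a contact.
check_and_alarm({"respiration_per_min": 16, "body_temperature_c": 39.0,
                 "pulse_per_min": 130, "systolic_mmHg": 120},
                location=(30.25, 120.17), video_clip=b"...",
                send_alarm=print)
```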
In some embodiments, the intelligent wearable device is equipped with a wireless communication module through which it can connect directly to a cloud command platform and synchronise the fire-scene video and audio with the platform in real time. The cloud command platform can also send control instructions to the intelligent wearable device; the device generates the first light ray and voice information according to the control instruction, the virtual image formed by the first light ray containing the indication information, and the firefighter carries out the operational arrangements issued by the command platform according to the indication information or the voice information.
It is understood that the wireless communication module may support 5G, 4G, 3G, 2G, CDMA, ZigBee, Bluetooth, wireless broadband (Wi-Fi), ultra wideband (UWB), near field communication (NFC), CDMA2000, GSM, infrared (IR), ISM, RFID, UMTS/3GPP w/HSDPA, WiMAX and the like.
In some embodiments, the intelligent wearable device is provided with a microphone used to collect the sound of the surrounding environment and obtain sound collection information, which is sent to the intelligent wearable device. The device processes the sound collection information with a preset voice algorithm; when it detects that the information contains a voice data frame corresponding to a preset voice segment, it intercepts the frame, amplifies it and plays it, so that the firefighter can judge the position to be rescued from the played voice. The preset voice segment is, for example, a sharp call for help.
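The "preset voice algorithm" is not specified in detail; as a stand-in, the sketch below intercepts loud frames (for example a sharp call for help) with a short-time energy threshold and amplifies them for playback. The frame length, threshold and gain are assumptions.

```python
# Stand-in for the "preset voice algorithm": a short-time-energy detector
# that intercepts loud frames and amplifies them for playback.
import numpy as np

def intercept_and_amplify(samples, frame_len=1024, energy_threshold=0.05, gain=4.0):
    """samples: 1-D NumPy float array in [-1, 1].
    Return amplified copies of the frames whose mean energy exceeds the threshold."""
    loud_frames = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        if np.mean(frame ** 2) > energy_threshold:
            loud_frames.append(np.clip(frame * gain, -1.0, 1.0))
    return loud_frames
```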
The ambient light sensor 118 is disposed on the head-mounted frame 11 and monitors the intensity of ambient light in real time. The intelligent wearable device can adjust the brightness of the display module 12 in real time according to changes in the ambient light, ensuring a consistent display effect under different lighting conditions.
Optionally, the smart wearable device further comprises infrared/near-infrared LEDs, electrically connected to the main board 17, which provide a light source for the binocular/multi-view camera 112. Specifically, the infrared/near-infrared LEDs emit infrared light; when the light reaches an object captured by the binocular/multi-view camera 112, the object reflects it back, and the photosensitive element on the binocular/multi-view camera 112 receives the reflected light, converts it into an electrical signal and then performs imaging processing.
Referring to fig. 4, the two display modules 12 are connected to the main board 17 through a cable.
The main board 17 is further provided with a camera, a video interface, a power interface, a communication chip and a memory.
The video interface is used to connect a computer, mobile phone or other device and receive video signals. The video interface may be HDMI, DisplayPort, Thunderbolt, USB Type-C, Micro USB, MHL (Mobile High-Definition Link) or the like.
The power interface is used for supplying power by an external power supply or a battery. The power interface comprises a USB interface or other interfaces.
The communication chip is used for data interaction with the outside through a communication protocol. Specifically, it connects to the internet via Wi-Fi, WCDMA, TD-LTE or other protocols and then acquires data through the internet or connects to other intelligent wearable devices; alternatively, it connects directly to other intelligent wearable devices through a communication protocol.
The memory is used for storing data, and is mainly used for storing display data displayed in the display module 12.
When the intelligent wearable device only includes the head-mounted frame 11, the two display modules 12, the two perspective light guide elements 13, and the main board 17, all the rendering of the virtual scene and the generation of the image corresponding to the two eyes can be performed in the external device connected to the intelligent wearable device. The external device includes: computers, cell phones, tablet computers, and the like.
Specifically, the intelligent wearable device captures external image information through its camera, or receives external image or video information through the corresponding interface, decodes it and displays it on the display module 12. The external device receives the data collected by the various sensors on the augmented reality-based intelligent wearable device, processes them, adjusts the images displayed to the two eyes accordingly, and the result is reflected in the images shown on the display module 12. In this configuration, the processor on the augmented reality-based intelligent wearable device is only used to support the transmission and display of video signals and the transmission of sensor data.
Meanwhile, interaction with the user takes place through application software on the external device (computer, mobile phone, tablet computer and the like); the user can interact with the intelligent wearable device through the mouse and keyboard, touch pad or buttons of the external device. Examples of applications of this basic structure include, but are not limited to, a large-screen portable display: the smart wearable device can project the display screen at a fixed location within the user's field of view, and the user adjusts the size, position and so on of the projected screen through software on the device connected to the smart wearable device.
Further, when the augmented reality-based intelligent wearable device combines the acquired external real-scene image with the virtual image and displays the result, the display mode may be a first display mode, a second display mode or a third display mode. In the first display mode the relative angle and relative position between the virtual image and the real image are not fixed; in the second display mode the relative angle and relative position between the virtual image and the real image are fixed; in the third display mode the relative angle between the virtual image and the real image is fixed while the relative position is not.
The relationship between the first, second and third display modes and the real environment and the user's head is shown in the following table:

                      Position relative   Angle relative    Position relative   Angle relative
                      to environment      to environment    to head             to head
First display mode    Not fixed           Not fixed         Fixed               Fixed
Second display mode   Fixed               Fixed             Not fixed           Not fixed
Third display mode    Not fixed           Fixed             Fixed               Not fixed
It should be noted that the "first display mode", "second display mode", or "third display mode" may be used in combination with different virtual images, and may be determined by system software or set by a user.
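One compact way to encode the three display modes is as a pair of flags saying whether the virtual image is anchored to the environment in position and in angle; this encoding is an assumption made for illustration, not a structure defined by the patent.

```python
# Sketch of the three display modes as configuration flags (assumed encoding).
from dataclasses import dataclass

@dataclass(frozen=True)
class DisplayMode:
    position_fixed_to_environment: bool
    angle_fixed_to_environment: bool

FIRST_MODE  = DisplayMode(False, False)  # follows the head in position and angle
SECOND_MODE = DisplayMode(True,  True)   # anchored to the environment
THIRD_MODE  = DisplayMode(False, True)   # angle anchored, position follows the head
```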
In some embodiments, during acquisition of the target fire image, when the user wearing the intelligent wearable device turns the head or rotates the eyes, the head rotation angle and/or eyeball rotation angle of the user can be obtained by fusing the data of the eye tracking camera 113 with those of the gyroscope 114, accelerometer 115 and magnetometer 116. The device then acquires the target fire image within the field of view corresponding to the head rotation angle and/or eyeball rotation angle and transmits it to the processor, which performs the corresponding processing, for example displaying the object contours in the rotated target fire image.
By this method, the intelligent wearable device can display the local area the user wants to watch, improving the user experience.
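A minimal sketch of selecting the field-of-view region that corresponds to the fused head/eye rotation angle is given below; the panoramic source frame, the 60-degree field of view and the yaw-to-column mapping are assumptions for illustration.

```python
# Sketch of cutting the field-of-view region corresponding to the gaze
# direction out of a panoramic frame (mapping and field of view are assumptions).
import numpy as np

def fov_crop(panorama, yaw_deg, fov_deg=60):
    """Cut the horizontal strip of a panoramic frame centred on the gaze direction."""
    height, width = panorama.shape[:2]
    centre = int(((yaw_deg % 360) / 360.0) * width)
    half = int((fov_deg / 360.0) * width / 2)
    cols = [(centre + offset) % width for offset in range(-half, half)]
    return panorama[:, cols]
```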
In some embodiments, before emitting the first light, the smart wearable device may also acquire a reference fire image sent by an electronic device in communication with the smart wearable device. Secondly, the intelligent wearable device determines the position information of the intelligent wearable device and the position information of the electronic device, wherein the intelligent wearable device and the electronic device are located at different positions in a fire scene. And thirdly, the intelligent wearable device calculates an optimal escape path according to the target fire image under the position information of the intelligent wearable device and the reference fire image under the position information of the electronic device, wherein each escape path takes the position information of the intelligent wearable device as a starting point and the position of the target as an end point, and the virtual image further comprises the optimal escape path.
It is understood that the electronic device may be another intelligent wearable device or another suitable type of electronic product. Moreover, since each user or firefighter may use at least one electronic device, there may be a plurality of electronic devices; that is, the reference fire images sent to the intelligent wearable device may be fire images captured by the plurality of electronic devices at different locations.
Referring to fig. 5, a fire scene 500 includes a first room 51, a second room 52, a third room 53, and a fourth room 54, each of which presents flames or smoke.
Firefighters wearing intelligent wearable devices are present at various positions in the fire scene 500: the intelligent wearable device of the firefighter 511 in the first room 51 can capture a fire picture of the first room 51; the intelligent wearable device of the firefighter 521 in the second room 52 can capture a fire picture of the second room 52; the intelligent wearable device of the firefighter 551 on the left side of the corridor 55 can capture a fire picture of the left side of the corridor 55; the intelligent wearable device of the firefighter 552 in the middle of the corridor 55 can capture a fire picture of the middle of the corridor 55; and the intelligent wearable device of the firefighter 553 on the right side of the corridor 55 can capture a fire picture of the right side of the corridor 55.
Taking the first room 51 as an example, the process by which the intelligent wearable device of the firefighter 511 generates the optimal escape path is described in detail below. As described above, the firefighter 511 carries trapped people to escape, and the intelligent wearable device of the firefighter 511 and those of the other firefighters are located at different positions in the fire scene. The fire image collected by the intelligent wearable device of the firefighter 511 is the target fire image, and the fire images collected by the intelligent wearable devices of the other firefighters are reference fire images. For example, the target fire image is the fire picture of the first room 51 and contains a flame region and a smoke region, and each reference fire image may be a fire image of another room or of the corridor 55 and contains a flame region or a smoke region.
The intelligent wearable device of the firefighter 511 receives interaction requests sent by the intelligent wearable devices of the other firefighters, where each interaction request includes a reference fire image and the position information of the corresponding intelligent wearable device. Because the intelligent wearable devices of the other firefighters each capture a reference fire image at their own position, the reference fire images captured at different positions show the fire pictures at those positions, and the different reference fire images reflect the degree of flame or smoke at the corresponding positions.
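The format of an interaction request is not specified; a minimal, hypothetical sketch of such a payload, with all field names assumed for illustration, might look like this:

```python
# Hypothetical interaction-request payload; the disclosure only requires that it
# carry a reference fire image plus the position information of the sending device.
interaction_request = {
    "device_id": "helmet-521",                        # assumed identifier of the sending device
    "position": {"x": 12.4, "y": 3.1, "floor": 2},    # position information of that device
    "reference_fire_image": "room52_frame_0087.jpg",  # fire picture captured at that position
    "timestamp": 1585276800.0,
}
```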
In the present embodiment, the target place is preset by the user; for example, the target place is the nearest stairwell entrance.
Referring to fig. 5, for the firefighter 511, the existing escape paths include a first escape path composed of an OM path and an AB path, a second escape path composed of an ON path and an AB path, a third escape path composed of an OM path and an AC path, and a fourth escape path composed of an ON path and an AC path.
In general, passing through flames is more dangerous than passing through smoke. Comparing the degrees of danger of the first, second, third, and fourth escape paths shows that the fourth escape path has the lowest degree of danger and the first escape path the highest; therefore, the intelligent wearable device of the firefighter 511 selects the fourth escape path as the optimal escape path.
In some embodiments, when calculating the optimal escape path, the intelligent wearable device of the firefighter 511 first processes the target fire image and the reference fire images with a fire model to obtain the flame regions and smoke regions at each position in the fire scene, where the fire model is trained in advance with a deep learning algorithm, for example a MobileNet-SSD deep neural network. When the intelligent wearable device processes the target fire image and the reference fire images with the fire model, it determines in each image the image blocks whose shapes resemble flames or smoke and then calculates the color information of each such image block: if the RGB values of the color information meet the preset flame color values, the image block is considered a flame region; otherwise, it is considered a smoke region.
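By way of illustration, the following minimal sketch classifies a candidate image block as a flame region or a smoke region from its mean RGB values; the specific thresholds are assumptions, since the disclosure only requires that the RGB values meet preset flame color values.

```python
import numpy as np

# Threshold values are illustrative assumptions.
FLAME_R_MIN = 150          # flames are assumed to be strongly red-dominant
FLAME_G_TO_R_MAX = 0.8     # green stays below 80% of red
FLAME_B_TO_R_MAX = 0.6     # blue stays below 60% of red

def classify_block(block_rgb: np.ndarray) -> str:
    """Classify one candidate image block (H x W x 3, RGB, uint8) as flame or smoke."""
    r = float(block_rgb[..., 0].mean())
    g = float(block_rgb[..., 1].mean())
    b = float(block_rgb[..., 2].mean())
    is_flame = r >= FLAME_R_MIN and g <= r * FLAME_G_TO_R_MAX and b <= r * FLAME_B_TO_R_MAX
    return "flame" if is_flame else "smoke"
```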
Next, the intelligent wearable device calculates a danger value for each flame region and each smoke region. For example, the intelligent wearable device calculates the pixel gray value of each flame region in the target fire image and the reference fire images and derives the danger value of each flame region from it; the pixel gray value of a flame region may be its average pixel gray value, and each danger value corresponds to a pixel gray range. In the first room 51, for instance, the intelligent wearable device calculates the average pixel gray value of the flame region 5111 as 10 and that of the flame region 5112 as 13; in the corridor 55, it calculates the average pixel gray value of the flame region 5511 as 5.
To calculate the danger value of each flame region, the intelligent wearable device therefore determines the pixel gray range into which the average pixel gray value of the flame region falls and takes the danger value corresponding to that range. For example, in the present embodiment, when the pixel gray value of a flame region falls within the range 0-5, the danger value is 10; within the range 6-10, the danger value is 6; within the range 11-15, the danger value is 2.
Accordingly, the danger value of flame region 5111 is 6, the danger value of flame region 5112 is 2, and the danger value of flame region 5511 is 10.
Similarly, the intelligent wearable device calculates the pixel gray value of each smoke region in the target fire image and the reference fire images and derives the danger value of each smoke region from it. The pixel gray value of a smoke region may be its average pixel gray value; for example, in the first room 51 the intelligent wearable device calculates the average pixel gray value of the smoke region 5113 as 20 and that of the smoke region 5114 as 50, and in the corridor 55 it calculates the average pixel gray value of the smoke region 5512 as 30.
To calculate the danger value of each smoke region, the intelligent wearable device therefore determines the pixel gray range into which the average pixel gray value of the smoke region falls and takes the danger value corresponding to that range. For example, in the present embodiment, when the pixel gray value of a smoke region falls within the range 0-20, the danger value is 10; within the range 21-40, the danger value is 6; within the range 41-60, the danger value is 2.
Thus, the danger value of smoke region 5113 is 10, the danger value of smoke region 5114 is 2, and the danger value of smoke region 5512 is 6.
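Putting the two range tables together, the gray-value-to-danger-value lookup might be sketched as follows; the ranges are taken from the example above, while the fallback value for gray levels outside the listed ranges is an assumption.

```python
# Gray ranges and danger values follow the example above.
FLAME_DANGER_RANGES = [((0, 5), 10), ((6, 10), 6), ((11, 15), 2)]
SMOKE_DANGER_RANGES = [((0, 20), 10), ((21, 40), 6), ((41, 60), 2)]

def danger_value(mean_gray: float, ranges) -> int:
    for (low, high), value in ranges:
        if low <= mean_gray <= high:
            return value
    return 0  # assumed: gray levels outside the listed ranges add no danger

# danger_value(10, FLAME_DANGER_RANGES) -> 6   (flame region 5111)
# danger_value(13, FLAME_DANGER_RANGES) -> 2   (flame region 5112)
# danger_value(30, SMOKE_DANGER_RANGES) -> 6   (smoke region 5512)
```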
Third, the intelligent wearable device accumulates, according to its own position information, the position information of the electronic device, and the position information of the target place, the total danger value of the flame regions and/or smoke regions through which each escape path passes, and determines the escape path with the lowest total danger value as the optimal escape path.
With continued reference to fig. 5, the danger value of flame region 5111 is 6, the danger value of smoke region 5113 is 10, the danger value of smoke region 5114 is 2, the danger value of smoke region 5512 is 6, the danger value of smoke region 5513 is 6, the danger value of smoke region 5514 is 2, the danger value of flame region 5515 is 6, the danger value of smoke region 5516 is 6, the danger value of flame region 5517 is 10, and the danger value of flame region 5518 is 10.
The total danger value of the first escape path is 10+6+6+2+6+6+10+10=56, the total danger value of the second escape path is 2+6+6+10+10=34, the total danger value of the third escape path is 10+6+6+6=28, and the total danger value of the fourth escape path is 2+6+6=14.
Since the total danger value of the fourth escape path is the lowest, the fourth escape path is selected as the optimal escape path.
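By way of illustration, the accumulation and selection step can be sketched with the per-path danger values listed above; the path labels follow fig. 5.

```python
# Per-path danger values as listed above.
path_danger_values = {
    "first (OM + AB)":  [10, 6, 6, 2, 6, 6, 10, 10],
    "second (ON + AB)": [2, 6, 6, 10, 10],
    "third (OM + AC)":  [10, 6, 6, 6],
    "fourth (ON + AC)": [2, 6, 6],
}

totals = {name: sum(values) for name, values in path_danger_values.items()}
best_path = min(totals, key=totals.get)
print(totals)     # {'first (OM + AB)': 56, 'second (ON + AB)': 34, 'third (OM + AC)': 28, 'fourth (ON + AC)': 14}
print(best_path)  # fourth (ON + AC)
```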
By displaying the optimal escape path in the intelligent wearable device, the firefighter can quickly and safely lead the trapped people out of the fire scene.
As another aspect of the embodiments of the present invention, an embodiment of the present invention provides an augmented reality-based fire fighting assistance method applied to an intelligent wearable device. The functions of the method can be executed by means of a hardware platform, for example an electronic device of a suitable type having a processor with computing capability, such as a single-chip microcomputer, a digital signal processor (DSP), or a programmable logic controller (PLC).
Functions corresponding to the augmented reality-based fire fighting assistance method of each of the following embodiments are stored as instructions in a memory of the electronic device. When these functions are to be executed, a processor of the electronic device accesses the memory, and invokes and executes the corresponding instructions to implement them.
The memory, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the steps corresponding to the augmented reality-based fire assistance method of the embodiments described below. The processor executes the functions of the steps corresponding to the augmented reality-based fire fighting assistance method of the embodiments described below by executing the nonvolatile software program, instructions, and modules stored in the memory.
The memory may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be coupled to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the memory and, when executed by the one or more processors, perform the augmented reality based fire aid method of any of the above method embodiments, e.g., perform the various steps described in the embodiments below.
Referring to fig. 6, a method S600 for assisting fire fighting based on augmented reality includes:
S61, acquiring a target fire image;
S62, extracting an object contour meeting a preset shape condition from the target fire image;
S63, acquiring a reference fire image sent by the electronic device communicating with the intelligent wearable device;
S64, determining position information of the intelligent wearable device and the electronic device, wherein the intelligent wearable device and the electronic device are located at different positions in a fire scene;
S65, calculating an optimal escape path according to the target fire image under the position information of the intelligent wearable device and the reference fire image under the position information of the electronic device, wherein each escape path takes the position information of the intelligent wearable device as a starting point and takes the position of a target place as an end point;
S66, emitting a first light ray, wherein the first light ray can form a virtual image, and the virtual image comprises the optimal escape path and the object contour;
S67, receiving a second light ray, wherein the second light ray can form a live-action image, and the live-action image comprises the fire scene picture;
S68, synthesizing the first light ray and the second light ray to present a synthesized image.
Therefore, on the one hand, in a fire scene the user's hands remain free while wearing the intelligent wearable device, so the fire rescue work can be carried out more efficiently and effectively. On the other hand, even if dense smoke blocks the user's view, the method can extract object contours from the target fire image and display them in front of the user, so that the user understands the fire scene more clearly and can carry out the rescue work efficiently.
In some embodiments, S62 includes: processing the target fire image by using an image edge detection algorithm, and extracting an object contour meeting a preset shape condition; rendering the object contour, wherein the virtual image contains the rendered object contour.
In some embodiments, prior to emitting the first light, the method further comprises: and determining object information corresponding to the object contour, wherein the virtual image further comprises the object information.
In some embodiments, the method further comprises: judging whether the object information of the object is the object information of the active object; if yes, calculating the distance between the intelligent wearable device and the object, and generating prompt information according to the distance, wherein the virtual image further comprises the prompt information.
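By way of illustration, a minimal sketch of the distance-based prompt might be as follows; the warning threshold and the message wording are assumptions, since the disclosure only states that prompt information is generated according to the distance.

```python
def proximity_prompt(distance_m: float, warn_threshold_m: float = 3.0):
    """Return prompt text when a living object is close; otherwise None.
    The threshold and message format are illustrative assumptions."""
    if distance_m <= warn_threshold_m:
        return f"Living object about {distance_m:.1f} m ahead"
    return None
```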
In some embodiments, the calculating an optimal escape path according to the target fire image under the location information of the intelligent wearable device and the reference fire image under the location information of the electronic device includes: processing the target fire image and the reference fire image by using a fire model to obtain a flame area and a smoke area of each position in the fire scene; calculating the danger value of each flame area and the danger value of each smoke area; accumulating the total danger value of a flame area and/or a smoke area passed by each escape path according to the position information of the intelligent wearable device, the position information of the electronic device and the position information of a target place; and determining the escape path with the lowest total danger value as the optimal escape path.
In some embodiments, said calculating a hazard value for each of said flame zones comprises: calculating the pixel gray value of each flame area in the target fire image and the reference fire image; and calculating the danger value of each flame area according to the pixel gray value of each flame area.
In some embodiments, the calculating the hazard value of each flame region according to the pixel gray value of each flame region includes:
determining a pixel gray scale range corresponding to the pixel average gray scale value of each flame area;
and determining a danger value corresponding to the pixel gray scale range.
In some embodiments, the acquiring the target fire image includes:
tracking a head rotation angle and/or an eyeball rotation angle of a user wearing the intelligent wearable device;
and acquiring a target fire image in a visual field range corresponding to the head rotation angle and/or the eyeball rotation angle.
It should be noted that the description and the accompanying drawings illustrate preferred embodiments of the present invention, but the present invention may be embodied in many different forms and is not limited to the embodiments described in this specification; these embodiments are not intended as additional limitations on the present invention but are provided to make the disclosure more thorough and complete. Moreover, the above technical features may be combined with one another to form various embodiments not listed above, all of which are regarded as within the scope of the invention described in the specification. Further, modifications and variations will occur to those skilled in the art in light of the foregoing description, and all such modifications and variations are intended to fall within the scope of the appended claims.

Claims (9)

1. A fire fighting auxiliary method based on augmented reality is applied to intelligent wearable equipment and is characterized in that the method comprises the following steps:
acquiring a target fire image;
extracting an object contour meeting a preset shape condition from the target fire image;
acquiring a reference fire image sent by electronic equipment communicating with the intelligent wearable equipment;
determining position information of the intelligent wearable device and the electronic device, wherein the intelligent wearable device and the electronic device are located at different positions in a fire scene;
calculating an optimal escape path according to a target fire image under the position information of the intelligent wearable device and a reference fire image under the position information of the electronic device, wherein each escape path takes the position information of the intelligent wearable device as a starting point and takes the position of a target place as an end point;
emitting a first light ray, wherein the first light ray can form a virtual image, and the virtual image comprises the optimal escape path and the object outline;
receiving a second light ray, wherein the second light ray can form a live-action image, and the live-action image comprises the fire scene picture;
and synthesizing the first light ray and the second light ray to present a synthesized image.
2. The method of claim 1, wherein the extracting the object contour satisfying a preset shape condition from the target fire image comprises:
processing the target fire image by using an image edge detection algorithm, and extracting an object contour meeting a preset shape condition;
rendering the object contour, wherein the virtual image contains the rendered object contour.
3. The method of claim 1, wherein prior to emitting the first light, the method further comprises:
and determining object information corresponding to the object contour, wherein the virtual image further comprises the object information.
4. The method of claim 3, further comprising:
judging whether the object information of the object is the object information of the active object;
if yes, calculating the distance between the intelligent wearable device and the object, and generating prompt information according to the distance, wherein the virtual image further comprises the prompt information.
5. The method according to claim 4, wherein the calculating of the optimal escape path according to the target fire image under the position information of the intelligent wearable device and the reference fire image under the position information of the electronic device comprises:
processing the target fire image and the reference fire image by using a fire model to obtain a flame area and a smoke area of each position in the fire scene;
calculating the danger value of each flame area and the danger value of each smoke area;
accumulating the total danger value of a flame area and/or a smoke area passed by each escape path according to the position information of the intelligent wearable device, the position information of the electronic device and the position information of a target place;
and determining the escape path with the lowest total danger value as the optimal escape path.
6. The method of claim 5, wherein said calculating a hazard value for each of said flame zones comprises:
calculating the pixel gray value of each flame area in the target fire image and the reference fire image;
and calculating the danger value of each flame area according to the pixel gray value of each flame area.
7. The method of claim 6, wherein the pixel gray scale value is a pixel average gray scale value of the flame regions, each type of hazard value corresponds to a pixel gray scale range, and calculating the hazard value of each flame region according to the pixel gray scale value of each flame region comprises:
determining a pixel gray scale range corresponding to the pixel average gray scale value of each flame area;
and determining a danger value corresponding to the pixel gray scale range.
8. The method of any one of claims 1 to 7, wherein the acquiring of the target fire image comprises:
tracking a head rotation angle and/or an eyeball rotation angle of a user wearing the intelligent wearable device;
and acquiring a target fire image in a visual field range corresponding to the head rotation angle and/or the eyeball rotation angle.
9. An intelligence wearing equipment which characterized in that includes:
at least one processor; and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the augmented reality based fire aid method of any one of claims 1 to 8.
CN202010226489.XA 2020-03-27 2020-03-27 Augmented reality-based fire-fighting auxiliary method and intelligent wearable equipment Active CN111127822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010226489.XA CN111127822B (en) 2020-03-27 2020-03-27 Augmented reality-based fire-fighting auxiliary method and intelligent wearable equipment


Publications (2)

Publication Number Publication Date
CN111127822A CN111127822A (en) 2020-05-08
CN111127822B true CN111127822B (en) 2020-06-30

Family

ID=70493946

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010226489.XA Active CN111127822B (en) 2020-03-27 2020-03-27 Augmented reality-based fire-fighting auxiliary method and intelligent wearable equipment

Country Status (1)

Country Link
CN (1) CN111127822B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112911538A (en) * 2021-03-26 2021-06-04 潍坊歌尔电子有限公司 Fire-fighting communication equipment, method and system and computer readable storage medium
CN113237423B (en) * 2021-04-16 2023-09-05 北京京东乾石科技有限公司 Article volume measuring device
CN114117093B (en) * 2021-12-04 2022-06-07 特斯联科技集团有限公司 Forest and grassland fire fighting method and mobile terminal

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB8922146D0 (en) * 1989-10-02 1989-11-15 Eev Ltd Thermal camera arrangement
JP4849942B2 (en) * 2006-04-14 2012-01-11 株式会社四国総合研究所 Head-mounted infrared image viewing device
KR20190138222A (en) * 2018-06-04 2019-12-12 이형주 Evacuation route guidance system in a building based on augmented reality
CN109200499A (en) * 2018-10-16 2019-01-15 广州市酷恩科技有限责任公司 A kind of fire-fighting respirator apparatus
CN109672875A (en) * 2018-11-30 2019-04-23 迅捷安消防及救援科技(深圳)有限公司 Fire-fighting and rescue intelligent helmet, fire-fighting and rescue method and Related product
CN110708533B (en) * 2019-12-16 2020-04-14 杭州融梦智能科技有限公司 Visual assistance method based on augmented reality and intelligent wearable device

Also Published As

Publication number Publication date
CN111127822A (en) 2020-05-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant