CN117555141A - Intelligent VR glasses system - Google Patents
- Publication number
- CN117555141A (application CN202311257185.XA)
- Authority
- CN
- China
- Prior art keywords
- deflection
- image data
- positioning
- target
- staff
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/70—Labelling scene content, e.g. deriving syntactic or semantic representations
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C1/00—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
- G07C1/10—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C1/00—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
- G07C1/20—Checking timed patrols, e.g. of watchman
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/18—Status alarms
- G08B21/24—Reminder alarms, e.g. anti-loss alarms
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B3/00—Audible signalling systems; Audible personal calling systems
- G08B3/10—Audible signalling systems; Audible personal calling systems using electric transmission; using electromagnetic transmission
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0141—Head-up displays characterised by optical features characterised by the informative content of the display
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
- G02B2027/0178—Eyeglass type
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0179—Display position adjusting means not related to the information to be displayed
- G02B2027/0181—Adaptation to the pilot/driver
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A30/00—Adapting or protecting infrastructure or their operation
- Y02A30/60—Planning or developing urban green infrastructure
Abstract
The invention discloses an intelligent VR glasses system, relating to the technical field of VR glasses inspection. A track inspection module is arranged to collect real-time images in front of the VR glasses of workers who wear them to inspect urban rails at night; a target recognition module recognizes static abnormal targets on the urban rails based on an abnormal target recognition model; a target positioning unit acquires the deflection angle and deflection displacement of a worker's field of view and determines the worker's deflection directions in the horizontal and vertical directions, and corresponding guide videos are generated that lead the worker to backtrack the viewing angle. The worker is thus guided to quickly find the recognized abnormal targets; the situation in which poor night-time visual conditions and long inspection shifts prevent the worker from noticing some static abnormal targets at the first moment, and hence from finding them quickly, is avoided, and inspection quality and efficiency are improved.
Description
Technical Field
The invention relates to the technical field of VR glasses inspection, in particular to an intelligent VR glasses system.
Background
As urban rail transit networks grow rapidly, large numbers of rail transit infrastructures are successively entering their maintenance periods, and the pressure on line safety and monitoring is steadily increasing. Ensuring safe operation is the most important task in rail transit, and inspection remains the most basic and critical means of guaranteeing it; the current operation and maintenance mode is still dominated by inspection. The traditional track inspection mode is a combination of "people and railcars". With the development of VR technology, VR has also been applied to track inspection: workers can be assisted in track inspection by wearing VR glasses.
However, visual conditions during night inspection are poor, and after inspecting for a long time a worker may fail to notice a static abnormal target at the first moment. While the VR glasses capture pictures and identify and analyse abnormal targets, the worker keeps walking and looking around during inspection, so by the time the glasses recognize an abnormal target and prompt the worker, the worker has already moved away from its position; having received the prompt, the worker must turn back and often cannot quickly find the abnormal target;
in order to solve the above problems, the present invention proposes a solution.
Disclosure of Invention
The invention aims to provide an intelligent VR glasses system that solves the following problem in the prior art: because visual conditions during night inspection are poor and a worker under long inspection cannot notice some static abnormal targets at the first moment, and because the worker keeps walking and looking around while the VR glasses capture pictures and identify and analyse abnormal targets, the worker's viewing angle has already left the position of an abnormal target by the time the glasses recognize it and issue a prompt, so that the prompted worker cannot quickly find the abnormal target;
the aim of the invention can be achieved by the following technical scheme:
An intelligent VR glasses system, characterized by comprising:
a track inspection module for collecting real-time environment images in front of the VR glasses of workers who wear them to inspect urban rails at night, the track inspection module comprising a plurality of inspection units, one inspection unit corresponding to one such worker;
the inspection unit collects an image of the real environment in front of the worker's VR glasses at the current moment to obtain the worker's patrol image data at the current moment, records the acquisition moment of the patrol image data, and generates the worker's patrol record data at the current moment from the patrol image data and the corresponding acquisition moment;
the system comprises a target identification module, a target recognition module and a display module, wherein the target identification module is used for identifying a static abnormal target appearing in patrol image data of a worker, the target identification module is stored with an abnormal target identification model, and the abnormal target identification model is used for identifying the abnormal target of the image and marking all pixel points forming the identified abnormal target by adopting a semantic segmentation marking method with the same color;
the target recognition module inputs the received patrol image data of the staff carried in the patrol record data of the staff at the current moment into an abnormal target recognition model, and temporarily stores the patrol record data of the staff at the current moment if no abnormal target exists in the patrol image data of the staff at the current moment; otherwise, the target recognition module generates abnormal target characteristic data of the staff according to the patrol record data of the staff at the current moment and generates a target alarm instruction;
the positioning guide module is used for carrying out abnormal target positioning guide on the staff, and comprises a target positioning unit and a target guide unit, wherein the target positioning unit prompts the staff to find an abnormal target in a voice broadcasting mode after receiving a transmitted target warning instruction and records the corresponding time of voice broadcasting, and the target positioning unit generates abnormal target positioning data of the staff based on the current position according to a certain generation rule;
the target guiding unit guides the staff to quickly find the identified abnormal target according to a certain guiding step based on the received target positioning data of the staff at the current position.
Further, all pixel points forming the abnormal target in the worker's patrol image data carried in the worker's abnormal target characteristic data are marked with the same color by the abnormal target recognition model.
Further, the specific generation rule of generating the abnormal target positioning data of the staff based on the current position by the target positioning unit is as follows:
S11: the target positioning unit marks the acquisition moment of the patrol image data carried in the worker's abnormal target characteristic data as the positioning starting moment, marked A1, and marks the corresponding moment of the voice broadcast as the positioning ending moment, marked B1;
S12: all of the worker's patrol image data from the positioning starting moment A1 to the positioning ending moment B1 are acquired and marked C1, C2, …, Cc in order of their acquisition moments from farthest to nearest to the current moment, where c ≥ 1;
S13: the first positioning deflection area of the patrol image data C1 and the second positioning deflection area of the patrol image data C2 are acquired according to a certain acquisition rule, as follows:
S131: all pixel points in the patrol image data C1 and C2 are traversed, all deflection characteristic areas D1, D2, …, Dd in C1 are acquired, and all deflection characteristic areas E1, E2, …, Ed in C2 are acquired, where a deflection characteristic area contains no pixel points that form an abnormal target;
the deflection characteristic areas D1, D2, …, Dd and E1, E2, …, Ed are in one-to-one correspondence; that is, for any pixel point in a deflection characteristic area of C1 there is exactly one pixel point with the same RGB value in the corresponding deflection characteristic area of C2;
S132: the deflection characteristic area with the largest number of pixel points among D1, D2, …, Dd in C1 is obtained and re-marked as the first positioning deflection area, and the deflection characteristic area corresponding to it in C2 is re-marked as the second positioning deflection area;
s14: generating deflection guiding data based on the patrol image data C1 and C2 according to a certain generating step;
S15: the deflection guiding data based on the patrol image data C1 and C2, C2 and C3, …, Cc-1 and Cc are calculated and acquired according to S12 to S14, respectively, and the worker's abnormal target positioning data based on the current position is generated from them.
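The frame selection of S12 and the area selection of S132 above can be sketched as follows; the frame representation, field names and helper names are illustrative assumptions, not part of the disclosure:

```python
def order_frames(frames, a1, b1):
    """S12 sketch: keep the frames captured between the positioning starting
    moment a1 and the positioning ending moment b1, ordered from the
    acquisition moment farthest from the current moment to the nearest,
    i.e. chronologically, yielding C1, C2, ..., Cc."""
    kept = [f for f in frames if a1 <= f["t"] <= b1]
    return sorted(kept, key=lambda f: f["t"])


def pick_positioning_areas(areas_c1, areas_c2):
    """S132 sketch: areas_c1[i] corresponds one-to-one to areas_c2[i]
    (same RGB values, abnormal-target pixels already excluded).  The area
    of C1 with the most pixel points becomes the first positioning
    deflection area; its counterpart in C2 becomes the second."""
    idx = max(range(len(areas_c1)), key=lambda i: len(areas_c1[i]))
    return areas_c1[idx], areas_c2[idx]
```

The chronological order matters because each consecutive frame pair (Ci, Ci+1) later yields one step of the backtracking guidance.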
Further, the specific generation steps of the deflection guiding data based on the patrol image data C1 and C2 in S14 are as follows:
S141: an xa-ya rectangular coordinate system in units of pixels is established with the lower left corner of the patrol image data C1 as the coordinate origin, the abscissa xa and the ordinate ya being the column index and the row index in the image array of C1, respectively;
an xb-yb rectangular coordinate system in units of pixels is established with the lower left corner of the patrol image data C2 as the coordinate origin, the abscissa xb and the ordinate yb being the column index and the row index in the image array of C2, respectively;
S142: the coordinates F1(xa1, ya1) of a randomly selected pixel point of the first positioning deflection area in the xa-ya coordinate system are acquired, together with the coordinates G1(xb1, yb1) of the corresponding pixel point of the second positioning deflection area in the xb-yb coordinate system;
S143: an empty deflection dictionary J1 based on the patrol image data C1 and C2 is created;
if xa1 ≥ xb1 and ya1 ≥ yb1, the key-value pairs "Horizontal": "Left" and "vertical": "down" are added to J1, at which time J1 = {"Horizontal": "Left", "vertical": "down"};
if xa1 ≥ xb1 and ya1 < yb1, the key-value pairs "Horizontal": "Left" and "vertical": "up" are added to J1, at which time J1 = {"Horizontal": "Left", "vertical": "up"};
if xa1 < xb1 and ya1 ≥ yb1, the key-value pairs "Horizontal": "Right" and "vertical": "down" are added to J1, at which time J1 = {"Horizontal": "Right", "vertical": "down"};
if xa1 < xb1 and ya1 < yb1, the key-value pairs "Horizontal": "Right" and "vertical": "up" are added to J1, at which time J1 = {"Horizontal": "Right", "vertical": "up"}; the value of the key "Horizontal" in J1 represents the horizontal deflection direction from the pixel point F1 in the first positioning deflection area to the pixel point G1 in the second positioning deflection area, the value of the key "vertical" represents the vertical deflection direction from F1 to G1, "Left" and "Right" refer to leftward and rightward deflection in the horizontal direction, and "down" and "up" refer to downward and upward deflection in the vertical direction;
S144: the deflection displacement H1 from the pixel point F1 in the first positioning deflection area to the pixel point G1 in the second positioning deflection area is calculated using the formula H1 = √((xa1 − xb1)² + (ya1 − yb1)²); the deflection angle I1 from F1 to G1 is calculated using the formula I1 = arccos{[(xa1 − x1)(xb1 − x1) + (ya1 − y1)(yb1 − y1)] / [√((xa1 − x1)² + (ya1 − y1)²) · √((xb1 − x1)² + (yb1 − y1)²)]}, where (x1, y1) is the center point coordinate of the patrol image data C1;
the target positioning unit generates deflection guide data based on the patrol image data C1 and C2 in accordance with the deflection displacement H1, the deflection angle I1, and the deflection dictionary J1.
Further, the specific guiding steps of the target guiding unit guiding the staff to quickly find the identified abnormal target are as follows:
S21: the deflection displacement, deflection angle and deflection dictionary of the deflection guiding data based on Cc-1 and Cc, carried in the worker's abnormal target positioning data based on the current position, are acquired and marked K1, L1 and M1 respectively;
S22: if the values of the keys "Horizontal" and "vertical" in the deflection dictionary M1 are "Left" and "down", voice broadcast audio instructing the worker to deflect rightwards in the horizontal direction, deflect upwards by L1 degrees in the vertical direction and move the head by the distance K1 is generated from the deflection displacement K1 and the deflection angle L1; this audio is converted into the guide video based on the Cc-1 and Cc deflection guiding data and displayed on the display screen of the VR glasses worn by the worker, guiding the worker to deflect rightwards in the horizontal direction, deflect the head upwards by L1 degrees in the vertical direction, and move the head by the distance K1;
when the worker is detected to have completed the action of deflecting rightwards in the horizontal direction, deflecting upwards by L1 degrees in the vertical direction and moving the head by the distance K1, the guide video based on the Cc-2 and Cc-1 deflection guiding data is generated by conversion in the same way, and the worker is guided to complete the next action, the next action corresponding to that guide video;
S23: the guide videos based on the Cc-1 and Cc, Cc-2 and Cc-1, …, C1 and C2 deflection guiding data are generated in turn by conversion according to S21 to S22, and the worker is guided to complete the corresponding actions in sequence; when the action corresponding to the guide video of the C1 and C2 deflection guiding data is completed, the patrol image data carried in the worker's abnormal target characteristic data is displayed on the display screen of the worker's VR glasses for a time P1, so that the worker can quickly find the recognized abnormal target, P1 being a preset display duration.
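The direction reversal in the guiding steps above can be sketched as follows; function and key names are illustrative assumptions, and the recorded view drift is simply inverted to produce the corrective head movement:

```python
def guidance_action(h, ang, j):
    """S22 sketch: a recorded 'Left'/'down' view drift becomes a
    'right'/'up' corrective head movement of ang degrees and distance h."""
    flip = {"Left": "right", "Right": "left", "down": "up", "up": "down"}
    return {
        "horizontal": flip[j["Horizontal"]],
        "vertical": flip[j["vertical"]],
        "angle_deg": ang,
        "distance_px": h,
    }


def guidance_sequence(deflections):
    """S23 sketch: deflections[i] is (H, I, J) for the frame pair
    (Ci, Ci+1); the worker is guided from the newest pair (Cc-1, Cc)
    back to the oldest pair (C1, C2)."""
    return [guidance_action(h, a, j) for h, a, j in reversed(deflections)]
```

Reversing the pair order is what makes the guidance a backtrack: the worker retraces the view drift from the prompt moment back to the frame in which the abnormal target was captured.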
The invention has the beneficial effects that:
according to the invention, the track inspection module is arranged to collect images of real-time real environments in front of the VR glasses of the staff wearing the VR glasses at night to inspect the urban track, the target recognition module is used for recognizing the static abnormal targets in the urban track based on the abnormal target recognition model, the target positioning unit is used for acquiring deflection angles and deflection displacement of the visual field of the staff based on inspection image data collected by the VR eyes in the time period according to the current position of the staff and the time when an alarm instruction is sent, the target positioning unit is used for generating corresponding guiding videos to guide the staff to trace the visual angle, so that the staff is guided to quickly find the recognized abnormal targets, the situation that the staff cannot quickly find the abnormal targets due to the fact that the staff cannot observe the static abnormal targets in the first time under the condition of poor night visual conditions and long-time inspection is avoided, and the inspection quality and efficiency are improved.
Drawings
The invention is further described below with reference to the accompanying drawings.
Fig. 1 is a system block diagram of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1, the intelligent VR glasses system includes a track inspection module, a target recognition module, and a positioning guide module;
The track inspection module is used for collecting real-time environment images in front of the VR glasses of workers who wear them to inspect urban rails at night, and comprises a plurality of inspection units, one inspection unit corresponding to one such worker;
the inspection unit collects an image of the real environment in front of the worker's VR glasses at the current moment to obtain the worker's patrol image data at the current moment, records the acquisition moment of the patrol image data, generates the worker's patrol record data at the current moment from the patrol image data and the corresponding acquisition moment, and transmits the patrol record data to the target recognition module;
The target recognition module is used for recognizing static abnormal targets in the workers' patrol image data; it stores an abnormal target recognition model that recognizes abnormal targets in an image and marks all pixel points forming an abnormal target with the same color by a semantic segmentation marking method;
the abnormal targets in urban rail inspection include defects of the rail and foreign objects on the rail; in this embodiment, the recognizable abnormal targets include cracks, abrasion and spalling of the rail, and broken stones and fallen leaves on the rail;
the target recognition module receives the patrol record data of the staff at the current moment transmitted by the patrol unit, then inputs the patrol image data of the staff at the current moment carried in the patrol record data into an abnormal target recognition model, and temporarily stores the patrol record data of the staff at the current moment if no abnormal target exists in the patrol image data of the staff at the current moment;
on the contrary, the target recognition module generates the worker's abnormal target characteristic data from the patrol record data at the current moment and generates a target warning instruction; all pixel points forming the abnormal target in the patrol image data carried in the abnormal target characteristic data have been marked with the same color by the abnormal target recognition model, and the target recognition module transmits the abnormal target characteristic data and the target warning instruction together to the positioning guide module;
the positioning and guiding module is used for carrying out abnormal target positioning and guiding on the staff and comprises a target positioning unit and a target guiding unit;
the positioning and guiding module receives the target warning instruction transmitted by the target recognition module and the abnormal target characteristic data of the staff and transmits the target warning instruction and the abnormal target characteristic data of the staff to the target positioning unit, and the target positioning unit prompts the staff to find the abnormal target in a voice broadcasting mode after receiving the target warning instruction transmitted by the positioning and guiding module and records the corresponding time of voice broadcasting;
the target positioning unit generates abnormal target positioning data of the staff based on the current position according to a certain generation rule, and the abnormal target positioning data is specifically as follows:
S11: the target positioning unit marks the acquisition moment of the patrol image data carried in the worker's abnormal target characteristic data as the positioning starting moment, marked A1, and marks the corresponding moment of the voice broadcast as the positioning ending moment, marked B1;
S12: all of the worker's patrol image data from the positioning starting moment A1 to the positioning ending moment B1 are acquired and marked C1, C2, …, Cc in order of their acquisition moments from farthest to nearest to the current moment, where c ≥ 1, the patrol image data C1 corresponding to the positioning starting moment A1 and Cc corresponding to the positioning ending moment B1;
S13: the first positioning deflection area of the patrol image data C1 and the second positioning deflection area of the patrol image data C2 are acquired according to a certain acquisition rule, as follows:
S131: all pixel points in the patrol image data C1 and C2 are traversed, all deflection characteristic areas D1, D2, …, Dd in C1 are acquired, and all deflection characteristic areas E1, E2, …, Ed in C2 are acquired, where a deflection characteristic area contains no pixel points that form an abnormal target;
the deflection characteristic areas D1, D2, …, Dd and E1, E2, …, Ed are in one-to-one correspondence; that is, for any pixel point in a deflection characteristic area of C1 there is exactly one pixel point with the same RGB value in the corresponding deflection characteristic area of C2;
here, the total number of pixel points contained in a deflection characteristic area of C1 is the same as the total number of pixel points contained in the corresponding deflection characteristic area of C2;
S132: the deflection characteristic area with the largest number of pixel points among D1, D2, …, Dd in C1 is obtained and re-marked as the first positioning deflection area, and the deflection characteristic area corresponding to it in C2 is re-marked as the second positioning deflection area;
S14: the deflection guiding data based on the patrol image data C1 and C2 is generated according to certain generation steps, specifically as follows:
S141: an xa-ya rectangular coordinate system in units of pixels is established with the lower left corner of the patrol image data C1 as the coordinate origin, the abscissa xa and the ordinate ya being the column index and the row index in the image array of C1, respectively;
an xb-yb rectangular coordinate system in units of pixels is established with the lower left corner of the patrol image data C2 as the coordinate origin, the abscissa xb and the ordinate yb being the column index and the row index in the image array of C2, respectively;
S142: the coordinates F1(xa1, ya1) of a randomly selected pixel point of the first positioning deflection area in the xa-ya coordinate system are acquired, together with the coordinates G1(xb1, yb1) of the corresponding pixel point of the second positioning deflection area in the xb-yb coordinate system;
S143: an empty deflection dictionary J1 based on the patrol image data C1 and C2 is created;
if xa1 is not less than xb1 and ya1 is not less than yb1, then the key value pair "Horizontal": "Left" and "vertical": "Down" is added to J1, at which time J1 = { "Horizontal":
“Left”,“vertical”:“down”};
if xa1 is not less than xb1 and ya1< yb1, then the key value pair "Horizontal": "Left" and "vertical": "up" is added to J1, at which time J1 = { "Horizontal":
“Left”,“vertical”:“up”};
if xa1< xb1 and ya 1. Gtoreq.yb1, then the key pair "Horizontal" is respectively: "Right" and "vertical": "Down" is added to J1, at which time J1 = { "Horizontal": "Right", "vertical": "Down";
if xa1< xb1 and ya1< yb1, then the key value pair "Horizontal", respectively: "Right" and "vertical": "up" is added to J1, at which time J1 = { "Horizontal": "Right", "vertical": a value corresponding to a key Horizontal in the J1 is used to represent a deflection direction of the pixel point F1 in the first positioning deflection area to the pixel point G1 in the second positioning deflection area in the Horizontal direction, a value corresponding to a key vertical in the J1 is used to represent a deflection direction of the pixel point F1 in the first positioning deflection area to the pixel point G1 in the second positioning deflection area in the vertical direction, the Left and Right refer to Left and Right deflection in the Horizontal direction, respectively, and the down and up refer to downward and upward deflection in the vertical direction, respectively;
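The four branches of S143 reduce to two independent comparisons, one per axis. A minimal Python sketch of the dictionary construction, assuming pixel coordinates are plain integer tuples (the function name is illustrative, not from the patent):

```python
def build_deflection_dictionary(f1, g1):
    """Build the deflection dictionary J1 of step S143.

    f1 -- (xa1, ya1), a pixel of the first positioning deflection area (image C1)
    g1 -- (xb1, yb1), the corresponding pixel of the second area (image C2)
    """
    xa1, ya1 = f1
    xb1, yb1 = g1
    j1 = {}  # the empty deflection dictionary J1
    # Horizontal key: "Left" when xa1 >= xb1, otherwise "Right"
    j1["Horizontal"] = "Left" if xa1 >= xb1 else "Right"
    # vertical key: "down" when ya1 >= yb1, otherwise "up"
    j1["vertical"] = "down" if ya1 >= yb1 else "up"
    return j1
```

For example, `build_deflection_dictionary((5, 5), (3, 2))` yields `{"Horizontal": "Left", "vertical": "down"}`, matching the first branch above.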
s144: calculating the deflection displacement H1 from the pixel point F1 in the first positioning deflection area to the pixel point G1 in the second positioning deflection area according to a displacement formula, and calculating the deflection angle I1 from the pixel point F1 in the first positioning deflection area to the pixel point G1 in the second positioning deflection area according to an angle formula, wherein (x1, y1) is the coordinate of the center point of the inspection image data C1;
the target positioning unit generates deflection guiding data based on the patrol image data C1 and C2 according to the deflection displacement H1, the deflection angle I1 and the deflection dictionary J1;
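The formulas of S144 appear as images in the original publication and do not survive in this text. The sketch below therefore substitutes plausible stand-ins: the Euclidean pixel distance for the deflection displacement H1, and the angle between the two center-to-pixel vectors for the deflection angle I1. Both are assumptions, not the patent's actual formulas:

```python
import math

def deflection_displacement(f1, g1):
    """Deflection displacement H1 from pixel F1 to pixel G1 (step S144).
    Plain Euclidean pixel distance -- an assumed stand-in for the
    patent's formula, which is not reproduced in the text."""
    xa1, ya1 = f1
    xb1, yb1 = g1
    return math.hypot(xb1 - xa1, yb1 - ya1)

def deflection_angle(f1, g1, center):
    """Deflection angle I1 from F1 to G1, in degrees, relative to the
    center point (x1, y1) of image C1 -- likewise an illustrative
    stand-in: the signed angle between the two center-to-pixel vectors."""
    x1, y1 = center
    angle_f = math.atan2(f1[1] - y1, f1[0] - x1)
    angle_g = math.atan2(g1[1] - y1, g1[0] - x1)
    return math.degrees(angle_g - angle_f)
```

With these stand-ins, `deflection_displacement((0, 0), (3, 4))` returns `5.0`.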
s15: respectively calculating and acquiring the deflection guiding data based on the patrol image data C1 and C2, C2 and C3, …, and Cc-1 and Cc according to S12 to S14, and generating the abnormal target positioning data of the staff based on the current position from the deflection guiding data;
the target positioning unit transmits abnormal target positioning data of the staff based on the current position to the target guiding unit;
the target guiding unit receives the abnormal target positioning data of the staff based on the current position transmitted by the target positioning unit and then guides the staff according to certain guiding steps, specifically as follows:
s21: acquiring the deflection displacement, deflection angle and deflection dictionary of the deflection guiding data based on Cc-1 and Cc carried in the abnormal target positioning data based on the current position, marked K1, L1 and M1 respectively;
s22: if the values corresponding to the keys "Horizontal" and "vertical" in the deflection dictionary M1 are "Left" and "down", generating, according to the deflection displacement K1 and the deflection angle L1, voice broadcast audio instructing the staff to deflect the head rightwards in the horizontal direction and upwards by L1 degrees in the vertical direction and to move the head by the distance K1; converting the voice broadcast audio into a guide video based on the Cc-1 and Cc deflection guiding data, and displaying the guide video on the display screen of the VR glasses worn by the staff to guide the staff to deflect the head rightwards in the horizontal direction, upwards by L1 degrees in the vertical direction, and to move the head by the distance K1;
when it is detected that the staff has completed the action of deflecting the head rightwards in the horizontal direction and upwards by L1 degrees in the vertical direction and moving the head by the distance K1, converting again to generate a guide video based on the Cc-2 and Cc-1 deflection guiding data, and guiding the staff to complete the next action, wherein the next action corresponds to the guide video based on the Cc-2 and Cc-1 deflection guiding data;
s23: sequentially converting according to S21 to S22 to generate the guide videos based on the Cc-1 and Cc, Cc-2 and Cc-1, …, and C1 and C2 deflection guiding data, and guiding the staff to sequentially complete the actions corresponding to Cc-1 and Cc, Cc-2 and Cc-1, …, and C1 and C2; after the action corresponding to the guide video of the C1 and C2 deflection guiding data is completed, displaying the patrol image data carried in the received abnormal target feature data of the staff on the display screen of the VR glasses worn by the staff for a display time P1 so that the staff can quickly find the identified abnormal target, wherein P1 is a preset display duration;
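Steps S21 to S23 walk the deflection guiding data backwards, from the (Cc-1, Cc) pair down to (C1, C2), and invert each recorded deflection to steer the wearer. A minimal sketch of that ordering and inversion; the list layout and all names are assumptions for illustration:

```python
def guidance_sequence(deflection_guides):
    """Order the guidance as in S21-S23: start from the last (Cc-1, Cc)
    pair and walk back to (C1, C2), inverting each recorded deflection.

    deflection_guides -- assumed list of (H, I, J) tuples ordered
    (C1,C2), (C2,C3), ..., (Cc-1,Cc), where J is a deflection dictionary.
    """
    steps = []
    for h, i, j in reversed(deflection_guides):
        # Invert the recorded deflection: e.g. "Left"/"down" recorded
        # means guiding the head rightwards and upwards (as in S22).
        horizontal = "right" if j["Horizontal"] == "Left" else "left"
        vertical = "up" if j["vertical"] == "down" else "down"
        steps.append(
            f"deflect {horizontal} horizontally, {vertical} by {i} degrees, "
            f"move head by {h}"
        )
    return steps
```

Each string stands in for one voice broadcast / guide video of S22; a real system would render these through the VR display instead.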
in the description of the present specification, the descriptions of the terms "one embodiment," "example," "specific example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely illustrative and explanatory of the invention, as various modifications and additions may be made to the particular embodiments described, or in a similar manner, by those skilled in the art, without departing from the scope of the invention or exceeding the scope of the invention as defined in the claims.
The foregoing describes one embodiment of the present invention in detail, but the description is only a preferred embodiment of the present invention and should not be construed as limiting the scope of the invention. All equivalent changes and modifications within the scope of the present invention are intended to be covered by the present invention.
Claims (5)
1. An intelligent VR glasses system, characterized by comprising:
the track inspection module is used for collecting real-time images of the environment in front of the VR glasses of staff who wear the VR glasses to inspect urban rails at night, and comprises a plurality of inspection units, wherein each inspection unit corresponds to one member of staff wearing the VR glasses to inspect urban rails at night;
the inspection unit collects an image of the real environment in front of the VR glasses of the staff at the current moment to acquire the patrol image data of the staff at the current moment, records the acquisition moment of the patrol image data, and generates the patrol record data of the staff at the current moment from the patrol image data of the staff and the corresponding acquisition moment;
the target recognition module is used for recognizing a static abnormal target appearing in the patrol image data of the staff; the target recognition module stores an abnormal target recognition model, and the abnormal target recognition model is used for recognizing an abnormal target in an image and marking all pixel points constituting the identified abnormal target with the same color by a semantic segmentation marking method;
the target recognition module inputs the received patrol image data of the staff carried in the patrol record data of the staff at the current moment into an abnormal target recognition model, and temporarily stores the patrol record data of the staff at the current moment if no abnormal target exists in the patrol image data of the staff at the current moment; otherwise, the target recognition module generates abnormal target characteristic data of the staff according to the patrol record data of the staff at the current moment and generates a target alarm instruction;
the positioning guide module is used for carrying out abnormal target positioning guidance for the staff, and comprises a target positioning unit and a target guiding unit, wherein the target positioning unit, after receiving the transmitted target alarm instruction, prompts the staff by voice broadcast that an abnormal target has been found and records the corresponding time of the voice broadcast, and the target positioning unit generates the abnormal target positioning data of the staff based on the current position according to a certain generation rule;
the target guiding unit guides the staff to quickly find the identified abnormal target according to a certain guiding step according to the received abnormal target positioning data of the staff based on the current position.
2. The intelligent VR glasses system of claim 1, wherein all pixel points constituting the abnormal target in the patrol image data of the staff carried in the generated abnormal target feature data of the staff are marked with the same color by the abnormal target recognition model.
3. The intelligent VR glasses system of claim 1, wherein the specific generation rules for the target positioning unit to generate the abnormal target positioning data for the staff based on the current position are as follows:
s11: the target positioning unit marks the acquisition time of the patrol image data carried in the abnormal target feature data of the staff as the positioning starting time, marked A1, and marks the corresponding time of the voice broadcast as the positioning ending time, marked B1;
s12: acquiring all the patrol image data of the staff from the positioning starting time A1 to the positioning ending time B1, and marking them as C1, C2, …, Cc in order of the acquisition time corresponding to each piece of patrol image data from farthest to nearest to the current time, wherein c ≥ 1;
s13: acquiring the first positioning deflection area of the patrol image data C1 and the second positioning deflection area of the patrol image data C2 according to a certain acquisition rule, specifically as follows:
s131: traversing all pixel points in the inspection image data C1 and C2, acquiring all deflection characteristic areas D1, D2, …, Dd in the inspection image data C1, and acquiring all deflection characteristic areas E1, E2, …, Ed in the inspection image data C2, wherein the deflection characteristic areas do not include pixel points constituting an abnormal target;
the deflection characteristic areas D1, D2, …, Dd and the deflection characteristic areas E1, E2, …, Ed are in one-to-one correspondence, that is, any pixel point in a deflection characteristic area in the inspection image data C1 has exactly one corresponding pixel point with the same RGB value in the corresponding deflection characteristic area in the inspection image data C2;
s132: acquiring the deflection characteristic area with the largest number of pixel points among the deflection characteristic areas D1, D2, …, Dd in the inspection image data C1, recalibrating it as the first positioning deflection area, and recalibrating the deflection characteristic area corresponding to the first positioning deflection area in the inspection image data C2 as the second positioning deflection area;
s14: generating deflection guiding data based on the patrol image data C1 and C2 according to a certain generating step;
s15: respectively calculating and acquiring the deflection guiding data based on the patrol image data C1 and C2, C2 and C3, …, and Cc-1 and Cc according to S12 to S14, and generating the abnormal target positioning data of the staff based on the current position therefrom.
4. The intelligent VR glasses system of claim 3, wherein the specific generation steps for generating the deflection guiding data based on the patrol image data C1 and C2 in S14 are as follows:
s141: establishing an xa-ya rectangular coordinate system in units of pixels with the lower left corner of the inspection image data C1 as the origin of coordinates, wherein the abscissa xa and the ordinate ya correspond to the column index and the row index in the image array of the inspection image data C1, respectively;
establishing an xb-yb rectangular coordinate system in units of pixels with the lower left corner of the inspection image data C2 as the origin of coordinates, wherein the abscissa xb and the ordinate yb correspond to the column index and the row index in the image array of the inspection image data C2, respectively;
s142: randomly acquiring the coordinates, marked F1(xa1, ya1), of a pixel point constituting the first positioning deflection area in the xa-ya rectangular coordinate system, and acquiring the coordinates, marked G1(xb1, yb1), of the corresponding pixel point in the second positioning deflection area in the xb-yb rectangular coordinate system;
s143: creating an empty deflection dictionary J1 based on the patrol image data C1 and C2;
if xa1 ≥ xb1 and ya1 ≥ yb1, the key-value pairs "Horizontal": "Left" and "vertical": "down" are added to J1, at which time J1 = {"Horizontal": "Left", "vertical": "down"};
if xa1 ≥ xb1 and ya1 < yb1, the key-value pairs "Horizontal": "Left" and "vertical": "up" are added to J1, at which time J1 = {"Horizontal": "Left", "vertical": "up"};
if xa1 < xb1 and ya1 ≥ yb1, the key-value pairs "Horizontal": "Right" and "vertical": "down" are added to J1, at which time J1 = {"Horizontal": "Right", "vertical": "down"};
if xa1 < xb1 and ya1 < yb1, the key-value pairs "Horizontal": "Right" and "vertical": "up" are added to J1, at which time J1 = {"Horizontal": "Right", "vertical": "up"};
the value of the key "Horizontal" in J1 represents the deflection direction in the horizontal direction from the pixel point F1 in the first positioning deflection area to the pixel point G1 in the second positioning deflection area, and the value of the key "vertical" represents the deflection direction in the vertical direction from the pixel point F1 to the pixel point G1; "Left" and "Right" refer to leftward and rightward deflection in the horizontal direction, respectively, and "down" and "up" refer to downward and upward deflection in the vertical direction, respectively;
s144: calculating the deflection displacement H1 from the pixel point F1 in the first positioning deflection area to the pixel point G1 in the second positioning deflection area according to a displacement formula, and calculating the deflection angle I1 from the pixel point F1 in the first positioning deflection area to the pixel point G1 in the second positioning deflection area according to an angle formula, wherein (x1, y1) is the coordinate of the center point of the inspection image data C1;
the target positioning unit generates deflection guide data based on the patrol image data C1 and C2 in accordance with the deflection displacement H1, the deflection angle I1, and the deflection dictionary J1.
5. The intelligent VR glasses system of claim 4, wherein the specific guiding steps by which the target guiding unit guides the staff to quickly find the identified abnormal target are as follows:
s21: acquiring the deflection displacement, deflection angle and deflection dictionary of the deflection guiding data based on Cc-1 and Cc carried in the abnormal target positioning data based on the current position, marked K1, L1 and M1 respectively;
s22: if the values corresponding to the keys "Horizontal" and "vertical" in the deflection dictionary M1 are "Left" and "down", generating, according to the deflection displacement K1 and the deflection angle L1, voice broadcast audio instructing the staff to deflect the head rightwards in the horizontal direction and upwards by L1 degrees in the vertical direction and to move the head by the distance K1; converting the voice broadcast audio into a guide video based on the Cc-1 and Cc deflection guiding data, and displaying the guide video on the display screen of the VR glasses worn by the staff to guide the staff to deflect the head rightwards in the horizontal direction, upwards by L1 degrees in the vertical direction, and to move the head by the distance K1;
when it is detected that the staff has completed the action of deflecting the head rightwards in the horizontal direction and upwards by L1 degrees in the vertical direction and moving the head by the distance K1, converting again to generate a guide video based on the Cc-2 and Cc-1 deflection guiding data, and guiding the staff to complete the next action, wherein the next action corresponds to the guide video based on the Cc-2 and Cc-1 deflection guiding data;
s23: sequentially converting according to S21 to S22 to generate the guide videos based on the Cc-1 and Cc, Cc-2 and Cc-1, …, and C1 and C2 deflection guiding data, and guiding the staff to sequentially complete the actions corresponding to Cc-1 and Cc, Cc-2 and Cc-1, …, and C1 and C2; after the action corresponding to the guide video of the C1 and C2 deflection guiding data is completed, displaying the patrol image data carried in the received abnormal target feature data of the staff on the display screen of the VR glasses worn by the staff for a display time P1 so that the staff can quickly find the identified abnormal target, wherein P1 is a preset display duration.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311257185.XA CN117555141A (en) | 2023-09-27 | 2023-09-27 | Intelligent VR glasses system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117555141A true CN117555141A (en) | 2024-02-13 |
Family
ID=89817414
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311257185.XA Pending CN117555141A (en) | 2023-09-27 | 2023-09-27 | Intelligent VR glasses system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117555141A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109658397A (en) * | 2018-12-12 | 2019-04-19 | 广州地铁集团有限公司 | A kind of rail polling method and system |
US20190191805A1 (en) * | 2017-12-22 | 2019-06-27 | State Grid Hunan Electric Power Company Limited | Wearable equipment for substation maintenance mobile inspection and application method thereof |
CN110297498A (en) * | 2019-06-13 | 2019-10-01 | 暨南大学 | A kind of rail polling method and system based on wireless charging unmanned plane |
CN211787224U (en) * | 2020-04-26 | 2020-10-27 | 深圳市阡丘越科技有限公司 | Track traffic inspection equipment |
CN113726606A (en) * | 2021-08-30 | 2021-11-30 | 杭州申昊科技股份有限公司 | Abnormality detection method and apparatus, electronic device, and storage medium |
CN114582039A (en) * | 2022-03-18 | 2022-06-03 | 广东电网有限责任公司 | Intelligent inspection system, method, electronic equipment and storage medium |
US20220201985A1 (en) * | 2020-12-28 | 2022-06-30 | Zhejiang University | Patrol helmet and method for livestock and poultry farms |
CN114863489A (en) * | 2022-07-05 | 2022-08-05 | 河北工业大学 | Movable intelligent construction site auxiliary inspection method and system based on virtual reality |
CN115600124A (en) * | 2022-09-07 | 2023-01-13 | 昆明地铁运营有限公司(Cn) | Subway tunnel inspection system and inspection method |
CN116384963A (en) * | 2023-03-20 | 2023-07-04 | 国家能源集团海控新能源有限公司大广坝水电厂 | AR equipment-based equipment inspection guiding method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110826538B (en) | Abnormal off-duty identification system for electric power business hall | |
CN110738127B (en) | Helmet identification method based on unsupervised deep learning neural network algorithm | |
CN108491758B (en) | Track detection method and robot | |
CN103413395B (en) | Flue gas intelligent detecting prewarning method and device | |
CN103279949B (en) | Based on the multi-camera parameter automatic calibration system operation method of self-align robot | |
CN106128053A (en) | A kind of wisdom gold eyeball identification personnel stay hover alarm method and device | |
CN113947731B (en) | Foreign matter identification method and system based on contact net safety inspection | |
CN104966304A (en) | Kalman filtering and nonparametric background model-based multi-target detection tracking method | |
CN102759347A (en) | Online in-process quality control device and method for high-speed rail contact networks and composed high-speed rail contact network detection system thereof | |
CN112960014B (en) | Rail transit operation safety online real-time monitoring and early warning management cloud platform based on artificial intelligence | |
CN110096945B (en) | Indoor monitoring video key frame real-time extraction method based on machine learning | |
US11927944B2 (en) | Method and system for connected advanced flare analytics | |
CN113011252B (en) | Rail foreign matter intrusion detection system and method | |
CN114565845A (en) | Intelligent inspection system for underground tunnel | |
CN113807240A (en) | Intelligent transformer substation personnel dressing monitoring method based on uncooperative face recognition | |
CN113450471A (en) | Intelligent inspection system for production park | |
CN115311740A (en) | Method and system for recognizing abnormal human body behaviors in power grid infrastructure site | |
CN113487821A (en) | Power equipment foreign matter intrusion identification system and method based on machine vision | |
CN113111771A (en) | Method for identifying unsafe behaviors of power plant workers | |
CN112508911A (en) | Rail joint touch net suspension support component crack detection system based on inspection robot and detection method thereof | |
CN116824726A (en) | Campus environment intelligent inspection method and system | |
CN117555141A (en) | Intelligent VR glasses system | |
CN113487166A (en) | Chemical fiber floating filament quality detection method and system based on convolutional neural network | |
CN112183532A (en) | Safety helmet identification method based on weak supervision collaborative learning algorithm and storage medium | |
CN116152945A (en) | Under-mine inspection system and method based on AR technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |