WO2023058092A1 - Structure display system, structure display method, program - Google Patents

Structure display system, structure display method, program

Info

Publication number
WO2023058092A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
position data
dimensional position
virtual
interest
Prior art date
Application number
PCT/JP2021/036654
Other languages
French (fr)
Japanese (ja)
Inventor
剛久 金丸
尚永 大北
省太 田村
広軌 鴨林
Original Assignee
日揮株式会社
Priority date
Filing date
Publication date
Application filed by 日揮株式会社 filed Critical 日揮株式会社
Priority to PCT/JP2021/036654 priority Critical patent/WO2023058092A1/en
Publication of WO2023058092A1 publication Critical patent/WO2023058092A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • the present invention relates to a structure display system, a structure display method, and a program.
  • Patent Document 1 discloses a technique for creating an attributed 3D model of an object consisting of a plurality of interconnected components based on point cloud data on the surface of the object.
  • Patent Literature 2 discloses a technique of associating a plurality of image data and point cloud data via a common three-dimensional coordinate system and processing attribute information in association with the point cloud data.
  • an object of the present invention is to provide a structure display system, a structure display method, and a program that accurately generate a virtual object representing a structure.
  • A structure display system comprising: acquisition means for acquiring a captured image of a real space in which a structure is arranged, and three-dimensional position data in a virtual three-dimensional space corresponding to real objects appearing in the captured image; recognition means for recognizing, using a trained model, a region in which the structure appears in the captured image; selection means for selecting, from the three-dimensional position data, three-dimensional position data of interest relating to the structure based on the region recognized in the captured image; virtual object generation means for generating a virtual object representing the structure in the virtual three-dimensional space based on the three-dimensional position data of interest; and display means for displaying at least part of the captured image together with the virtual object.
  • The acquisition means acquires a first captured image obtained by photographing the real space from a first position, first three-dimensional position data corresponding to real objects appearing in the first captured image, a second captured image obtained by photographing the real space from a second position, and second three-dimensional position data corresponding to real objects appearing in the second captured image. The selection means selects first three-dimensional position data of interest relating to the structure from the first three-dimensional position data based on the region recognized in the first captured image, and selects second three-dimensional position data of interest relating to the structure from the second three-dimensional position data based on the region recognized in the second captured image. The virtual object generation means generates one virtual object based on the first three-dimensional position data of interest and the second three-dimensional position data of interest when it is determined, based on their position information in the virtual three-dimensional space, that they relate to the same structure.
  • The virtual object generation means includes partial object identification means for identifying first and second partial objects relating to the structure based on the three-dimensional position data of interest, and generation means for generating one virtual object based on the first and second partial objects when it is determined that the extension directions of the first and second partial objects match and one is arranged on the extension line of the other.
  • The virtual object generation means includes partial object identification means for identifying first and second partial objects relating to the structure based on the three-dimensional position data of interest, and generation means for generating one virtual object based on the first and second partial objects when an end of one of the first and second partial objects is continuous with the outer surface of the other.
  • The virtual object generation means includes end determination means for determining an end of the virtual object according to a change in the shape of the virtual object when generating the virtual object based on the three-dimensional position data of interest.
  • The virtual object correction means extends the virtual object generated by the virtual object generation means based on the user's input on the display screen.
  • The virtual object correction means deletes at least part of the virtual object generated by the virtual object generation means based on the user's input on the display screen.
  • The virtual object correction means connects a plurality of virtual objects generated by the virtual object generation means based on the user's input on the display screen.
  • the user's input on the display screen is an input operation along the shape of the structure.
  • A structure display system comprising attribute assignment means for assigning a predetermined attribute to part of the virtual object, wherein the display means identifiably displays the part to which the predetermined attribute has been assigned.
  • the display means color-codes and displays the part to which the predetermined attribute is assigned for identification.
  • a virtual object representing a structure can be generated with high accuracy.
  • FIG. 5 is a diagram schematically showing one pipe arranged in real space and imaging devices arranged at two different imaging positions. FIG. 6 is a diagram showing the first captured image captured by the imaging device at the first position.
  • FIG. 9 is a diagram showing an example of a virtual object.
  • FIG. 10 is a diagram showing an example of a partial object.
  • FIG. 11 is a diagram showing a screen on which a virtual object is displayed together with the first captured image.
  • FIG. 12 is a diagram showing a screen on which a virtual object is displayed together with the second captured image.
  • FIG. 13 is a flowchart showing the processing flow in the user terminal of this embodiment. FIG. 14 is a flowchart showing the processing flow of virtual object generation according to the present embodiment.
  • FIG. 1 is a diagram showing an example of the overall configuration of a structure display system according to this embodiment.
  • The structure display system 100 is a display system that enables the user to check the plant equipment comprehensively and from a bird's-eye perspective.
  • the structure display system 100 displays a captured image of plant equipment in which pipes, which are extension structures, are arranged, and a virtual object representing the pipes in a virtual three-dimensional space.
  • By displaying the virtual object representing a pipe superimposed on the captured image, the user can focus on a specific pipe on the display screen. Therefore, the user can quickly recognize on the display screen where piping requiring maintenance, inspection, or the like is located in the plant equipment. After recognizing on the display screen the piping that requires maintenance or inspection, the user can go to the actual plant facility and perform the maintenance or inspection on that piping.
  • the plant equipment may be equipment composed of a wide variety of structures, such as chemical plants, thermal power plants, and nuclear power plants.
  • a pipe is a cylindrical structure that extends in a particular direction and has a fluid flowing through it.
  • the structure display system 100 includes a user terminal 10, an imaging device 20, and a point cloud recognition system 30.
  • Each of the user terminal 10, the imaging device 20, and the point cloud recognition system 30 can be connected to a network N such as the Internet.
  • the imaging device 20 is, for example, a camera that includes an infrared ranging sensor and can generate a captured image that includes color information and depth information. That is, each pixel included in the captured image captured by the image capturing device 20 is given a value indicating the distance from the image capturing device 20 to the target in addition to the RGB values, which are color information.
  • the infrared ranging sensor may be provided independently of the photographing device 20, and may output data indicating the distance to the photographed object, that is, the position in the three-dimensional space, separately from the photographed image.
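  • Although the application does not prescribe any particular data format, the relationship between such depth-annotated pixels and three-dimensional position data can be illustrated by the following sketch, which back-projects an RGB-D image into a point cloud with a pinhole camera model. The function name, array layouts, and intrinsic parameters fx, fy, cx, cy are assumptions made for illustration only.

```python
import numpy as np

def rgbd_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project an RGB-D image into a colored point cloud.

    depth: (H, W) array of distances along the optical axis.
    rgb:   (H, W, 3) array of color values.
    fx, fy, cx, cy: pinhole intrinsics (assumed known from calibration).
    Returns (N, 3) points and (N, 3) colors for pixels with valid depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # pixels that received a range measurement
    z = depth[valid]
    x = (u[valid] - cx) * z / fx           # pinhole back-projection
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)   # (N, 3) in the camera frame
    colors = rgb[valid].reshape(-1, 3)
    return points, colors
```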
  • the imaging device 20 may be, for example, an omnidirectional camera that includes a pan/tilt mechanism that allows the imaging angle to be changed in the vertical and horizontal directions, and that generates a panoramic image that includes at least a horizontally or vertically elongated portion.
  • the imaging device 20 is not limited to this, and may be a wide-angle camera with a wide-angle lens or a fish-eye camera with a fish-eye lens.
  • In the figures referred to in this embodiment, the captured images are illustrated not as panoramic images but as images having an aspect ratio that can be displayed on a standard display (see FIGS. 4(a), 4(b), 6, 7, 11 and 12).
  • a photographed image photographed by the photographing device 20 is sent to the point cloud recognition system 30 via the network N.
  • the point cloud recognition system 30 stores a photographed image of the real space in which the pipes are arranged, and point cloud data in the virtual three-dimensional space corresponding to the real object appearing in the photographed image. Also, the point cloud recognition system 30 recognizes a point cloud indicated by point cloud data represented in a virtual three-dimensional space.
  • point cloud data is used as the three-dimensional position data in the virtual three-dimensional space corresponding to the real object, but the present invention is not limited to this.
  • the imaging device 20 may include the functions provided by the point cloud recognition system 30 .
  • The point cloud recognition system 30 may be configured by a virtual server, such as a cloud server, created virtually on the cloud.
  • FIG. 2 is a diagram showing an example of the physical configuration of a user terminal in this embodiment.
  • the user terminal 10 is a computer operated by a user.
  • the user terminal 10 is, for example, a computer such as a personal computer, a tablet terminal, a mobile phone (including a smart phone), or a wearable terminal.
  • the user terminal 10 includes a processor 12, a memory 14, a communication section 16, a display 18, and an operation section 19, for example.
  • the processor 12 is, for example, a program control device such as a CPU that operates according to a program installed in the user terminal 10.
  • the memory 14 is, for example, a storage element such as ROM or RAM, a hard disk drive, or the like.
  • the memory 14 stores programs and the like executed by the processor 12 .
  • the communication unit 16 is, for example, a communication interface for wired communication or wireless communication, and performs data communication via the network N.
  • the display 18 is a display device such as a liquid crystal display, and displays various images according to instructions from the processor 12 .
  • the display 18 may be a head-mounted display that the user can wear on his or her head. A user wearing a head-mounted display can simulate walking in plant equipment.
  • the operation unit 19 is a user interface such as a keyboard, a mouse, a touch panel, or the like, and receives user's operation input and outputs a signal indicating the content of the input to the processor 12 .
  • the programs and data described as being stored in the memory 14 may be supplied via the network N.
  • The hardware configuration of each computer described above is not limited to the above example, and various hardware can be applied. For example, a reading unit (e.g., an optical disk drive or memory card slot) for reading a computer-readable information storage medium and an input/output unit (e.g., a USB port) for inputting and outputting data to and from an external device may be included.
  • programs and data stored in an information storage medium may be supplied to a computer via a reading section or an input/output section.
  • FIG. 3 is a functional block diagram showing an example of functions implemented by a user terminal.
  • the user terminal 10 includes an acquisition unit 41, a recognition unit 42, a selection unit 43, a virtual object generation unit 44, a display unit 45, an attribute assignment unit 46, a virtual object correction unit 47, and a storage unit 48.
  • the acquisition unit 41 , the recognition unit 42 , the selection unit 43 , the virtual object generation unit 44 , the attribute assignment unit 46 , and the virtual object correction unit 47 are realized mainly by the processor 12 .
  • the storage unit 48 is realized mainly by the memory 14 .
  • Each of these functions is implemented by a computer executing the program according to the present embodiment. This program may be stored in a computer-readable information storage medium.
  • the acquisition unit 41 acquires a photographed image of a real space in which pipes, which are extending structures, are arranged, and point cloud data in a virtual three-dimensional space corresponding to the real object appearing in the photographed image.
  • the information acquired by the acquisition unit 41 may be sent from the point group recognition system 30 to the user terminal 10 via the network N and stored in the storage unit 48 . That is, the acquisition unit 41 may acquire the captured image and the point cloud data by reading them from the storage unit 48 .
  • the recognition unit 42 recognizes a region in which the pipe appears in the photographed image acquired by the acquisition unit 41, using the learned model 48c.
  • the learned model 48 c is generated by performing machine learning on a plurality of pre-photographed images showing pipes as teacher images, and is pre-stored in the storage unit 48 .
  • the trained model 48c may be learned using a known machine learning algorithm.
  • For example, the features included in the teacher images may be automatically learned in each layer of a multi-layered neural network.
  • a threshold is set for the teacher image, each pixel is labeled, and the labeled image is used as teacher data for classification by semantic segmentation.
  • a known algorithm such as instance segmentation may be used.
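  • The application does not name a specific framework, but a hedged sketch of how a trained segmentation model might be used to obtain the pipe region could look as follows. The PyTorch-style model interface, the class index, and the threshold are assumptions; any segmentation model producing per-pixel class scores would fit the same pattern.

```python
import numpy as np
import torch

def predict_pipe_mask(model: torch.nn.Module, image: np.ndarray,
                      pipe_class: int = 1, threshold: float = 0.5) -> np.ndarray:
    """Run a trained semantic-segmentation model and return a boolean pipe mask
    with the same height/width as the input image.

    `model` is assumed to output per-pixel class scores of shape
    (1, num_classes, H, W); `pipe_class` is the label index assumed to have
    been used for piping when the teacher images were annotated.
    """
    # HWC uint8 image -> NCHW float tensor in [0, 1]
    x = torch.from_numpy(image).float().permute(2, 0, 1).unsqueeze(0) / 255.0
    model.eval()
    with torch.no_grad():
        scores = model(x)                         # (1, C, H, W)
        probs = torch.softmax(scores, dim=1)[0, pipe_class]
    return probs.numpy() >= threshold             # True where piping appears
```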
  • the selection unit 43 selects point cloud data of interest relating to the pipe from the point cloud data in the virtual three-dimensional space based on the region in which the pipe appears in the captured image recognized by the recognition unit 42 .
  • The selection of the point cloud data of interest by the selection unit 43 may be performed by extracting the point cloud data of interest from the point cloud data in the virtual three-dimensional space, or by deleting, from the point cloud data in the virtual three-dimensional space, the point cloud data other than the point cloud data of interest.
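  • One plausible way (an illustrative assumption, not the claimed method) to select the point cloud data of interest from the recognized region is to project each point back into the image and keep the points that land inside the pipe mask, as sketched below. The intrinsics and names are hypothetical.

```python
import numpy as np

def select_points_of_interest(points: np.ndarray, pipe_mask: np.ndarray,
                              fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Keep the points whose projection falls inside the recognized pipe region.

    points:    (N, 3) point cloud in the camera coordinate frame.
    pipe_mask: (H, W) boolean mask produced by the trained model.
    Returns the (M, 3) point cloud data of interest.
    """
    h, w = pipe_mask.shape
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    in_front = z > 0
    z_safe = np.where(in_front, z, 1.0)           # avoid division by zero
    u = np.round(x / z_safe * fx + cx).astype(int)  # pixel column of each point
    v = np.round(y / z_safe * fy + cy).astype(int)  # pixel row of each point
    in_image = (u >= 0) & (u < w) & (v >= 0) & (v < h) & in_front
    selected = np.zeros(len(points), dtype=bool)
    selected[in_image] = pipe_mask[v[in_image], u[in_image]]
    return points[selected]
```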
  • the virtual object generation unit 44 generates a virtual object representing pipes in the virtual three-dimensional space based on the target point group data selected by the selection unit 43 .
  • The virtual object is preferably configured by polygons along the contour of the point cloud indicated by the point cloud data of interest, composed of a plurality of faces whose vertices are points belonging to the point cloud.
  • The above processing applies when point cloud data is used as the three-dimensional position data. When mesh data is used as the three-dimensional position data, the corresponding processing is to select, as the three-dimensional position data of interest, the mesh data corresponding to the piping from the mesh data of the various structures.
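  • As a rough illustration of turning the selected point cloud into a polygonal virtual object, the sketch below fits a cylinder to a pipe-like point cloud via its principal axis and tessellates it into triangles. This is only one simple assumption about how the polygons could be produced; the application does not specify the meshing method.

```python
import numpy as np

def fit_pipe_cylinder(points: np.ndarray, segments: int = 24):
    """Fit a rough cylinder to a pipe-like point cloud and return a triangle mesh.

    The dominant principal axis of the points is taken as the pipe axis and the
    mean distance from that axis as the radius; this is only a coarse stand-in
    for the polygon generation described in the embodiment.
    Returns (vertices, faces) where each face indexes into vertices.
    """
    center = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - center, full_matrices=False)
    axis = vt[0]                                   # principal (extension) direction
    t = (points - center) @ axis                   # coordinate along the axis
    radial = (points - center) - np.outer(t, axis)
    radius = np.linalg.norm(radial, axis=1).mean()
    ref = np.array([1.0, 0.0, 0.0])
    if abs(axis @ ref) > 0.9:                      # avoid a degenerate basis
        ref = np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, ref); u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    angles = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
    ring = np.cos(angles)[:, None] * u + np.sin(angles)[:, None] * v
    bottom = center + t.min() * axis + radius * ring
    top = center + t.max() * axis + radius * ring
    vertices = np.vstack([bottom, top])
    faces = []
    for i in range(segments):                      # two triangles per side quad
        j = (i + 1) % segments
        faces.append([i, j, segments + i])
        faces.append([j, segments + j, segments + i])
    return vertices, np.array(faces)
```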
  • A single virtual object may be generated based on the first point cloud data of interest and the second point cloud data of interest.
  • the first point cloud data of interest is point cloud data relating to piping selected from point cloud data corresponding to a real object appearing in a first captured image obtained by capturing an image of the real space from a first position.
  • The second point cloud data of interest is point cloud data relating to the piping selected from the point cloud data corresponding to the real objects appearing in the second captured image, which is obtained by photographing the real space from a second position different from the first position.
  • the virtual object generation unit 44 may include a partial object identification unit 44a, a generation unit 44b, and an end determination unit 44c.
  • the partial object specifying unit 44a specifies a partial object related to piping based on the point cloud data of interest.
  • When it is determined that the extending directions of the first partial object and the second partial object specified by the partial object specifying unit 44a match, and one is arranged on the extension line of the other, the generating unit 44b generates one virtual object based on the first and second partial objects. Further, the generating unit 44b generates one virtual object based on the first and second partial objects when an end of one of them is continuous with the outer surface of the other.
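  • The two geometric tests described for the generating unit 44b can be illustrated by the following sketch, which checks whether two pipe-like partial objects have matching extension directions with one lying on the extension line of the other. The tolerance values and names are assumptions.

```python
import numpy as np

def should_merge(axis1, p1, axis2, p2,
                 angle_tol_deg: float = 5.0, offset_tol: float = 0.05) -> bool:
    """Decide whether two pipe-like partial objects belong to one virtual object.

    axis1/axis2: unit vectors giving each partial object's extension direction.
    p1/p2:       representative points on each central axis (e.g. centroids).
    The partial objects are merged when their directions match and one lies on
    the extension line of the other, within the given tolerances.
    """
    axis1 = np.asarray(axis1, dtype=float); axis1 /= np.linalg.norm(axis1)
    axis2 = np.asarray(axis2, dtype=float); axis2 /= np.linalg.norm(axis2)
    cos_angle = abs(float(axis1 @ axis2))          # directions may point either way
    same_direction = cos_angle >= np.cos(np.deg2rad(angle_tol_deg))
    # Distance of p2 from the infinite line through p1 along axis1.
    d = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    offset = np.linalg.norm(d - (d @ axis1) * axis1)
    on_extension_line = offset <= offset_tol
    return same_direction and on_extension_line
```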
  • the edge determining unit 44c may determine the edge of the virtual object based on the geometric conditions of the virtual object. Specifically, the edge determining unit 44c may determine the edge of the virtual object according to the change in the shape of the virtual object.
  • a change in the shape of the virtual object is, for example, a change in the diameter of the cylindrical virtual object.
  • When the shape of the virtual object changes midway, the end determination unit 44c may determine a portion adjacent to the portion where the shape changes as the end of the virtual object. Further, the end determination unit 44c may generate a virtual object whose cross-sectional shape is constant based on part of the point cloud data of interest relating to a pipe whose cross-sectional shape changes midway.
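  • A hedged sketch of end determination by shape change: estimate the local radius of the point cloud of interest along the pipe axis and report where the radius jumps (for example, at a flange). The binning scheme and thresholds are illustrative assumptions.

```python
import numpy as np

def find_end_by_diameter_change(points: np.ndarray, axis: np.ndarray,
                                origin: np.ndarray, bin_size: float = 0.02,
                                jump_ratio: float = 1.3):
    """Return the axial coordinate where the pipe's local radius jumps.

    The points of interest are binned along the pipe axis, a radius is estimated
    per bin, and the first bin whose radius exceeds the overall median by
    `jump_ratio` (e.g., a flange or valve) marks the end of the virtual object.
    Returns None when no such change is found.
    """
    axis = axis / np.linalg.norm(axis)
    t = (points - origin) @ axis                         # position along the axis
    radial = np.linalg.norm((points - origin) - np.outer(t, axis), axis=1)
    edges = np.arange(t.min(), t.max() + bin_size, bin_size)
    idx = np.digitize(t, edges)
    centers, radii = [], []
    for b in np.unique(idx):
        sel = idx == b
        centers.append(np.mean(t[sel]))
        radii.append(np.median(radial[sel]))
    baseline = np.median(radii)
    for c, r in zip(centers, radii):
        if r > baseline * jump_ratio:                    # diameter change detected
            return c                                     # end of the pipe's virtual object
    return None
```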
  • the display unit 45 displays, on the display 18, a part of the photographed image showing how the piping is viewed from the photographing device 20.
  • the display unit 45 displays the virtual object generated by the virtual object generation unit 44 on the display 18 by superimposing it on the captured image.
  • the display unit 45 may identify and display a part of the virtual object to which a predetermined attribute has been assigned by the attribute assigning unit 46, which will be described later.
  • the display unit 45 may display a portion of the virtual object to which the predetermined attribute has been assigned by the attribute assigning unit 46 in different colors for identification.
  • the display unit 45 may display a mark indicating that the attribute imparting unit 46 has imparted a predetermined attribute. Also, the display unit 45 may display the mark in an identifiable manner according to the attribute. For example, the display unit 45 may display a blue circle mark on a portion to which a certain attribute is assigned, and a red circle mark on a portion to which another attribute is assigned.
  • attribute assigning unit 46 assigns a predetermined attribute to part of the virtual object.
  • The portion to which an attribute is assigned may be, for example, a virtual object indicating a specific pipe to be maintained among a plurality of pipes, or a portion of a virtual object corresponding to the portion of one pipe that is to be maintained.
  • Predetermined attributes may be given to portions of the virtual object that correspond to portions of piping that have characteristic shapes and properties, such as branch portions and joint portions, for example. Also, the predetermined attribute may be assigned to a portion of the virtual object corresponding to a portion of the pipe that is likely to deteriorate due to wear or the like, for example.
  • the virtual object correction unit 47 corrects the virtual object generated by the virtual object generation unit 44 based on the user's input on the display screen. Specifically, for example, the virtual object correction unit 47 divides the virtual object generated by the virtual object generation unit 44 based on the user's input on the display screen. Also, for example, the virtual object correction unit 47 extends the virtual object generated by the virtual object generation unit 44 based on the user's input on the display screen. Also, for example, the virtual object correction unit 47 deletes at least part of the virtual object generated by the virtual object generation unit 44 based on the user's input on the display screen. Also, for example, the virtual object correction unit 47 connects a plurality of virtual objects generated by the virtual object generation unit 44 based on the user's input on the display screen.
  • the user's input on the display screen should be based on the input operation along the shape of the piping.
  • The storage unit 48 includes an image storage unit 48a that stores the captured images supplied from the point cloud recognition system 30 via the network N, and a point cloud data storage unit 48b that stores the point cloud data in the virtual three-dimensional space corresponding to the real objects appearing in the captured images.
  • The storage unit 48 also stores the pre-generated learned model 48c.
  • FIG. 4 is a diagram for explaining selection of target point cloud data in this embodiment.
  • FIG. 5 is a diagram schematically showing one pipe arranged in real space and imaging devices arranged at two different imaging positions.
  • FIG. 6 is a diagram showing a first captured image captured by the imaging device at the first position.
  • FIG. 7 is a diagram showing a second captured image captured by the imaging device at the second position.
  • FIG. 8 is a diagram showing an example of a point group indicated by target point group data.
  • FIG. 9 is a diagram showing an example of a virtual object.
  • FIG. 10 is a diagram showing an example of a partial object.
  • the photographing device 20 is preferably an omnidirectional camera, but in order to avoid complicating the explanation, an example in which the photographing range is less than 180° will be described here.
  • The coordinate system of the point cloud data in the virtual three-dimensional space corresponding to the first captured image shown in FIG. 6 and the coordinate system of the point cloud data in the virtual three-dimensional space corresponding to the second captured image shown in FIG. 7 are common.
  • Such sharing of the coordinate system may be realized, for example, by recognizing the relative position of the second position with respect to the first position in the photographing device 20 .
  • The sharing of the coordinate system may also be realized, for example, by fitting either the shape based on the point cloud indicated by the point cloud data corresponding to the first captured image or the shape based on the point cloud indicated by the point cloud data corresponding to the second captured image to the other by shape matching.
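  • One common way such shape fitting could be realized (an assumption for illustration; the application does not prescribe an algorithm) is iterative-closest-point style alignment, sketched below with NumPy only.

```python
import numpy as np

def icp_align(source: np.ndarray, target: np.ndarray, iterations: int = 20):
    """Roughly align `source` points onto `target` points (both (N, 3)).

    Each iteration pairs every source point with its nearest target point and
    solves the best rigid transform with the Kabsch/SVD method.
    Returns (R, t) such that source @ R.T + t approximates the target frame.
    """
    R = np.eye(3)
    t = np.zeros(3)
    moved = source.copy()
    for _ in range(iterations):
        # Nearest-neighbour correspondences (brute force, for clarity only).
        d2 = ((moved[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
        nearest = target[np.argmin(d2, axis=1)]
        # Kabsch: best rotation/translation mapping `moved` onto `nearest`.
        mu_s, mu_t = moved.mean(axis=0), nearest.mean(axis=0)
        H = (moved - mu_s).T @ (nearest - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:          # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_t - R_step @ mu_s
        moved = moved @ R_step.T + t_step
        R = R_step @ R
        t = R_step @ t + t_step
    return R, t
```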
  • The two captured images shown in FIGS. 6 and 7 are used here for explanation, but the number of captured images may be three or more. In that case, it is preferable that the coordinate systems of the point cloud data in the virtual three-dimensional space corresponding to the respective captured images be common to one another.
  • a configuration is adopted in which point cloud data related to piping is selected using the learned model 48c, and a virtual object representing piping is generated based on the point cloud data of interest.
  • the recognition unit 42 recognizes an area in which piping appears (hereinafter, also referred to as a piping area).
  • FIG. 4B shows an image in which the piping area is recognized, that is, an image in which the piping area is partitioned from other areas.
  • FIG. 4(c) shows the point cloud indicated by the point cloud data related to the entire plant facility in the virtual three-dimensional space, corresponding to the real object appearing in the captured image.
  • the selection unit 43 selects point cloud data of interest related to the pipe from the point cloud data in the virtual three-dimensional space based on the pipe region in the captured image recognized by the recognition unit 42 .
  • FIG. 4D shows a point group indicated by point group data of interest related to piping in the virtual three-dimensional space.
  • the virtual object generation unit 44 generates a virtual object representing pipes in the virtual three-dimensional space based on the target point cloud data selected by the selection unit 43 .
  • A virtual object can be generated with high accuracy by adopting a configuration in which a virtual object representing a pipe is generated after selecting the point cloud data of interest relating to the pipe using the trained model 48c.
  • Piping installed in plant equipment is a longitudinal structure extending in a particular direction. Therefore, one pipe may not fit in one captured image, and may be represented across a plurality of captured images. In this way, in a pipe extending over a plurality of captured images, a plurality of divided virtual objects may be generated even though the pipe is one pipe.
  • one pipe may be, for example, a continuous structure in which the same fluid continuously flows.
  • a configuration is adopted that can generate a virtual object that accurately shows one pipe based on a plurality of captured images captured from different capturing positions.
  • the acquisition unit 41 acquires the first captured image obtained by capturing the real space from the first position.
  • the acquisition unit 41 also acquires a second captured image obtained by capturing the real space from the second position.
  • FIG. 5 shows how the pipe P existing in the real space is photographed by the photographing device 20 from the first position and the second position.
  • illustration of structures other than the pipe P existing in the real space is omitted.
  • part of the pipe P is shown in the captured image captured by the imaging device 20 at the first position.
  • another part of the pipe P is shown in the photographed image photographed by the photographing device 20 at the second position.
  • the acquisition unit 41 also acquires first point cloud data corresponding to the real object appearing in the first captured image and second point cloud data corresponding to the real object appearing in the second captured image. Then, the selection unit 43 selects the first target point cloud data related to the pipe from the first point cloud data based on the pipe region recognized in the first captured image by the learned model 48c. Further, the selection unit 43 selects second target point cloud data related to piping from the second point cloud data based on the piping region recognized in the second captured image by the learned model 48c.
  • FIG. 8 shows part of the point group PA1 indicated by the first target point cloud data and part of the point group PA2 indicated by the second target point cloud data.
  • FIG. 8 shows that part of the point cloud PA1 indicated by the first point cloud data of interest and part of the point cloud PA2 indicated by the second point cloud data of interest are arranged so as to overlap each other in the virtual three-dimensional space. That is, the first point cloud data of interest and the second point cloud data of interest include common point cloud data of interest, which is common position information.
  • the overlapping portion of the point cloud PA1 and the point cloud PA2, that is, the point cloud indicated by the common point cloud data of interest is represented as a point cloud PA3.
  • When the point cloud PA3 indicated by the common point cloud data of interest exists, the virtual object generation unit 44 generates one virtual object based on the first point cloud data of interest and the second point cloud data of interest. That is, when it is determined, based on the position information in the virtual three-dimensional space, that the first point cloud data of interest and the second point cloud data of interest relate to the same structure, the virtual object generation unit 44 generates one virtual object based on them. Specifically, when the first point cloud data of interest and the second point cloud data of interest include position information within a predetermined distance of each other, the virtual object generation unit 44 may generate one virtual object based on them. FIG. 9 shows one virtual object OB generated by the virtual object generation unit 44 based on the point clouds indicated by the point cloud data shown in FIG. 8.
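  • The "position information within a predetermined distance" test can be sketched as follows; the distance and the minimum number of common points are illustrative assumptions.

```python
import numpy as np

def belong_to_same_structure(points_a: np.ndarray, points_b: np.ndarray,
                             distance: float = 0.01, min_common: int = 50) -> bool:
    """Decide whether two point clouds of interest relate to the same pipe.

    A point of `points_a` counts as common when some point of `points_b` lies
    within `distance` of it (the overlap PA3 in the description); the clouds
    are treated as the same structure when enough common points exist.
    """
    common = 0
    for p in points_a:
        if np.min(np.linalg.norm(points_b - p, axis=1)) <= distance:
            common += 1
            if common >= min_common:
                return True
    return False
```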
  • a configuration is adopted in which a partial object related to piping is specified based on the point cloud data of interest, and one virtual object is generated based on a plurality of specified partial objects.
  • FIG. 8 shows how the point cloud indicated by the point cloud data of interest is divided in the blind spots caused by the structure S shown in FIG. In FIG. 8, the divided portion is indicated by symbol "D".
  • The partial object identifying unit 44a identifies the first partial object OB1 from the point cloud indicated by one of the divided point cloud data of interest, and identifies the second partial object OB2 from the point cloud indicated by the other of the divided point cloud data of interest.
  • FIG. 10 shows a first partial object OB1 and a second partial object OB2 specified by the partial object specifying unit 44a.
  • the first partial object OB1 has a shape including a portion extending around the central axis O1.
  • the second partial object OB2 has a shape including a portion extending around the central axis O2.
  • The central axis O1 and the central axis O2 are on the same straight line. That is, the first partial object OB1 and the second partial object OB2 shown in FIG. 10 have the same extension direction, and one is arranged on the extension line of the other.
  • the pipe P includes a branch portion B1.
  • The partial object specifying unit 44a specifies the 2-1st partial object OB21 and the 2-2nd partial object OB22, as shown in FIG.
  • the 2-2nd partial object OB22 is continuous at its end with the outer surface of the 2-1st partial object OB21.
  • the generation unit 44b generates one virtual object OB based on the first partial object OB1 and the second partial object OB2 (the 2-1st partial object OB21 and the 2-2nd partial object OB22). That is, the generation unit 44b generates one virtual object OB shown in FIG. 9 based on the first partial object OB1 and the second partial object OB2 shown in FIG.
  • When it is determined that the extending directions of a plurality of partial objects are the same and one is arranged on the extension line of the other, the generation unit 44b generates a single virtual object based on the plurality of partial objects. Further, when an end of one of a plurality of partial objects is continuous with the outer surface of another, the generation unit 44b generates one virtual object based on the plurality of partial objects. By adopting such a configuration, it is possible to accurately generate a virtual object OB representing the piping.
  • the T-shaped branched portion B1 is taken as an example, but the branched portion is not limited to this, and the branched portion may be Y-shaped or the like.
  • the edge determination unit 44c determines the edge of the virtual object according to the change in the shape of the virtual object.
  • the pipe P is connected to a separate pipe P1 via a flange F that is a joint.
  • the flange F is a portion having a larger diameter than the pipe P.
  • Therefore, a portion having a diameter different from that of the pipe P appears partway along the piping.
  • In this case, the end determination unit 44c determines the portion adjacent to the flange F as the end of the pipe P.
  • the virtual object generator 44 determines the end E of the virtual object OB as shown in FIG.
  • the portion that causes the shape of the virtual object to change is not limited to the flange F, and may be any portion that has a shape different from that of the pipe, such as a valve that controls the flow rate.
  • FIG. 11 is a diagram showing a screen on which a virtual object is displayed together with the first captured image.
  • FIG. 12 is a diagram showing a screen on which a virtual object is displayed together with the second captured image.
  • FIG. 11 shows a display screen on which the first captured image shown in FIG. 6 and the virtual object shown in FIG. 10 are displayed.
  • FIG. 12 shows a display screen on which the virtual object shown in FIG. 10 is displayed together with the second captured image shown in FIG.
  • hatched portions represent virtual objects OB.
  • the virtual object OB is shown slightly larger than the contour of the pipe shown in the captured image, but in reality, they should be approximately the same size.
  • a virtual object is displayed superimposed on the selected pipe.
  • the virtual object may preferably be composed of polygons.
  • a virtual object composed of polygons may be displayed as, for example, a colored transparent object.
  • the virtual object may be displayed as a blue and transparent object.
  • the user can recognize a specific pipe on the screen by displaying a virtual object OB superimposed on one pipe represented in the captured image.
  • the virtual object OB may be similarly superimposed and displayed on pipes other than the pipes shown in FIGS. 11 and 12 . Then, it is preferable that the pipes on which the virtual object OB is superimposed and displayed are switched according to the user's input.
  • FIG. 11 shows how a mark M is displayed to indicate that a predetermined attribute has been assigned by the attribute assigning unit 46 .
  • When the mark M is selected based on the user's input on the display screen, information regarding the attribute associated with the mark M may be displayed.
  • FIG. 11 shows how the characters "management point" are displayed as the information about the attribute associated with the mark M.
  • the user can grasp in advance on the display screen which part of the pipe should be focused on when performing maintenance or inspection.
  • FIGS. 11 and 12 show examples in which a list of recognized objects is displayed.
  • FIGS. 11 and 12 show that virtual objects representing pipes with pipe IDs "001" to "005" among the structures displayed on the screen can be displayed. They also indicate that virtual objects representing devices with device IDs "011" to "012" among the structures displayed on the screen can be displayed.
  • FIGS. 11 and 12 show that the pipe with the pipe ID “002” is selected and the virtual object OB representing the pipe is displayed.
  • the illustration of structures other than the pipe with the pipe ID "002" is partially omitted.
  • the structure display system 100 can accurately generate virtual objects. However, since plant equipment has a complicated configuration, it is not always the case that a virtual object corresponding to a real object is generated. Therefore, in this embodiment, a function of correcting a virtual object based on user input is employed. Specifically, the structure display system 100 includes a virtual object correction section 47 .
  • The virtual object correction unit 47 may provide, for example, an extension function, a division function, a merging function, and a deletion function. For example, when the user selects any one of "trace and recognize", "divide", "merge", and "delete" shown in FIGS. 11 and 12, the corresponding function may be exhibited. The selection by the user is preferably performed by the user's input on the display screen.
  • the virtual object correction unit 47 may extend the virtual object generated by the virtual object generation unit 44 based on the user's input on the display screen. As a result, the edge of the virtual object that was not generated by the virtual object generator 44 can be represented appropriately.
  • the user's input in the correction mode in which the extension function is exhibited should be an input operation that conforms to the shape of the pipe.
  • the virtual object may extend as the cursor displayed on the display screen is moved along the shape of the pipe.
  • the virtual object may extend as the user traces the shape of the pipe with a finger on the touch panel.
  • a mode in which a function of giving a predetermined attribute by the attribute assigning unit 46 is exhibited may be activated. That is, a predetermined attribute may be given to a region in which the user has performed an input action along the shape of the pipe. This makes it possible to give a predetermined attribute to an area having a certain length.
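  • As an illustrative assumption of how an input operation along the pipe's shape could drive the extension function or attribute assignment, the sketch below back-projects the traced screen positions using the depth values of the captured image to obtain 3D points along the trace; the interfaces are hypothetical.

```python
import numpy as np

def extend_along_trace(trace_uv: np.ndarray, depth: np.ndarray,
                       fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Convert a traced screen path into extra centerline points for a virtual object.

    trace_uv: (K, 2) pixel coordinates sampled while the user traces the pipe.
    depth:    (H, W) depth image aligned with the displayed captured image.
    Returns (M, 3) points along the trace that can be appended to the virtual
    object's centerline; samples without valid depth are skipped.
    """
    h, w = depth.shape
    points = []
    for u, v in np.round(trace_uv).astype(int):
        if 0 <= u < w and 0 <= v < h and depth[v, u] > 0:
            z = depth[v, u]
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return np.asarray(points)
```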
  • the virtual object correction unit 47 may divide the virtual object generated by the virtual object generation unit 44 based on the user's input on the display screen. As a result, one virtual object that has been erroneously generated as one pipe by the virtual object generator 44 can be expressed as a plurality of virtual objects.
  • the virtual object correction unit 47 may connect the plurality of virtual objects generated by the virtual object generation unit 44 based on the user's input on the display screen. Thereby, a plurality of virtual objects erroneously generated by the virtual object generation unit 44 can be represented as one virtual object.
  • the virtual object correction unit 47 preferably deletes part or all of the virtual object generated by the virtual object generation unit 44 based on the user's input on the display screen.
  • the virtual object can be expressed by deleting part or all of the virtual object that was erroneously generated by the virtual object generation unit 44 .
  • FIG. 13 is a flow chart showing the processing flow in the user terminal of this embodiment.
  • FIG. 14 is a flow chart showing the processing flow of virtual object generation according to this embodiment.
  • the acquisition unit 41 acquires a photographed image of a real space in which pipes, which are extending structures, are arranged, and point cloud data in a virtual three-dimensional space corresponding to the real object appearing in the photographed image ( S1). Further, the recognizing unit 42 recognizes the pipe region in which the pipe appears in the photographed image using the learned model 48c (S2).
  • the selection unit 43 selects point cloud data of interest related to the structure from the point cloud data based on the pipe region in which the pipe is recognized in the captured image (S3).
  • the virtual object generation unit 44 generates a virtual object representing the pipe in the virtual three-dimensional space based on the target point cloud data (S4).
  • the edge determination unit 44c determines the edge of the virtual object according to the change in the shape of the virtual object (S41).
  • the partial object specifying unit 44a specifies a partial object related to piping based on the target point cloud data (S42).
  • the generation unit 44b generates one virtual object based on the plurality of partial objects (S43).
  • a virtual object is generated through the procedures of S41 to S43 described above.
  • The display unit 45 displays the virtual object generated by the virtual object generation unit 44 together with the captured image on the display 18 (S5). After that, the virtual object may be corrected based on the user's input on the display screen.
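  • The S1 to S5 flow above could be orchestrated as in the sketch below, where each argument stands for one of the functional units 41 to 47 of FIG. 3; the object interfaces and method names are assumptions made for illustration, not part of the application.

```python
def display_structure(acquisition, recognition, selection, generation, display,
                      correction=None, user_input=None):
    """Sketch of the S1-S5 flow of FIG. 13 using the functional units of FIG. 3."""
    image, cloud = acquisition.acquire()                       # S1: image + point cloud
    pipe_region = recognition.recognize(image)                 # S2: trained-model recognition
    cloud_of_interest = selection.select(cloud, pipe_region)   # S3: point cloud of interest
    virtual_object = generation.generate(cloud_of_interest)    # S4: (S41-S43 inside)
    display.show(image, virtual_object)                        # S5: superimposed display
    if correction is not None and user_input is not None:
        virtual_object = correction.apply(virtual_object, user_input)
        display.show(image, virtual_object)
    return virtual_object
```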
  • A configuration may be adopted in which an information processing device capable of processing a large amount of data at higher speed than the user terminal 10 performs the processing from the acquisition of captured images and point cloud data to the generation of virtual objects.
  • the user terminal 10 acquires and displays the captured image and information of the virtual object from the information processing device, and receives correction processing and data input processing of the virtual object as shown in FIGS. 11 and 12 . With such a system configuration, even the user terminal 10 with relatively low data processing capabilities can sufficiently perform certain functions.
  • a virtual object representing a pipe can be generated with high accuracy.
  • Further, a virtual object corresponding to the shape of the piping can be generated with high accuracy. Since the virtual object can be generated with high accuracy in this way, the piping can be identified and displayed with high accuracy.
  • Moreover, by adopting a function of correcting the virtual object according to the user's input on the display screen after the virtual object generation unit 44 generates the virtual object, the accuracy of the shape of the virtual object can be further improved.
  • the structure display system 100 may generate a virtual object representing a structure having a three-dimensional shape. Also, the structure display system 100 may display facilities other than plant facilities.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Provided are a structure display system (100), a structure display method, and a program that accurately generate a virtual object representing a structure. The structure display system (100) comprises: an acquisition unit (41) that acquires a captured image of a real space in which a structure is disposed, and three-dimensional position data in a virtual three-dimensional space corresponding to real objects appearing in the captured image; a recognition unit (42) that uses a trained model (48c) to recognize the area of the captured image where the structure appears; a selection unit (43) that selects, from the three-dimensional position data, three-dimensional position data of interest associated with the structure on the basis of the area of the captured image recognized by the recognition unit (42); a virtual object generation unit (44) that generates, in the virtual three-dimensional space, a virtual object (OB) representing the structure on the basis of the three-dimensional position data of interest; and a display unit (45) that displays at least a part of the captured image together with the virtual object (OB).

Description

STRUCTURE DISPLAY SYSTEM, STRUCTURE DISPLAY METHOD, AND PROGRAM
 The present invention relates to a structure display system, a structure display method, and a program.
 Patent Document 1 discloses a technique for creating an attributed 3D model of an object consisting of a plurality of interconnected components based on point cloud data on the surface of the object. Patent Document 2 discloses a technique of associating a plurality of image data and point cloud data via a common three-dimensional coordinate system and processing attribute information in association with the point cloud data.
 Patent Document 1: International Publication No. 2016/088553; Patent Document 2: JP 2017-102742 A
 Here, there is a demand for a technique for displaying a specific structure in an identifiable manner so that the structure can be focused on in a virtual three-dimensional space. For this purpose, it is preferable to generate a virtual object representing the structure based on three-dimensional position data, such as point cloud data, corresponding to the structure appearing in an image, and to display the virtual object together with the image. However, when a plurality of different structures appear in an image, it is difficult to accurately extract the three-dimensional position data corresponding to a structure appearing in the image. That is, it is difficult to accurately generate a virtual object representing the structure.
 In view of the above problems, an object of the present invention is to provide a structure display system, a structure display method, and a program that accurately generate a virtual object representing a structure.
 The invention disclosed in this application to solve the above problems has various aspects, and the outlines of representative aspects are as follows.
 (1) A structure display system comprising: acquisition means for acquiring a captured image of a real space in which a structure is arranged, and three-dimensional position data in a virtual three-dimensional space corresponding to real objects appearing in the captured image; recognition means for recognizing, using a trained model, a region in which the structure appears in the captured image; selection means for selecting, from the three-dimensional position data, three-dimensional position data of interest relating to the structure based on the region recognized in the captured image; virtual object generation means for generating a virtual object representing the structure in the virtual three-dimensional space based on the three-dimensional position data of interest; and display means for displaying at least part of the captured image together with the virtual object.
 (2) The structure display system according to (1), wherein the acquisition means acquires a first captured image obtained by photographing the real space from a first position, first three-dimensional position data corresponding to real objects appearing in the first captured image, a second captured image obtained by photographing the real space from a second position, and second three-dimensional position data corresponding to real objects appearing in the second captured image; the selection means selects first three-dimensional position data of interest relating to the structure from the first three-dimensional position data based on the region recognized in the first captured image, and selects second three-dimensional position data of interest relating to the structure from the second three-dimensional position data based on the region recognized in the second captured image; and the virtual object generation means generates one virtual object based on the first three-dimensional position data of interest and the second three-dimensional position data of interest when it is determined, based on their position information in the virtual three-dimensional space, that they relate to the same structure.
 (3) The structure display system according to (1), wherein the virtual object generation means includes: partial object identification means for identifying first and second partial objects relating to the structure based on the three-dimensional position data of interest; and generation means for generating one virtual object based on the first and second partial objects when it is determined that the extension directions of the first and second partial objects match and one is arranged on the extension line of the other.
 (4) The structure display system according to any one of (1) to (3), wherein the virtual object generation means includes: partial object identification means for identifying first and second partial objects relating to the structure based on the three-dimensional position data of interest; and generation means for generating one virtual object based on the first and second partial objects when an end of one of the first and second partial objects is continuous with the outer surface of the other.
 (5) The structure display system according to any one of (1) to (4), wherein the virtual object generation means includes end determination means for determining an end of the virtual object according to a change in the shape of the virtual object when generating the virtual object based on the three-dimensional position data of interest.
 (6) The structure display system according to any one of (1) to (5), further comprising virtual object correction means for correcting the virtual object generated by the virtual object generation means based on a user's input on a display screen.
 (7) The structure display system according to (6), wherein the virtual object correction means divides the virtual object generated by the virtual object generation means based on the user's input on the display screen.
 (8) The structure display system according to (6) or (7), wherein the virtual object correction means extends the virtual object generated by the virtual object generation means based on the user's input on the display screen.
 (9) The structure display system according to any one of (6) to (8), wherein the virtual object correction means deletes at least part of the virtual object generated by the virtual object generation means based on the user's input on the display screen.
 (10) The structure display system according to any one of (6) to (9), wherein the virtual object correction means connects a plurality of virtual objects generated by the virtual object generation means based on the user's input on the display screen.
 (11) The structure display system according to any one of (6) to (10), wherein the user's input on the display screen is an input operation along the shape of the structure.
 (12) The structure display system according to any one of (1) to (11), further comprising attribute assignment means for assigning a predetermined attribute to part of the virtual object, wherein the display means identifiably displays the part to which the predetermined attribute has been assigned.
 (13) The structure display system according to (12), wherein the display means displays the part to which the predetermined attribute has been assigned in a distinguishing color.
 (14) The structure display system according to any one of (1) to (13), wherein the structure is a pipe that is provided in plant equipment and through which a fluid flows.
 (15) A structure display method comprising the steps of: acquiring a captured image of a real space in which a structure is arranged, and three-dimensional position data in a virtual three-dimensional space corresponding to real objects appearing in the captured image; recognizing, using a trained model, a region in which the structure appears in the captured image; selecting, from the three-dimensional position data, three-dimensional position data of interest relating to the structure based on the region recognized in the captured image; generating a virtual object representing the structure in the virtual three-dimensional space based on the three-dimensional position data of interest; and displaying at least part of the captured image together with the virtual object.
 (16) A program causing a computer to execute: a procedure of acquiring a captured image of a real space in which a structure is arranged, and three-dimensional position data in a virtual three-dimensional space corresponding to real objects appearing in the captured image; a procedure of recognizing, using a trained model, a region in which the structure appears in the captured image; a procedure of selecting, from the three-dimensional position data, three-dimensional position data of interest relating to the structure based on the region recognized in the captured image; a procedure of generating a virtual object representing the structure in the virtual three-dimensional space based on the three-dimensional position data of interest; and a procedure of displaying at least part of the captured image together with the virtual object.
 According to aspects (1) to (16) above, a virtual object representing a structure can be generated with high accuracy.
BRIEF DESCRIPTION OF THE DRAWINGS
 FIG. 1 is a diagram showing an example of the overall configuration of a structure display system according to this embodiment.
 FIG. 2 is a diagram showing an example of the physical configuration of a user terminal in this embodiment.
 FIG. 3 is a functional block diagram showing an example of functions implemented by the user terminal.
 FIG. 4 is a diagram explaining the selection of point cloud data of interest in this embodiment.
 FIG. 5 is a diagram schematically showing one pipe arranged in real space and imaging devices arranged at two different imaging positions.
 FIG. 6 is a diagram showing a first captured image captured by the imaging device at the first position.
 FIG. 7 is a diagram showing a second captured image captured by the imaging device at the second position.
 FIG. 8 is a diagram showing an example of a point cloud indicated by point cloud data of interest.
 FIG. 9 is a diagram showing an example of a virtual object.
 FIG. 10 is a diagram showing an example of a partial object.
 FIG. 11 is a diagram showing a screen on which a virtual object is displayed together with the first captured image.
 FIG. 12 is a diagram showing a screen on which a virtual object is displayed together with the second captured image.
 FIG. 13 is a flowchart showing the processing flow in the user terminal of this embodiment.
 FIG. 14 is a flowchart showing the processing flow of virtual object generation according to this embodiment.
 以下に、本発明の実施形態(以下、本実施形態ともいう)について、図面を参照しつつ説明する。なお、開示はあくまで一例にすぎず、当業者において、発明の主旨を保っての適宜変更について容易に想到し得るものについては、当然に本発明の範囲に含有されるものである。また、図面は説明をより明確にするため、実際の態様に比べ、各部の幅、厚さ、形状等について模式的に表される場合があるが、あくまで一例であって、本発明の解釈を限定するものではない。また、本明細書と各図において、既出の図に関して前述したものと同様の要素には、同一の符号を付して、詳細な説明を適宜省略することがある。 An embodiment of the present invention (hereinafter also referred to as the present embodiment) will be described below with reference to the drawings. The disclosure is merely an example, and appropriate modifications that a person skilled in the art could easily conceive of while maintaining the gist of the invention are naturally included within the scope of the present invention. In addition, in order to make the description clearer, the drawings may schematically show the width, thickness, shape, and the like of each part compared to the actual embodiment, but these are merely examples and do not limit the interpretation of the present invention. In this specification and the drawings, elements similar to those described above with reference to earlier figures are given the same reference numerals, and detailed description thereof may be omitted as appropriate.
[構造物表示システム100の概要]
 図1は、本実施形態に係る構造物表示システムの全体構成の一例を示す図である。
[Overview of structure display system 100]
FIG. 1 is a diagram showing an example of the overall configuration of a structure display system according to this embodiment.
 本実施形態に係る構造物表示システム100は、ユーザがプラント設備を網羅的かつ俯瞰的に確認することを可能とする表示システムである。構造物表示システム100は、延在構造物である配管が配置されたプラント設備の撮影画像と共に、仮想三次元空間における配管を示す仮想オブジェクトを表示する。撮影画像と重畳して配管を示す仮想オブジェクトが表示されることにより、ユーザは、表示画面上において特定の配管に注目することができる。そのため、ユーザは、プラント設備において保守や点検等が必要な配管が配置される場所を表示画面上で迅速に認識することが可能となる。ユーザは、表示画面上で保守や点検等が必要な配管を認識した上で実際のプラント設備内に出向き、当該配管に対して保守や点検等を行うとよい。 The structure display system 100 according to the present embodiment is a display system that enables the user to comprehensively and bird's-eye view the plant equipment. The structure display system 100 displays a captured image of plant equipment in which pipes, which are extension structures, are arranged, and a virtual object representing the pipes in a virtual three-dimensional space. By displaying the virtual object representing the pipe superimposed on the captured image, the user can focus on the specific pipe on the display screen. Therefore, the user can quickly recognize on the display screen the place where the piping requiring maintenance, inspection, or the like is arranged in the plant equipment. After recognizing the piping requiring maintenance or inspection on the display screen, the user should go to the actual plant facility and perform maintenance or inspection on the piping.
 なお、プラント設備は、例えば、化学プラント、火力発電プラント、原子力発電プラントなど、多種多様な構造物から構成される設備であるとよい。配管は、特定の方向に延在すると共に、その内部を流体が流れる円筒状の構造物である。 It should be noted that the plant equipment may be equipment composed of a wide variety of structures, such as chemical plants, thermal power plants, and nuclear power plants. A pipe is a cylindrical structure that extends in a particular direction and has a fluid flowing through it.
 図1に示すように、構造物表示システム100は、ユーザ端末10、撮影装置20、点群認識システム30を含む。ユーザ端末10、撮影装置20、点群認識システム30のそれぞれは、インターネット等のネットワークNに接続可能である。 As shown in FIG. 1, the structure display system 100 includes a user terminal 10, an imaging device 20, and a point cloud recognition system 30. Each of the user terminal 10, the imaging device 20, and the point cloud recognition system 30 can be connected to a network N such as the Internet.
 撮影装置20は、例えば、赤外線測距センサを備えており、色情報と深さ情報を含む撮影画像を生成可能なカメラである。すなわち、撮影装置20で撮影された撮影画像に含まれる画素それぞれには、色情報であるRGB値に加えて、撮影装置20から撮影対象までの距離を示す値が付与されている。なお、赤外線測距センサが、撮影装置20とは独立して設けられ、撮影画像とは別に、撮影対象までの距離、つまり三次元空間における位置を示すデータを出力する構成であってもよい。 The imaging device 20 is, for example, a camera that includes an infrared ranging sensor and can generate a captured image that includes color information and depth information. That is, each pixel included in the captured image captured by the image capturing device 20 is given a value indicating the distance from the image capturing device 20 to the target in addition to the RGB values, which are color information. The infrared ranging sensor may be provided independently of the photographing device 20, and may output data indicating the distance to the photographed object, that is, the position in the three-dimensional space, separately from the photographed image.
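Where such color-plus-depth output is available, each depth pixel can be converted into a point of the three-dimensional position data. The following is a minimal sketch of that conversion, assuming a simple pinhole camera model and hypothetical intrinsic parameters fx, fy, cx, cy; the actual projection model of the photographing device 20 may differ.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a depth image (meters) into an N x 3 point cloud.

    Assumes a pinhole camera: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    Pixels with zero depth (no measurement) are discarded.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    valid = z > 0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)

# Usage with a synthetic 4x4 depth map and made-up intrinsics.
depth = np.full((4, 4), 2.0)            # every pixel measured at 2 m
pts = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(pts.shape)                         # (16, 3)
```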
 撮影装置20は、例えば、撮影角度を上下方向及び左右方向で変更可能とするパン・チルト機構を備えており、少なくとも横長又は縦長の部分を含むパノラマ画像を生成する全方位カメラであるとよい。ただし、これに限らず、撮影装置20は、広角レンズを備える広角カメラや、魚眼レンズを備える魚眼カメラであってもよい。なお、本実施形態においては、説明や図示が煩雑になることを避けるため、撮影画像として、パノラマ画像ではなく、標準的なディスプレイに表示可能な縦横比である画像を用いて説明を行うこととする(図4(a)、図4(b)、図6、図7、図11、及び図12参照)。 The imaging device 20 may be, for example, an omnidirectional camera that includes a pan/tilt mechanism that allows the imaging angle to be changed in the vertical and horizontal directions, and that generates a panoramic image that includes at least a horizontally or vertically elongated portion. However, the imaging device 20 is not limited to this, and may be a wide-angle camera with a wide-angle lens or a fish-eye camera with a fish-eye lens. In this embodiment, in order to avoid complicating the description and illustrations, the captured image is not a panoramic image but an image having an aspect ratio that can be displayed on a standard display. (see FIGS. 4(a), 4(b), 6, 7, 11 and 12).
 撮影装置20で撮影された撮影画像は、ネットワークNを介して点群認識システム30に送られる。点群認識システム30は、配管が配置された実空間の撮影画像と、撮影画像に表れた実オブジェクトに対応する、仮想三次元空間における点群データとを記憶する。また、点群認識システム30は、仮想三次元空間において表される、点群データが示す点群を認識する。 A photographed image photographed by the photographing device 20 is sent to the point cloud recognition system 30 via the network N. The point cloud recognition system 30 stores a photographed image of the real space in which the pipes are arranged, and point cloud data in the virtual three-dimensional space corresponding to the real object appearing in the photographed image. Also, the point cloud recognition system 30 recognizes a point cloud indicated by point cloud data represented in a virtual three-dimensional space.
 本実施形態の説明では、実オブジェクトに対応する、仮想三次元空間における三次元位置データとして点群データを使用しているが、これに限定されない。例えば、点群データから生成されるメッシュデータを用いることが可能である。メッシュデータは、点群データよりもデータ容量が少ないため、データのハンドリングが容易であり、三次元位置データの処理を高速にすることができる。 In the description of this embodiment, point cloud data is used as the three-dimensional position data in the virtual three-dimensional space corresponding to the real object, but the present invention is not limited to this. For example, it is possible to use mesh data generated from point cloud data. Since the mesh data has a smaller data capacity than the point cloud data, the handling of the data is easy, and the three-dimensional position data can be processed at high speed.
 なお、図1においては、ユーザ端末10、撮影装置20、及び点群認識システム30をそれぞれ1つずつ示しているが、これらは複数あってもよい。また、撮影装置20が、点群認識システム30が備える機能を含んでいてもよい。なお、点群認識システム30は、クラウド上に仮想的に作られたクラウドサーバ等の仮想サーバで構成されてもよい。 Although one user terminal 10, one imaging device 20, and one point cloud recognition system 30 are shown in FIG. 1, there may be a plurality of these. Moreover, the imaging device 20 may include the functions provided by the point cloud recognition system 30 . Note that the point group recognition system 30 may be configured by a virtual server such as a cloud server that is virtually created on the cloud.
[ユーザ端末10の物理構成の一例]
 図2は、本実施形態におけるユーザ端末の物理構成の一例を示す図である。ユーザ端末10は、ユーザが操作するコンピュータである。ユーザ端末10は、例えば、パーソナルコンピュータ、タブレット端末、携帯電話機(スマートフォンを含む)、ウェアラブル端末などのコンピュータである。図2に示すように、ユーザ端末10には、例えば、プロセッサ12、メモリ14、通信部16、ディスプレイ18、操作部19が含まれる。
[Example of physical configuration of user terminal 10]
FIG. 2 is a diagram showing an example of the physical configuration of a user terminal in this embodiment. The user terminal 10 is a computer operated by a user. The user terminal 10 is, for example, a computer such as a personal computer, a tablet terminal, a mobile phone (including a smart phone), or a wearable terminal. As shown in FIG. 2, the user terminal 10 includes a processor 12, a memory 14, a communication section 16, a display 18, and an operation section 19, for example.
 プロセッサ12は、例えば、ユーザ端末10にインストールされるプログラムに従って動作するCPU等のプログラム制御デバイスである。 The processor 12 is, for example, a program control device such as a CPU that operates according to a program installed in the user terminal 10.
 メモリ14は、例えば、ROMやRAM等の記憶素子やハードディスクドライブ等である。メモリ14には、プロセッサ12によって実行されるプログラム等が記憶される。 The memory 14 is, for example, a storage element such as ROM or RAM, a hard disk drive, or the like. The memory 14 stores programs and the like executed by the processor 12 .
 通信部16は、例えば、有線通信又は無線通信用の通信インターフェイスであり、ネットワークNを介してデータ通信を行う。 The communication unit 16 is, for example, a communication interface for wired communication or wireless communication, and performs data communication via the network N.
 ディスプレイ18は、例えば液晶ディスプレイ等の表示デバイスであって、プロセッサ12の指示に従って各種の画像を表示する。ディスプレイ18は、ユーザが頭部に装着可能なヘッドマウントディスプレイであってもよい。ヘッドマウントディスプレイを装着したユーザは、プラント設備内を歩くことを疑似的に体験することができる。 The display 18 is a display device such as a liquid crystal display, and displays various images according to instructions from the processor 12 . The display 18 may be a head-mounted display that the user can wear on his or her head. A user wearing a head-mounted display can simulate walking in plant equipment.
 操作部19は、例えばキーボード、マウス、タッチパネルなどといったユーザインターフェイスであって、ユーザの操作入力を受け付けて、その内容を示す信号をプロセッサ12に出力する。 The operation unit 19 is a user interface such as a keyboard, a mouse, a touch panel, or the like, and receives user's operation input and outputs a signal indicating the content of the input to the processor 12 .
 なお、メモリ14に記憶されるものとして説明するプログラム及びデータは、ネットワークNを介して供給されるようにしてもよい。また、上記説明した各コンピュータのハードウェア構成は、上記の例に限られず、種々のハードウェアを適用可能である。例えば、コンピュータ読み取り可能な情報記憶媒体を読み取る読取部(例えば、光ディスクドライブやメモリカードスロット)や外部機器とデータの入出力をするための入出力部(例えば、USBポート)が含まれていてもよい。例えば、情報記憶媒体に記憶されたプログラムやデータが読取部や入出力部を介してコンピュータに供給されるようにしてもよい。 The programs and data described as being stored in the memory 14 may be supplied via the network N. Moreover, the hardware configuration of each computer described above is not limited to the above example, and various hardware can be applied. For example, a reading unit (e.g., an optical disk drive or a memory card slot) for reading a computer-readable information storage medium and an input/output unit (e.g., a USB port) for inputting and outputting data to and from an external device may be included. For example, programs and data stored in an information storage medium may be supplied to the computer via the reading unit or the input/output unit.
[ユーザ端末10で実現される機能]
 図3は、ユーザ端末で実現される機能の一例を示す機能ブロック図である。図3に示すように、ユーザ端末10では、取得部41、認識部42、選択部43、仮想オブジェクト生成部44、表示部45、属性付与部46、仮想オブジェクト補正部47、及び記憶部48が実現される。取得部41、認識部42、選択部43、仮想オブジェクト生成部44、属性付与部46、及び仮想オブジェクト補正部47は、プロセッサ12を主として実現される。記憶部48は、メモリ14を主として実現される。これら各機能は、本実施形態に係るプログラムをコンピュータが実行することで実現される。このプログラムはコンピュータ可読情報記憶媒体に格納されていてもよい。
[Functions Realized by User Terminal 10]
FIG. 3 is a functional block diagram showing an example of functions implemented by the user terminal. As shown in FIG. 3, an acquisition unit 41, a recognition unit 42, a selection unit 43, a virtual object generation unit 44, a display unit 45, an attribute assignment unit 46, a virtual object correction unit 47, and a storage unit 48 are realized in the user terminal 10. The acquisition unit 41, the recognition unit 42, the selection unit 43, the virtual object generation unit 44, the attribute assignment unit 46, and the virtual object correction unit 47 are realized mainly by the processor 12. The storage unit 48 is realized mainly by the memory 14. Each of these functions is realized by a computer executing the program according to the present embodiment. This program may be stored in a computer-readable information storage medium.
[ユーザ端末10で実現される機能:取得部41]
 取得部41は、延在構造物である配管が配置された実空間の撮影画像と、撮影画像に表れた実オブジェクトに対応する、仮想三次元空間における点群データと、を取得する。これら取得部41により取得される情報は、ネットワークNを介して点群認識システム30からユーザ端末10に送られ、記憶部48に記憶されているとよい。すなわち、取得部41は、記憶部48から撮影画像と点群データを読み出すことで、それらを取得するとよい。
[Function realized by user terminal 10: acquisition unit 41]
The acquisition unit 41 acquires a photographed image of a real space in which pipes, which are extending structures, are arranged, and point cloud data in a virtual three-dimensional space corresponding to the real object appearing in the photographed image. The information acquired by the acquisition unit 41 may be sent from the point group recognition system 30 to the user terminal 10 via the network N and stored in the storage unit 48 . That is, the acquisition unit 41 may acquire the captured image and the point cloud data by reading them from the storage unit 48 .
[ユーザ端末10で実現される機能:認識部42]
 認識部42は、学習済みモデル48cにより、取得部41により取得された撮影画像における配管が表れた領域を認識する。学習済みモデル48cは、予め撮影された配管が表示される複数の撮影画像が教師画像として機械学習されることで生成されると共に、記憶部48に予め記憶されている。
[Function realized by user terminal 10: recognition unit 42]
The recognition unit 42 recognizes a region in which the pipe appears in the photographed image acquired by the acquisition unit 41, using the learned model 48c. The learned model 48 c is generated by performing machine learning on a plurality of pre-photographed images showing pipes as teacher images, and is pre-stored in the storage unit 48 .
 なお、学習済みモデル48cは、既知の機械学習アルゴリズムを用いて学習されたものであればよく、例えば、教師画像に含まれる特徴が多層構造のニューラルネットワークの各層で自動的に学習されたものであるとよい。具体的には、学習済みモデル48cのアルゴリズムは、例えば、教師画像に閾値を定めて各画素をラベル分けし、ラベル分けされた画像を教師データとしてセマンティックセグメンテーションにより分類されるようにするとよい。ただし、これに限られず、インスタンスセグメンテーション等の既知のアルゴリズムを用いてもよい。 The learned model 48c may be one trained using a known machine learning algorithm; for example, the features included in the teacher images may be learned automatically in each layer of a multi-layered neural network. Specifically, the algorithm of the learned model 48c may, for example, set a threshold for the teacher images, label each pixel, and use the labeled images as teacher data so that classification is performed by semantic segmentation. However, it is not limited to this, and a known algorithm such as instance segmentation may be used.
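As an illustration only, the sketch below turns per-pixel class scores, as produced by an arbitrary semantic segmentation network, into a binary pipe-region mask by taking the arg-max class per pixel; the class index assigned to pipes is a hypothetical value, not something specified in this embodiment.

```python
import numpy as np

PIPE_CLASS = 1  # hypothetical label index assigned to pipes during training

def pipe_region_mask(class_scores):
    """class_scores: (H, W, C) array of per-pixel scores from a segmentation model.

    Returns a boolean (H, W) mask that is True where the arg-max class is 'pipe'.
    """
    labels = np.argmax(class_scores, axis=-1)
    return labels == PIPE_CLASS

# Usage with random scores standing in for real network output.
scores = np.random.rand(4, 6, 3)         # 3 classes: background, pipe, other
mask = pipe_region_mask(scores)
print(mask.shape, mask.dtype)             # (4, 6) bool
```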
[ユーザ端末10で実現される機能:選択部43]
 選択部43は、認識部42により認識された撮影画像における配管が表れた領域に基づいて、仮想三次元空間における点群データから配管に係る注目点群データを選択する。なお、選択部43による注目点群データの選択は、仮想三次元空間における点群データから注目点群データを抽出することにより行ってもよいし、仮想三次元空間における点群データから注目点群データ以外の点群データを削除することにより行ってもよい。
[Function realized by user terminal 10: selection unit 43]
The selection unit 43 selects point cloud data of interest relating to the pipe from the point cloud data in the virtual three-dimensional space based on the region in which the pipe appears in the captured image, recognized by the recognition unit 42. The selection of the point cloud data of interest by the selection unit 43 may be performed by extracting the point cloud data of interest from the point cloud data in the virtual three-dimensional space, or by deleting point cloud data other than the point cloud data of interest from the point cloud data in the virtual three-dimensional space.
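A minimal sketch of this selection step, assuming that each point of the point cloud can be projected back into the captured image with known camera parameters: points whose projected pixel falls inside the recognized pipe area are kept as the point cloud data of interest, and all other points are discarded.

```python
import numpy as np

def select_points_of_interest(points, mask, fx, fy, cx, cy):
    """Keep the points whose projection lands on a True pixel of the pipe mask.

    points: (N, 3) array in the camera coordinate frame (z > 0 in front of the camera).
    mask:   (H, W) boolean pipe-region mask from the recognition step.
    """
    h, w = mask.shape
    z = points[:, 2]
    u = np.round(points[:, 0] * fx / z + cx).astype(int)
    v = np.round(points[:, 1] * fy / z + cy).astype(int)
    in_image = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    selected = np.zeros(len(points), dtype=bool)
    selected[in_image] = mask[v[in_image], u[in_image]]
    return points[selected]

# Usage: two points, only one of which projects into the pipe region.
mask = np.zeros((10, 10), dtype=bool)
mask[5, 5] = True
pts = np.array([[0.0, 0.0, 2.0], [1.0, 1.0, 2.0]])
print(select_points_of_interest(pts, mask, fx=10.0, fy=10.0, cx=5.0, cy=5.0))
```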
[ユーザ端末10で実現される機能:仮想オブジェクト生成部44]
 仮想オブジェクト生成部44は、選択部43により選択された注目点群データに基づいて、仮想三次元空間における配管を示す仮想オブジェクトを生成する。なお、仮想オブジェクトは、注目点群データが示す点群の輪郭に沿うと共に、点群に属する点を頂点とする複数の面を含むポリゴンで構成されるとよい。上述の処理は、点群データを三次元位置データとして使用している場合の処理であるが、実オブジェクトに対応するポリゴンを示すメッシュデータを三次元位置データとして使用している場合は、実空間に存在する様々な構造物のメッシュデータから配管に対応するメッシュデータを注目三次元位置データとして選択する処理となる。
[Function realized by user terminal 10: virtual object generator 44]
The virtual object generation unit 44 generates a virtual object representing the pipe in the virtual three-dimensional space based on the point cloud data of interest selected by the selection unit 43. The virtual object is preferably configured as a polygon that follows the contour of the point cloud indicated by the point cloud data of interest and that includes a plurality of faces whose vertices are points belonging to the point cloud. The above processing applies when point cloud data is used as the three-dimensional position data; when mesh data representing polygons corresponding to the real objects is used as the three-dimensional position data, the processing becomes one of selecting, as the three-dimensional position data of interest, the mesh data corresponding to the pipe from the mesh data of the various structures existing in the real space.
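The polygonization itself is not detailed here, so the sketch below uses a convex hull merely as a stand-in: it triangulates a toy point cloud into faces whose vertices are points of the cloud. A real pipe surface would more likely be recovered by cylinder fitting or another surface-reconstruction method.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Sample points roughly on the surface of a short cylinder (a toy "pipe").
theta = np.linspace(0, 2 * np.pi, 24, endpoint=False)
ring = np.stack([np.cos(theta), np.sin(theta)], axis=1)
points = np.concatenate([
    np.column_stack([ring, np.zeros(len(ring))]),   # bottom ring at z = 0
    np.column_stack([ring, np.ones(len(ring))]),    # top ring at z = 1
])

hull = ConvexHull(points)
# hull.simplices lists triangles as index triples into `points`; together they
# form a closed polygonal surface whose vertices are points of the cloud.
print(len(hull.simplices), "triangular faces")
```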
 仮想オブジェクト生成部44は、第1注目点群データと第2注目点群データの仮想三次元空間における位置情報に基づいて同一の配管に係る注目点群データであると判断される場合、第1注目点群データと第2注目点群データに基づいて一つの仮想オブジェクトを生成するとよい。ここで、第1注目点群データとは、第1位置から実空間を撮影することにより得られる第1撮影画像に表れた実オブジェクトに対応する点群データから選択された配管に係る点群データである。また、第2注目点群データとは、第1位置と異なる第2位置から実空間を撮影することにより得られる第2撮影画像に表れた実オブジェクトに対応する点群データから選択された配管に係る点群データである。 When the first point cloud data of interest and the second point cloud data of interest are judged, based on their position information in the virtual three-dimensional space, to be point cloud data of interest relating to the same pipe, the virtual object generation unit 44 preferably generates one virtual object based on the first point cloud data of interest and the second point cloud data of interest. Here, the first point cloud data of interest is point cloud data relating to the pipe selected from the point cloud data corresponding to the real objects appearing in a first captured image obtained by photographing the real space from a first position. The second point cloud data of interest is point cloud data relating to the pipe selected from the point cloud data corresponding to the real objects appearing in a second captured image obtained by photographing the real space from a second position different from the first position.
 また、図3に示すように、仮想オブジェクト生成部44は、部分オブジェクト特定部44a、生成部44b、及び端部決定部44cを含むとよい。 Also, as shown in FIG. 3, the virtual object generation unit 44 may include a partial object identification unit 44a, a generation unit 44b, and an end determination unit 44c.
 部分オブジェクト特定部44aは、注目点群データに基づいて配管に係る部分オブジェクトを特定する。 The partial object specifying unit 44a specifies a partial object related to piping based on the point cloud data of interest.
 生成部44bは、部分オブジェクト特定部44aにより特定された第1の部分オブジェクトと第2の部分オブジェクトの延伸方向が一致していると判定され、且つ一方の延長線上に他方が配置されている場合に、それら第1及び第2の部分オブジェクトに基づいて一つの仮想オブジェクトを生成する。また、生成部44bは、第1及び第2の部分オブジェクトのうち一方の端部が他方の外表面と連続する場合に、第1及び第2の部分オブジェクトに基づいて一つの仮想オブジェクトを生成する。 When it is determined that the extending directions of the first partial object and the second partial object specified by the partial object specifying unit 44a match and one of them is arranged on the extension line of the other, the generation unit 44b generates one virtual object based on the first and second partial objects. The generation unit 44b also generates one virtual object based on the first and second partial objects when an end of one of them is continuous with the outer surface of the other.
 端部決定部44cは、注目点群データに基づいて仮想オブジェクトを生成する際、仮想オブジェクトの幾何的条件に基づいて、仮想オブジェクトの端部を決定するとよい。具体的には、端部決定部44cは、仮想オブジェクトの形状の変化に応じて仮想オブジェクトの端部を決定するとよい。仮想オブジェクトの形状の変化は、例えば、円筒状の仮想オブジェクトの径の大きさの変化である。例えば、端部決定部44cは、仮想オブジェクトの径が段階的に大きくなる形状を認識した場合、その形状部分に隣接する部分を仮想オブジェクトの端部に決定するとよい。また、端部決定部44cは、途中で断面形状が変わる配管に係る注目点群データのうち一部に基づいて、配管の断面形状が一定である仮想オブジェクトを生成することとしてもよい。 When generating a virtual object based on the point cloud data of interest, the edge determining unit 44c may determine the edge of the virtual object based on the geometric conditions of the virtual object. Specifically, the edge determining unit 44c may determine the edge of the virtual object according to the change in the shape of the virtual object. A change in the shape of the virtual object is, for example, a change in the diameter of the cylindrical virtual object. For example, when the end determining unit 44c recognizes a shape in which the diameter of the virtual object increases step by step, the end determination unit 44c may determine a portion adjacent to the shape portion as the end of the virtual object. Further, the end determining unit 44c may generate a virtual object having a constant cross-sectional shape of the pipe based on part of the target point cloud data relating to the pipe whose cross-sectional shape changes midway.
[ユーザ端末10で実現される機能:表示部45]
 表示部45は、撮影装置20から配管を見た様子を表す撮影画像の一部を、ディスプレイ18に表示する。また、表示部45は、仮想オブジェクト生成部44により生成された仮想オブジェクトを、撮影画像に重畳してディスプレイ18に表示する。
[Function realized by user terminal 10: display unit 45]
The display unit 45 displays, on the display 18, a part of the photographed image showing how the piping is viewed from the photographing device 20. In addition, the display unit 45 displays the virtual object generated by the virtual object generation unit 44 on the display 18 by superimposing it on the captured image.
 また、表示部45は、後述の属性付与部46により所定の属性が付与された仮想オブジェクトの一部を識別表示するとよい。例えば、表示部45は、属性付与部46により所定の属性が付与された仮想オブジェクトの一部を色分けして識別表示するとよい。 Also, the display unit 45 may identify and display a part of the virtual object to which a predetermined attribute has been assigned by the attribute assigning unit 46, which will be described later. For example, the display unit 45 may display a portion of the virtual object to which the predetermined attribute has been assigned by the attribute assigning unit 46 in different colors for identification.
 また、表示部45は、属性付与部46により所定の属性が付与されたことを示す目印を表示するとよい。また、表示部45は、属性に応じて目印を識別表示するとよい。例えば、表示部45は、ある属性が付与された部分に青丸の目印を表示し、他の属性が付与された部分に赤丸の目印を表示するとよい。 Also, the display unit 45 may display a mark indicating that the attribute imparting unit 46 has imparted a predetermined attribute. Also, the display unit 45 may display the mark in an identifiable manner according to the attribute. For example, the display unit 45 may display a blue circle mark on a portion to which a certain attribute is assigned, and a red circle mark on a portion to which another attribute is assigned.
[ユーザ端末10で実現される機能:属性付与部46]
 属性付与部46は、仮想オブジェクトの一部に所定の属性を付与する。属性が付与される一部は、例えば、複数の配管のうちメンテナンス対象となる特定の配管を示す仮想オブジェクトや、一つの配管のうちメンテナンス対象となる部分に対応する仮想オブジェクトの部分であるとよい。
[Function realized by user terminal 10: attribute assigning unit 46]
The attribute assigning unit 46 assigns a predetermined attribute to part of the virtual object. The portion to which attributes are assigned may be, for example, a virtual object indicating a specific pipe to be maintained among a plurality of pipes, or a portion of a virtual object corresponding to a portion to be maintained among one pipe. .
 所定の属性は、例えば、配管のうち分岐部や継手分等の特徴的な形状、性質を有する部分に対応する仮想オブジェクトの部分に付与されるとよい。また、所定の属性は、例えば、配管のうち摩耗等により劣化しやすい部分に対応する仮想オブジェクトの部分に付与されるとよい。 Predetermined attributes may be given to portions of the virtual object that correspond to portions of piping that have characteristic shapes and properties, such as branch portions and joint portions, for example. Also, the predetermined attribute may be assigned to a portion of the virtual object corresponding to a portion of the pipe that is likely to deteriorate due to wear or the like, for example.
[ユーザ端末で実現される機能:仮想オブジェクト補正部47]
 仮想オブジェクト補正部47は、ユーザの表示画面上での入力に基づいて仮想オブジェクト生成部44によって生成された仮想オブジェクトを補正する。具体的には、例えば、仮想オブジェクト補正部47は、ユーザの表示画面上での入力に基づいて仮想オブジェクト生成部44によって生成された仮想オブジェクトを分割する。また、例えば、仮想オブジェクト補正部47は、ユーザの表示画面上での入力に基づいて仮想オブジェクト生成部44によって生成された仮想オブジェクトを延長する。また、例えば、仮想オブジェクト補正部47は、ユーザの表示画面上での入力に基づいて仮想オブジェクト生成部44によって生成された仮想オブジェクトの少なくとも一部を削除する。また、例えば、仮想オブジェクト補正部47は、ユーザの表示画面上での入力に基づいて仮想オブジェクト生成部44によって生成された複数の仮想オブジェクトを接続する。
[Function realized by user terminal: virtual object correction unit 47]
The virtual object correction unit 47 corrects the virtual object generated by the virtual object generation unit 44 based on the user's input on the display screen. Specifically, for example, the virtual object correction unit 47 divides the virtual object generated by the virtual object generation unit 44 based on the user's input on the display screen. Also, for example, the virtual object correction unit 47 extends the virtual object generated by the virtual object generation unit 44 based on the user's input on the display screen. Also, for example, the virtual object correction unit 47 deletes at least part of the virtual object generated by the virtual object generation unit 44 based on the user's input on the display screen. Also, for example, the virtual object correction unit 47 connects a plurality of virtual objects generated by the virtual object generation unit 44 based on the user's input on the display screen.
 ユーザの表示画面上での入力は、配管の形状に沿う入力動作によるものであるとよい。 The user's input on the display screen should be based on the input operation along the shape of the piping.
[ユーザ端末で実現される機能:記憶部48]
 記憶部48は、ネットワークNを介して点群認識システム30から供給された撮影画像を記憶する画像記憶部48a、撮影画像に表れた実オブジェクトに対応する仮想三次元空間における点群データを記憶する点群データ記憶部48bを含む。また、記憶部48は、予め生成された学習済みモデル38aを記憶する。
[Function realized by user terminal: storage unit 48]
The storage unit 48 includes an image storage unit 48a that stores the captured images supplied from the point cloud recognition system 30 via the network N, and a point cloud data storage unit 48b that stores the point cloud data in the virtual three-dimensional space corresponding to the real objects appearing in the captured images. The storage unit 48 also stores the pre-generated learned model 38a.
[仮想オブジェクトの生成処理の詳細]
 次に、主に図4~図10を参照して、本実施形態における仮想オブジェクトの生成処理の詳細について説明する。
[Details of virtual object generation processing]
Next, mainly referring to FIGS. 4 to 10, details of virtual object generation processing in this embodiment will be described.
 図4は、本実施形態における注目点群データの選択を説明する図である。図5は、実空間に配置された一つの配管と、異なる2カ所の撮影位置に配置される撮影装置を模式的に示す図である。図6は、第1位置にある撮影装置により撮影された第1撮影画像を示す図である。図7は、第2位置にある撮影装置により撮影された第2撮影画像を示す図である。図8は、注目点群データが示す点群の一例を表す図である。図9は、仮想オブジェクトの一例を示す図である。図10は、部分オブジェクトの一例を示す図である。 FIG. 4 is a diagram for explaining selection of target point cloud data in this embodiment. FIG. 5 is a diagram schematically showing one pipe arranged in real space and imaging devices arranged at two different imaging positions. FIG. 6 is a diagram showing a first captured image captured by the imaging device at the first position. FIG. 7 is a diagram showing a second captured image captured by the imaging device at the second position. FIG. 8 is a diagram showing an example of a point group indicated by target point group data. FIG. 9 is a diagram showing an example of a virtual object. FIG. 10 is a diagram showing an example of a partial object.
 なお、図5に示す撮影装置20から延びる2本の矢印は、第1位置と第2位置のそれぞれにおける撮影範囲を示している。上述のように撮影装置20は全方位カメラであるとよいが、説明が複雑になるのを避けるため、ここでは撮影範囲が180°未満である例を挙げて説明する。 Note that two arrows extending from the imaging device 20 shown in FIG. 5 indicate imaging ranges at the first position and the second position, respectively. As described above, the photographing device 20 is preferably an omnidirectional camera, but in order to avoid complicating the explanation, an example in which the photographing range is less than 180° will be described here.
 図6で示す第1撮影画像に対応する仮想三次元空間における点群データの座標系と、図7で示す第2撮影画像に対応する仮想三次元空間における点群データの座標系とは共通である。このように座標系を共通とすることは、例えば、撮影装置20において、第1位置に対する第2位置の相対位置が認識されることにより実現されるとよい。または、座標系を共通とすることは、例えば、第1撮影画像に対応する点群データが示す点群に基づく形状と第2撮影画像に対応する点群データが示す点群に基づく形状のいずれか一方に対して他方を、形状マッピングにより合わせ込むことにより実現されるとよい。 The coordinate system of the point cloud data in the virtual three-dimensional space corresponding to the first captured image shown in FIG. 6 and the coordinate system of the point cloud data in the virtual three-dimensional space corresponding to the second captured image shown in FIG. 7 are common. Such sharing of the coordinate system may be realized, for example, by the photographing device 20 recognizing the relative position of the second position with respect to the first position. Alternatively, the sharing of the coordinate system may be realized, for example, by fitting, through shape mapping, one of the shape based on the point cloud indicated by the point cloud data corresponding to the first captured image and the shape based on the point cloud indicated by the point cloud data corresponding to the second captured image to the other.
 本実施形態においては、図6及び図7に示す2枚の撮影画像を用いて説明するが、撮影画像は3枚以上であってもよく、それら撮影画像に対応する仮想三次元空間における点群データの座標系は互いに共通であるとよい。 In the present embodiment, the description uses the two captured images shown in FIGS. 6 and 7, but there may be three or more captured images, and the coordinate systems of the point cloud data in the virtual three-dimensional space corresponding to those captured images are preferably common to one another.
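Making the coordinate systems common amounts to knowing a rigid transform that maps the second scan into the frame of the first. The sketch below assumes that transform is already known, for example from the relative position of the two shooting positions; when it is not known, it would typically be estimated by a registration method such as ICP applied to the two point clouds.

```python
import numpy as np

def to_common_frame(points_2, R, t):
    """Map points measured from the second position into the first position's frame.

    points_2: (N, 3) points in the second scan's coordinate system.
    R: (3, 3) rotation, t: (3,) translation of the rigid transform (assumed known).
    """
    return points_2 @ R.T + t

# Usage: the second position is 2 m to the right of the first, with no rotation.
R = np.eye(3)
t = np.array([2.0, 0.0, 0.0])
pts2 = np.array([[0.0, 0.0, 1.0]])
print(to_common_frame(pts2, R, t))       # [[2. 0. 1.]]
```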
[仮想オブジェクトの生成処理の詳細:学習済みモデル48cを用いた生成処理]
 プラント設備においては、複数の配管を含む複数の構造物が複雑に配置されている。そのような設備において、配管を示す仮想オブジェクトを精度良く生成するのは難しい。
[Details of virtual object generation processing: generation processing using learned model 48c]
In plant facilities, a plurality of structures including a plurality of pipes are arranged in a complicated manner. In such facilities, it is difficult to accurately generate a virtual object representing piping.
 そこで、本実施形態においては、学習済みモデル48cを用いて、配管に係る注目点群データを選択し、注目点群データに基づいて配管を示す仮想オブジェクトを生成する構成を採用する。 Therefore, in the present embodiment, a configuration is adopted in which point cloud data related to piping is selected using the learned model 48c, and a virtual object representing piping is generated based on the point cloud data of interest.
 具体的には、図4(a)に示すような配管が表れた撮影画像が学習済みモデル48cに入力されることにより、認識部42が、配管が表れた領域(以下、配管領域ともいう)を認識する。図4(b)においては、配管領域が認識された画像、すなわち、配管領域が他の領域と区画された画像を示している。 Specifically, when a captured image in which a pipe appears, as shown in FIG. 4(a), is input to the learned model 48c, the recognition unit 42 recognizes the area in which the pipe appears (hereinafter also referred to as a pipe area). FIG. 4(b) shows an image in which the pipe area has been recognized, that is, an image in which the pipe area is partitioned from the other areas.
 また、図4(c)においては、撮影画像に表れた実オブジェクトに対応する、仮想三次元空間におけるプラント設備全体に係る点群データが示す点群を表している。選択部43は、認識部42により認識された撮影画像における配管領域に基づいて、仮想三次元空間における点群データから配管に係る注目点群データを選択する。図4(d)においては、仮想三次元空間における、配管に係る注目点群データが示す点群を表している。 In addition, FIG. 4(c) shows the point cloud indicated by the point cloud data related to the entire plant facility in the virtual three-dimensional space, corresponding to the real object appearing in the captured image. The selection unit 43 selects point cloud data of interest related to the pipe from the point cloud data in the virtual three-dimensional space based on the pipe region in the captured image recognized by the recognition unit 42 . FIG. 4D shows a point group indicated by point group data of interest related to piping in the virtual three-dimensional space.
 そして、仮想オブジェクト生成部44が、選択部43により選択された注目点群データに基づいて、仮想三次元空間における配管を示す仮想オブジェクトを生成する。 Then, the virtual object generation unit 44 generates a virtual object representing pipes in the virtual three-dimensional space based on the target point cloud data selected by the selection unit 43 .
 以上のように、学習済みモデル48cを用いて配管に係る注目点群データを選択した上で、配管を示す仮想オブジェクトを生成する構成を採用することにより、仮想オブジェクトを精度良く生成することができる。 As described above, a virtual object can be generated with high accuracy by adopting a configuration in which a virtual object representing a pipe is generated after selecting target point cloud data related to the pipe using the trained model 48c. .
[仮想オブジェクトの生成処理の詳細:複数の撮影画像を用いた生成処理]
 プラント設備内に配置される配管は、特定の方向に延在する長手の構造物である。そのため、1枚の撮影画像に一つの配管が収まり切らず、一つの配管が複数の撮影画像に跨って表される場合がある。このように、複枚の撮影画像に跨る配管においては、一つの配管であるにも関わらず、分断された複数の仮想オブジェクトが生成されてしまう場合がある。なお、一つの配管とは、例えば、その内部を同じ流体が連続的に流れる一続きの構造物であるとよい。
[Details of virtual object generation processing: generation processing using multiple captured images]
Piping installed in plant equipment is a longitudinal structure extending in a particular direction. Therefore, one pipe may not fit in one captured image, and may be represented across a plurality of captured images. In this way, in a pipe extending over a plurality of captured images, a plurality of divided virtual objects may be generated even though the pipe is one pipe. Note that one pipe may be, for example, a continuous structure in which the same fluid continuously flows.
 そこで、本実施形態においては、異なる撮影位置から撮影した複数の撮影画像に基づいて、精度良く一つの配管を示す仮想オブジェクトを生成可能な構成を採用する。 Therefore, in this embodiment, a configuration is adopted that can generate a virtual object that accurately shows one pipe based on a plurality of captured images captured from different capturing positions.
 具体的には、まず、取得部41が、第1位置から実空間を撮影することにより得られる第1撮影画像を取得する。また、取得部41は、第2位置から実空間を撮影することにより得られる第2撮影画像を取得する。 Specifically, first, the acquisition unit 41 acquires the first captured image obtained by capturing the real space from the first position. The acquisition unit 41 also acquires a second captured image obtained by capturing the real space from the second position.
 図5においては、実空間に存在する配管Pを、撮影装置20により第1位置及び第2位置から撮影する様子を示している。なお、図5においては、実空間に存在する配管P以外の構造物の図示を省略している。図6に示すように、第1位置にある撮影装置20により撮影された撮影画像には、配管Pの一部が表されている。図7に示すように、第2位置にある撮影装置20により撮影された撮影画像には、配管Pの他の一部が表されている。 FIG. 5 shows how the pipe P existing in the real space is photographed by the photographing device 20 from the first position and the second position. In addition, in FIG. 5, illustration of structures other than the pipe P existing in the real space is omitted. As shown in FIG. 6, part of the pipe P is shown in the captured image captured by the imaging device 20 at the first position. As shown in FIG. 7, another part of the pipe P is shown in the photographed image photographed by the photographing device 20 at the second position.
 また、取得部41は、第1撮影画像に表れた実オブジェクトに対応する第1点群データと、第2撮影画像に表れた実オブジェクトに対応する第2点群データを取得する。そして、選択部43は、学習済みモデル48cにより第1撮影画像において認識された配管領域に基づいて、第1点群データから配管に係る第1注目点群データを選択する。また、選択部43は、学習済みモデル48cにより第2撮影画像において認識された配管領域に基づいて、第2点群データから配管に係る第2注目点群データを選択する。図8には、第1注目点群データが示す点群PA1の一部と、第2注目点群データが示す点群PA2の一部が表されている。 The acquisition unit 41 also acquires first point cloud data corresponding to the real object appearing in the first captured image and second point cloud data corresponding to the real object appearing in the second captured image. Then, the selection unit 43 selects the first target point cloud data related to the pipe from the first point cloud data based on the pipe region recognized in the first captured image by the learned model 48c. Further, the selection unit 43 selects second target point cloud data related to piping from the second point cloud data based on the piping region recognized in the second captured image by the learned model 48c. FIG. 8 shows part of the point group PA1 indicated by the first target point cloud data and part of the point group PA2 indicated by the second target point cloud data.
 図8においては、仮想三次元空間において、第1注目点群データが示す点群PA1と第2点群データが示す点群PA2の一部は互いに重なるように配置されている様子を示している。すなわち、第1注目点群データと第2注目点群データは、共通の位置情報である共通注目点群データを含んでいる。図8においては、点群PA1と点群PA2が互いに重なる部分、すなわち共通注目点群データが示す点群を、点群PA3として表している。 FIG. 8 shows that part of the point group PA1 indicated by the first point cloud data of interest and the point group PA2 indicated by the second point group data are arranged so as to overlap each other in the virtual three-dimensional space. . That is, the first target point cloud data and the second target point cloud data include common target point cloud data, which is common position information. In FIG. 8, the overlapping portion of the point cloud PA1 and the point cloud PA2, that is, the point cloud indicated by the common point cloud data of interest is represented as a point cloud PA3.
 このように、共通注目点群データが示す点群PA3が存在する場合、仮想オブジェクト生成部44は、第1注目点群データ及び第2注目点群データに基づいて一つの仮想オブジェクトを生成する。すなわち、仮想オブジェクト生成部44は、第1注目点群データと第2注目点群データの仮想三次元空間における位置情報に基づいて同一の構造物に係る注目点群データであると判断される場合、第1注目点群データと第2注目点群データに基づいて一つの仮想オブジェクトを生成する。具体的には、仮想オブジェクト生成部44は、第1注目点群データと第2注目点群データが所定距離内の位置情報を含む場合、第1注目点群データと第2注目点群データに基づいて一つの仮想オブジェクトを生成するとよい。図9においては、仮想オブジェクト生成部44により、図8に示す点群データが示す点群に基づいて生成された一つの仮想オブジェクトOBを示している。 Thus, when the point cloud PA3 indicated by the common point cloud data of interest exists, the virtual object generation unit 44 generates one virtual object based on the first point cloud data of interest and the second point cloud data of interest. That is, when the first point cloud data of interest and the second point cloud data of interest are judged, based on their position information in the virtual three-dimensional space, to be point cloud data of interest relating to the same structure, the virtual object generation unit 44 generates one virtual object based on the first point cloud data of interest and the second point cloud data of interest. Specifically, when the first point cloud data of interest and the second point cloud data of interest include position information within a predetermined distance of each other, the virtual object generation unit 44 preferably generates one virtual object based on the first point cloud data of interest and the second point cloud data of interest. FIG. 9 shows one virtual object OB generated by the virtual object generation unit 44 based on the point cloud indicated by the point cloud data shown in FIG. 8.
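One simple way to realize the test for position information "within a predetermined distance" is a nearest-neighbour query between the two point clouds of interest, as sketched below; the threshold value is arbitrary and only for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def belong_to_same_structure(points_a, points_b, threshold=0.05):
    """Return True if any point of points_a lies within `threshold` of points_b.

    points_a, points_b: (N, 3) and (M, 3) clouds in the common coordinate system.
    threshold: the 'predetermined distance' in the same units as the points.
    """
    distances, _ = cKDTree(points_b).query(points_a, k=1)
    return bool(np.any(distances <= threshold))

# Usage: PA1 and PA2 overlap around x = 1.0, so they are judged to be one pipe.
pa1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
pa2 = np.array([[1.01, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(belong_to_same_structure(pa1, pa2))   # True
```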
[仮想オブジェクトの生成処理の詳細:部分オブジェクトを特定することによる生成処理]
 プラント設備においては複数の配管を含む多数の構造物が配置されていることより、ある配管が、他の配管や、配管以外の構造物の陰に隠れてしまうことにより、配管に係る点群データが非連続となる場合がある。すなわち、配管に係る点群データが示す点群の一部が欠落している場合がある。これにより、一つの配管を示す仮想オブジェクトが分断されて生成される場合がある。また、一つの配管が一直線上に無い形状である場合、例えば、一つの配管に分岐部が存在する場合、一つの配管を示す仮想オブジェクトが分断されて生成される場合がある。このように、本来一つの仮想オブジェクトが生成されるべきであるところ、複数の仮想オブジェクトが生成されてしまう場合がある。
[Details of virtual object generation processing: generation processing by specifying partial objects]
Since many structures including multiple pipes are arranged in plant equipment, a certain pipe is hidden behind other pipes and structures other than pipes, so point cloud data related to pipes may be discontinuous. That is, part of the point cloud indicated by the point cloud data related to piping may be missing. As a result, a virtual object representing one pipe may be divided and generated. Further, when one pipe has a shape that is not on a straight line, for example, when one pipe has a branch portion, a virtual object representing one pipe may be divided and generated. In this way, a plurality of virtual objects may be generated when one virtual object should be generated.
 そこで、本実施形態においては、注目点群データに基づいて配管に係る部分オブジェクトを特定し、特定された複数の部分オブジェクトに基づいて一つの仮想オブジェクトを生成する構成を採用した。 Therefore, in the present embodiment, a configuration is adopted in which a partial object related to piping is specified based on the point cloud data of interest, and one virtual object is generated based on a plurality of specified partial objects.
 図8においては、図6に表される構造物Sにより死角となった部分において、注目点群データが示す点群が分断された様子を示している。図8においては、分断された部分を符号「D」で示している。 FIG. 8 shows how the point cloud indicated by the point cloud data of interest is divided in the blind spots caused by the structure S shown in FIG. In FIG. 8, the divided portion is indicated by symbol "D".
 部分オブジェクト特定部44aは、分断された一方の注目点群データが示す点群から第1の部分オブジェクトOB1を特定し、他方の注目点群データが示す点群から第2の部分オブジェクトOB2を特定する。図10においては、部分オブジェクト特定部44aにより特定された第1の部分オブジェクトOB1と第2の部分オブジェクトOB2を示している。 The partial object specifying unit 44a specifies the first partial object OB1 from the point cloud indicated by one of the divided sets of point cloud data of interest, and specifies the second partial object OB2 from the point cloud indicated by the other. FIG. 10 shows the first partial object OB1 and the second partial object OB2 specified by the partial object specifying unit 44a.
 図10に示すように、第1の部分オブジェクトOB1は、中心軸O1を中心として延びる部分を含む形状である。また、第2の部分オブジェクトOB2は、中心軸O2を中心として延びる部分を含む形状である。そして、中心軸O1と中心軸O2は、同一直線上にある。すなわち、図10に示す第1の部分オブジェクトOB1と第2の部分オブジェクトOB2とは、延伸方向が一致しており、且つ一方の延長線上に他方が配置されている。 As shown in FIG. 10, the first partial object OB1 has a shape including a portion extending around the central axis O1. Also, the second partial object OB2 has a shape including a portion extending around the central axis O2. The central axis O1 and the central axis O2 are on the same straight line. That is, the first partial object OB1 and the second partial object OB2 shown in FIG. 10 have the same extension direction, and the other is arranged on the extension line of one.
 また、図5に示すように、配管Pは、分岐部B1を含む。このような配管Pが表れた撮影画像において、部分オブジェクト特定部44aは、図10に示すように、第2-1部分オブジェクトOB21と、第2-2部分オブジェクトOB22を特定する。第2-2部分オブジェクトOB22は、その端部が第2-1の部分オブジェクトOB21の外表面と連続している。 As shown in FIG. 5, the pipe P includes a branch portion B1. In a captured image in which such a pipe P appears, the partial object specifying unit 44a specifies a 2-1st partial object OB21 and a 2-2nd partial object OB22, as shown in FIG. 10. The end of the 2-2nd partial object OB22 is continuous with the outer surface of the 2-1st partial object OB21.
 生成部44bは、第1の部分オブジェクトOB1と第2の部分オブジェクトOB2(第2-1の部分オブジェクトOB21及び第2-2の部分オブジェクトOB22)に基づいて一つの仮想オブジェクトOBを生成する。すなわち、生成部44bは、図10に示す第1の部分オブジェクトOB1と第2の部分オブジェクトOB2に基づいて、図9に示す一つの仮想オブジェクトOBを生成する。 The generation unit 44b generates one virtual object OB based on the first partial object OB1 and the second partial object OB2 (the 2-1st partial object OB21 and the 2-2nd partial object OB22). That is, the generation unit 44b generates one virtual object OB shown in FIG. 9 based on the first partial object OB1 and the second partial object OB2 shown in FIG.
 以上説明したように、本実施形態においては、複数の部分オブジェクトの延伸方向が一致していると判定され、且つ一方の延長線上に他方が配置されている場合、生成部44bにより、複数の部分オブジェクトに基づいて一つの仮想オブジェクトを生成する。また、複数の部分オブジェクトのうち一方の端部が他方の外表面と連続する場合、生成部44bにより、複数の部分オブジェクトに基づいて一つの仮想オブジェクトを生成する。このような構成を採用することにより、配管を示す仮想オブジェクトOBを精度良く生成することができる。なお、図5においては、T字状の分岐部B1を例に挙げたが、これに限られず、分岐部はY字状等であってもよい。 As described above, in the present embodiment, when it is determined that the extending directions of a plurality of partial objects match and one of them is arranged on the extension line of the other, the generation unit 44b generates one virtual object based on the plurality of partial objects. Also, when an end of one of the plurality of partial objects is continuous with the outer surface of the other, the generation unit 44b generates one virtual object based on the plurality of partial objects. By adopting such a configuration, the virtual object OB representing the pipe can be generated with high accuracy. Although the T-shaped branch portion B1 is taken as an example in FIG. 5, the branch portion is not limited to this and may be Y-shaped or the like.
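The two geometric conditions can be checked with elementary vector operations, as in the sketch below. It assumes each partial object has already been summarized by an axis point, a unit direction vector, a radius, and end points; the tolerance values are arbitrary.

```python
import numpy as np

def are_collinear(p1, d1, p2, d2, angle_tol=0.02, dist_tol=0.05):
    """True if two axis lines (point p, unit direction d) lie on the same straight line."""
    parallel = np.linalg.norm(np.cross(d1, d2)) < angle_tol
    offset = p2 - p1
    on_line = np.linalg.norm(np.cross(offset, d1)) < dist_tol
    return parallel and on_line

def end_touches_surface(end_point, axis_point, axis_dir, radius, tol=0.05):
    """True if an end point of one object lies on the outer surface of the other
    (its distance to the other's centre axis is approximately the radius)."""
    offset = end_point - axis_point
    radial = offset - np.dot(offset, axis_dir) * axis_dir
    return abs(np.linalg.norm(radial) - radius) < tol

# Usage: OB1 and OB2 share one axis; a branch end point sits on OB2's surface.
p1, d1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
p2, d2 = np.array([3.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
print(are_collinear(p1, d1, p2, d2))                        # True -> merge into one object
branch_end = np.array([4.0, 0.1, 0.0])
print(end_touches_surface(branch_end, p2, d2, radius=0.1))  # True -> branch joins the pipe
```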
[仮想オブジェクトの生成処理の詳細:仮想オブジェクトの端部の決定]
 プラント設備においては同じ方向に延伸するように互いに接続される複数の配管がある。このような場合、複数の配管を示す仮想オブジェクトが一つの配管を示す仮想オブジェクトとして生成されてしまう場合がある。
[Details of virtual object generation processing: determination of edge of virtual object]
In plant equipment, there are multiple pipes connected to each other so as to extend in the same direction. In such a case, a virtual object representing a plurality of pipes may be generated as a virtual object representing a single pipe.
 そこで、本実施形態においては、端部決定部44cにより、注目点群データに基づいて仮想オブジェクトを生成する際、仮想オブジェクトの形状の変化に応じて仮想オブジェクトの端部を決定する。 Therefore, in the present embodiment, when generating a virtual object based on the target point group data, the edge determination unit 44c determines the edge of the virtual object according to the change in the shape of the virtual object.
 図7に示すように、配管Pは、継手であるフランジFを介して別体である配管P1に接続されている。フランジFは、配管Pよりも径が大きい部分である。図8に示すように、配管に係る注目点群データが示す点群において、途中で径が異なる部分が生成される。端部決定部44cは、フランジFに隣接する部分を配管Pの端部に決定する。すなわち、仮想オブジェクト生成部44は、図9に示すように、仮想オブジェクトOBの端部Eを決定する。なお、仮想オブジェクトの形状の変化を生じさせる部分は、フランジFに限らず、流量の制御を行うバルブ等、配管とは異なる形状を有する部分であればよい。 As shown in FIG. 7, the pipe P is connected to a separate pipe P1 via a flange F that is a joint. The flange F is a portion having a larger diameter than the pipe P. As shown in FIG. 8, in the point cloud indicated by the point cloud data of interest related to piping, a portion having a different diameter is generated in the middle. The end portion determining portion 44c determines the portion adjacent to the flange F as the end portion of the pipe P. As shown in FIG. That is, the virtual object generator 44 determines the end E of the virtual object OB as shown in FIG. Note that the portion that causes the shape of the virtual object to change is not limited to the flange F, and may be any portion that has a shape different from that of the pipe, such as a valve that controls the flow rate.
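A minimal sketch of detecting such an end from the change in diameter, assuming the point cloud of interest has already been sliced into bins along the pipe axis and a radius estimated per bin; the step threshold is an arbitrary illustrative value.

```python
import numpy as np

def find_end_index(radii_along_axis, step_ratio=1.3):
    """Return the index of the first slice whose radius jumps by more than
    `step_ratio` relative to the previous slice (e.g. a flange), or None."""
    radii = np.asarray(radii_along_axis, dtype=float)
    for i in range(1, len(radii)):
        if radii[i] > radii[i - 1] * step_ratio:
            return i          # the pipe's end is the slice just before this jump
    return None

# Usage: constant radius 0.10 m, then a flange of radius 0.16 m.
radii = [0.10, 0.10, 0.10, 0.16, 0.16]
print(find_end_index(radii))  # 3 -> the slice before index 3 is taken as the end
```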
[画面例、及び仮想オブジェクトの補正]
 次に、図11、図12を参照して、ディスプレイ16に表示される画面例を説明する。図11は、第1撮影画像と共に仮想オブジェクトが表示された画面を示す図である。図12は、第2撮影画像と共に仮想オブジェクトが表示された画面を図である。
[Screen example and correction of virtual objects]
Next, examples of screens displayed on the display 16 will be described with reference to FIGS. 11 and 12. FIG. 11 is a diagram showing a screen on which a virtual object is displayed together with the first captured image. FIG. 12 is a diagram showing a screen on which a virtual object is displayed together with the second captured image.
 図11は、図6で示した第1撮影画像と共に図10で示した仮想オブジェクトが表示される表示画面を示している。図12は、図7で示した第2撮影画像と共に図10で示した仮想オブジェクトが表示される表示画面を示している。図11、図12においては、ハッチングが付された部分が仮想オブジェクトOBを表している。なお、図11、図12においては、撮影画像に示される配管の輪郭よりも仮想オブジェクトOBを若干大きく示しているが、実際は、それらはほぼ同じ大きさであるとよい。 FIG. 11 shows a display screen on which the first captured image shown in FIG. 6 and the virtual object shown in FIG. 10 are displayed. FIG. 12 shows a display screen on which the virtual object shown in FIG. 10 is displayed together with the second captured image shown in FIG. In FIGS. 11 and 12, hatched portions represent virtual objects OB. In FIGS. 11 and 12, the virtual object OB is shown slightly larger than the contour of the pipe shown in the captured image, but in reality, they should be approximately the same size.
 ユーザの入力により、撮影画像に表される特定の配管が表示画面上で選択されると、選択された配管上に仮想オブジェクトが重ねて表示される。なお、上述のように、仮想オブジェクトはポリゴンから構成されるものであるとよい。また、ポリゴンから構成される仮想オブジェクトは、例えば、着色透明のオブジェクトとして表示されるとよい。具体的には、例えば、仮想オブジェクトは、青色かつ透明のオブジェクトとして表示されるとよい。 When a specific pipe represented in the captured image is selected on the display screen by user input, a virtual object is displayed superimposed on the selected pipe. It should be noted that, as described above, the virtual object may preferably be composed of polygons. Also, a virtual object composed of polygons may be displayed as, for example, a colored transparent object. Specifically, for example, the virtual object may be displayed as a blue and transparent object.
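As one way to picture the colored, translucent display, the sketch below alpha-blends a blue tint over the pixels covered by the virtual object; it assumes the object has already been rasterized into a two-dimensional mask, which is an assumption made for illustration rather than a description of the actual rendering.

```python
import numpy as np

def overlay_object(image_rgb, object_mask, color=(0, 0, 255), alpha=0.4):
    """Blend a translucent colour over the pixels covered by the virtual object.

    image_rgb:   (H, W, 3) uint8 captured image.
    object_mask: (H, W) boolean mask of the rasterised virtual object.
    """
    out = image_rgb.astype(float)
    out[object_mask] = (1 - alpha) * out[object_mask] + alpha * np.array(color, dtype=float)
    return out.astype(np.uint8)

# Usage on a tiny dummy image with the object covering the top-left corner.
img = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
print(overlay_object(img, mask)[0, 0])    # blended pixel value
```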
 図11、図12に示すように、撮影画像に表される一つの配管に、仮想オブジェクトOBが重ねて表示されることより、ユーザは、特定の配管を画面上で認識することができる。なお、図示は省略するが、図11、図12に示される配管以外の配管にも同様に仮想オブジェクトOBが重ねて表示されるとよい。そして、ユーザの入力に応じて、仮想オブジェクトOBが重ねて表示される配管が切り替えられるとよい。 As shown in FIGS. 11 and 12, the user can recognize a specific pipe on the screen by displaying a virtual object OB superimposed on one pipe represented in the captured image. Although illustration is omitted, the virtual object OB may be similarly superimposed and displayed on pipes other than the pipes shown in FIGS. 11 and 12 . Then, it is preferable that the pipes on which the virtual object OB is superimposed and displayed are switched according to the user's input.
 また、図11においては、属性付与部46により所定の属性が付与されたことを示す目印Mが表示される様子を示している。ユーザの表示画面上での入力に基づいて、目印Mが選択されると、その目印Mに対応付けられる属性に関する情報が表示されるとよい。図11においては、目印Mに対応付けられる属性に関する情報として、「管理ポイント」との文字が表示される様子を示している。このような表示がされることにより、ユーザは、メンテナンスや点検を行う際に、配管のどの部分に注目すれば良いのかを予め表示画面上で把握することができる。 Also, FIG. 11 shows how a mark M is displayed to indicate that a predetermined attribute has been assigned by the attribute assigning unit 46 . When the mark M is selected based on the user's input on the display screen, information regarding attributes associated with the mark M may be displayed. FIG. 11 shows how the characters "management point" are displayed as the information about the attribute associated with the mark M. In FIG. With such a display, the user can grasp in advance on the display screen which part of the pipe should be focused on when performing maintenance or inspection.
 また、図11、図12においては、認識済みオブジェクトの一覧が表示される例を示している。図11、図12においては、画面に表示される構造物のうち配管IDが「001~005」の配管を示す仮想オブジェクトを表示可能であることを示している。また、画面に表示される構造物のうち機器IDが「011~012」の機器を示す仮想オブジェクトを表示可能であることを示している。図11、図12においては、配管ID「002」の配管が選択されて、当該配管を示す仮想オブジェクトOBが表示されている様子を示している。なお、図11、図12においては、配管ID「002」の配管以外の構造物の図示を一部省略して示している。 Also, FIGS. 11 and 12 show examples in which a list of recognized objects is displayed. 11 and 12 show that virtual objects representing pipes with pipe IDs of "001 to 005" among the structures displayed on the screen can be displayed. It also indicates that virtual objects representing devices with device IDs “011 to 012” among the structures displayed on the screen can be displayed. FIGS. 11 and 12 show that the pipe with the pipe ID “002” is selected and the virtual object OB representing the pipe is displayed. 11 and 12, the illustration of structures other than the pipe with the pipe ID "002" is partially omitted.
 ここで、上述のように、構造物表示システム100においては仮想オブジェクトを精度良く生成することができる。しかしながら、プラント設備は複雑な構成を有することより、必ずしも実オブジェクトに応じた仮想オブジェクトが生成されるとは限らない。そこで、本実施形態においては、ユーザの入力により仮想オブジェクトを補正する機能を採用する。具体的には、構造物表示システム100は、仮想オブジェクト補正部47を含む。 Here, as described above, the structure display system 100 can accurately generate virtual objects. However, since plant equipment has a complicated configuration, it is not always the case that a virtual object corresponding to a real object is generated. Therefore, in this embodiment, a function of correcting a virtual object based on user input is employed. Specifically, the structure display system 100 includes a virtual object correction section 47 .
 仮想オブジェクト補正部47は、例えば、延長機能、分割機能、合体機能、削除機能を発揮するとよい。例えば、図11、図12に示される「なぞって認識」、「分割」、「マージ」、「削除」のいずれかをユーザが選択することで、選択された機能が発揮される補正モードになるとよい。なお、ユーザによる選択は、ユーザの表示画面上での入力により行われるとよい。 The virtual object correction unit 47 may provide, for example, an extension function, a division function, a merging function, and a deletion function. For example, when the user selects any one of "trace and recognize", "divide", "merge", and "delete" shown in FIGS. 11 and 12, the system preferably switches to a correction mode in which the selected function is exercised. The selection by the user is preferably made by the user's input on the display screen.
 「なぞって認識」が選択された場合、延長機能が発揮される補正モードに切り替わるとよい。具体的には、ユーザの表示画面上での入力に基づいて、仮想オブジェクト補正部47が、仮想オブジェクト生成部44によって生成された仮想オブジェクトを延長するとよい。これにより、仮想オブジェクト生成部44によって生成されなかった仮想オブジェクトの端部を、適切に表すことができる。 When "Trace and recognize" is selected, it is better to switch to the correction mode where the extension function is exhibited. Specifically, the virtual object correction unit 47 may extend the virtual object generated by the virtual object generation unit 44 based on the user's input on the display screen. As a result, the edge of the virtual object that was not generated by the virtual object generator 44 can be represented appropriately.
 なお、延長機能が発揮される補正モードおけるユーザの入力は、配管の形状に沿う入力動作によるものであるとよい。例えば、表示画面に表されるカーソルを配管の形状に沿って移動させるのに伴い、仮想オブジェクトが延びとよい。または、タッチパネルにおいてユーザが配管の形状を指でなぞる動作を行うのに伴い、仮想オブジェクトが延びるとよい。 It should be noted that the user's input in the correction mode in which the extension function is exhibited should be an input operation that conforms to the shape of the pipe. For example, the virtual object may extend as the cursor displayed on the display screen is moved along the shape of the pipe. Alternatively, the virtual object may extend as the user traces the shape of the pipe with a finger on the touch panel.
 また、「なぞって認識」が選択された場合、属性付与部46により所定の属性が付与される機能が発揮されるモードとなってもよい。すなわち、ユーザによる配管の形状に沿う入力動作がなされた領域に対して所定の属性が付与されることとなってもよい。これにより、ある程度の長さのある領域に対して所定の属性を付与することが可能となる。 Further, when "recognize by tracing" is selected, a mode in which a function of giving a predetermined attribute by the attribute assigning unit 46 is exhibited may be activated. That is, a predetermined attribute may be given to a region in which the user has performed an input action along the shape of the pipe. This makes it possible to give a predetermined attribute to an area having a certain length.
 また、「分割」が選択された場合、分割機能が発揮される補正モードに切り替わるとよい。具体的には、ユーザの表示画面上での入力に基づいて、仮想オブジェクト補正部47が、仮想オブジェクト生成部44によって生成された仮想オブジェクトを分割するとよい。これにより、仮想オブジェクト生成部44により誤って一つの配管として生成された一つの仮想オブジェクトを、複数の仮想オブジェクトとして表すことができる。 Also, when "split" is selected, it is preferable to switch to a correction mode in which the split function is exhibited. Specifically, the virtual object correction unit 47 may divide the virtual object generated by the virtual object generation unit 44 based on the user's input on the display screen. As a result, one virtual object that has been erroneously generated as one pipe by the virtual object generator 44 can be expressed as a plurality of virtual objects.
 また、「マージ」が選択された場合、接続機能(合体機能)が発揮される補正モードに切り替わるとよい。具体的には、ユーザの表示画面上での入力に基づいて、仮想オブジェクト補正部47が、仮想オブジェクト生成部44によって生成された複数の仮想オブジェクトを接続するとよい。これにより、仮想オブジェクト生成部44により誤って生成された複数の仮想オブジェクトを、一つの仮想オブジェクトとして表すことができる。 Also, when "Merge" is selected, it is better to switch to the correction mode where the connection function (union function) is exhibited. Specifically, the virtual object correction unit 47 may connect the plurality of virtual objects generated by the virtual object generation unit 44 based on the user's input on the display screen. Thereby, a plurality of virtual objects erroneously generated by the virtual object generation unit 44 can be represented as one virtual object.
 また、「削除」が選択された場合、削除機能が発揮される補正モードに切り替わるとよい。具体的には、ユーザの表示画面上での入力に基づいて、仮想オブジェクト補正部47が、仮想オブジェクト生成部44によって生成された仮想オブジェクトの一部又は全部を削除するとよい。これにより、仮想オブジェクト生成部44により誤って生成された仮想オブジェクトの一部又は全部を削除して、仮想オブジェクトを表すことができる。 Also, when "delete" is selected, it is preferable to switch to the correction mode in which the deletion function is exhibited. Specifically, the virtual object correction unit 47 preferably deletes part or all of the virtual object generated by the virtual object generation unit 44 based on the user's input on the display screen. As a result, the virtual object can be expressed by deleting part or all of the virtual object that was erroneously generated by the virtual object generation unit 44 .
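A rough sketch of how the split, merge, and delete corrections could operate when each virtual object is stored as a list of point indices keyed by pipe ID; this data model is assumed for illustration only, and the extension ("trace and recognize") mode is omitted.

```python
def split(objects, obj_id, index):
    """Split one object into two at a user-chosen position along its point list."""
    pts = objects.pop(obj_id)
    objects[obj_id + "_a"], objects[obj_id + "_b"] = pts[:index], pts[index:]

def merge(objects, id_a, id_b, merged_id):
    """Connect two objects that the user judges to be one pipe."""
    objects[merged_id] = objects.pop(id_a) + objects.pop(id_b)

def delete(objects, obj_id):
    """Remove an object that was generated by mistake."""
    objects.pop(obj_id, None)

# Usage on a toy object table keyed by pipe ID.
objects = {"002": [0, 1, 2, 3], "003": [4, 5]}
split(objects, "002", 2)
merge(objects, "002_b", "003", "004")
delete(objects, "002_a")
print(objects)   # {'004': [2, 3, 4, 5]}
```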
[フローチャート]
 次に、図13を参照して、ユーザ端末10における処理フローを説明する。図13は、本実施形態のユーザ端末における処理フローを示すフローチャートである。図14は、本実施形態の仮想オブジェクト生成の処理フローを示すフローチャートである。
[flowchart]
Next, a processing flow in the user terminal 10 will be described with reference to FIG. 13. FIG. 13 is a flowchart showing the processing flow in the user terminal of this embodiment. FIG. 14 is a flowchart showing the processing flow of virtual object generation according to this embodiment.
 まず、取得部41により、延在構造物である配管が配置された実空間の撮影画像と、撮影画像に表れた実オブジェクトに対応する、仮想三次元空間における点群データと、を取得する(S1)。また、認識部42により、学習済みモデル48cにより撮影画像における配管が表れた配管領域を認識する(S2)。 First, the acquisition unit 41 acquires a photographed image of a real space in which pipes, which are extending structures, are arranged, and point cloud data in a virtual three-dimensional space corresponding to the real object appearing in the photographed image ( S1). Further, the recognizing unit 42 recognizes the pipe region in which the pipe appears in the photographed image using the learned model 48c (S2).
 次に、選択部43により、撮影画像において認識されたが配管が表れた配管領域に基づいて、点群データから構造物に係る注目点群データを選択する(S3)。 Next, the selection unit 43 selects point cloud data of interest related to the structure from the point cloud data based on the pipe region in which the pipe is recognized in the captured image (S3).
 次に、仮想オブジェクト生成部44により、注目点群データに基づいて、仮想三次元空間における配管を示す仮想オブジェクトを生成する(S4)。 Next, the virtual object generation unit 44 generates a virtual object representing the pipe in the virtual three-dimensional space based on the target point cloud data (S4).
 ここで、図14を参照して、仮想オブジェクト生成部44による仮想オブジェクトの生成の処理フローを説明する。 Here, with reference to FIG. 14, the processing flow of virtual object generation by the virtual object generation unit 44 will be described.
 端部決定部44cにより、注目点群データに基づいて仮想オブジェクトを生成する際、仮想オブジェクトの形状の変化に応じて仮想オブジェクトの端部を決定する(S41)。 When generating a virtual object based on the point cloud data of interest, the edge determination unit 44c determines the edge of the virtual object according to the change in the shape of the virtual object (S41).
 次に、部分オブジェクト特定部44aにより、注目点群データに基づいて配管に係る部分オブジェクトを特定する(S42)。 Next, the partial object specifying unit 44a specifies a partial object related to piping based on the target point cloud data (S42).
 次に、生成部44bにより、複数の部分オブジェクトに基づいて一つの仮想オブジェクトを生成する(S43)。 Next, the generation unit 44b generates one virtual object based on the plurality of partial objects (S43).
 以上説明したS41~S43の手順を経て、仮想オブジェクトが生成される。 A virtual object is generated through the procedures of S41 to S43 described above.
 最後に、図14に示すように、表示部45により、撮影画像と共に、仮想オブジェクト生成部44により生成された仮想オブジェクトをディスプレイ16に表示する(S5)。なお、この後、ユーザの表示画面上での入力に基づいて、仮想オブジェクトは補正されてもよい。 Finally, as shown in FIG. 14, the display unit 45 displays the virtual object generated by the virtual object generation unit 44 together with the captured image on the display 16 (S5). After that, the virtual object may be corrected based on the user's input on the display screen.
 なお、ユーザ端末10とは別に、ユーザ端末10よりも大容量のデータを高速に処理することができる情報処理装置にて、撮影画像と点群データの取得から仮想オブジェクトの生成までの処理を行う構成であってもよい。この場合、ユーザ端末10は、情報処理装置から撮影画像と仮想オブジェクトの情報を取得して表示し、図11と図12に示すような仮想オブジェクトの補正処理やデータ入力処理を受け付ける。このようなシステム構成にすることで、データ処理機能が比較的低いユーザ端末10であっても、一定の機能を十分に果たすことが可能となる。 Note that the processing from the acquisition of the captured images and point cloud data to the generation of the virtual objects may be performed, separately from the user terminal 10, by an information processing device capable of processing a larger volume of data at higher speed than the user terminal 10. In this case, the user terminal 10 acquires and displays the captured images and the virtual object information from the information processing device, and accepts the virtual object correction processing and data input processing shown in FIGS. 11 and 12. With such a system configuration, even a user terminal 10 with relatively low data processing capability can sufficiently fulfill certain functions.
[まとめ]
 以上説明した本実施形態に係る構造物表示システム100においては、配管を示す仮想オブジェクトを精度良く生成することができる。また、1枚の撮影画像に収まらない長さの配管であっても、その形状に対応した仮想オブジェクトを精度良く生成することができる。このように精度良く仮想オブジェクトを生成できることより、配管を精度良く識別表示することができる。
[summary]
In the structure display system 100 according to this embodiment described above, a virtual object representing a pipe can be generated with high accuracy. In addition, even if the length of the pipe does not fit in one photographed image, a virtual object corresponding to its shape can be generated with high accuracy. Since the virtual object can be generated with high accuracy in this way, the piping can be identified and displayed with high accuracy.
 また、本実施形態においては、仮想オブジェクト生成部44により仮想オブジェクトを生成した後に、ユーザの表示画面上での入力に応じて仮想オブジェクトを補正する機能を採用することにより、仮想オブジェクトの形状の精度をより向上させることができる。 Further, in the present embodiment, by adopting the function of correcting the virtual object in accordance with the user's input on the display screen after the virtual object generation unit 44 has generated it, the accuracy of the shape of the virtual object can be further improved.
 また、本実施形態においては、位置情報を含む注目点群データから生成した仮想オブジェクトを表示する構成を採用することにより、表示画面上で配管の長さ等を計測することも可能となる。 In addition, in the present embodiment, by adopting a configuration that displays a virtual object generated from the point cloud data of interest, which includes positional information, it also becomes possible to measure the length of a pipe and the like on the display screen.
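Because the virtual object keeps the positional information inherited from the point cloud of interest, measuring a pipe on the display screen reduces to a distance computation in the virtual three-dimensional space, for example:

    import numpy as np

    def pipe_length(obj):
        """Length of a straight pipe object: the distance between its two ends."""
        return float(np.linalg.norm(obj.end - obj.start))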
 なお、本実施形態においては、配管を示す仮想オブジェクトを生成する例について説明したが、これに限られるものではない。すなわち、構造物表示システム100は、立体形状を有する構造物を示す仮想オブジェクトを生成するものであればよい。また、構造物表示システム100は、プラント設備以外の設備を表示するものであってもよい。 In this embodiment, an example of generating a virtual object representing piping has been described, but the present invention is not limited to this. That is, the structure display system 100 need only generate a virtual object representing a structure having a three-dimensional shape. The structure display system 100 may also display facilities other than plant facilities.

Claims (16)

  1.  構造物が配置された実空間の撮影画像と、前記撮影画像に表れた実オブジェクトに対応する、仮想三次元空間における三次元位置データと、を取得する取得手段と、
     学習済みモデルにより、前記撮影画像における前記構造物が表れた領域を認識する認識手段と、
     前記撮影画像において認識された前記領域に基づいて、前記三次元位置データから前記構造物に係る注目三次元位置データを選択する選択手段と、
     前記注目三次元位置データに基づいて、前記仮想三次元空間における前記構造物を示す仮想オブジェクトを生成する仮想オブジェクト生成手段と、
     前記撮影画像の少なくとも一部を前記仮想オブジェクトと共に表示する表示手段と、
     を有する構造物表示システム。
    Acquisition means for acquiring a photographed image of a real space in which a structure is arranged and three-dimensional position data in a virtual three-dimensional space corresponding to a real object appearing in the photographed image;
    Recognition means for recognizing an area in which the structure appears in the photographed image using a trained model;
    selection means for selecting target three-dimensional position data relating to the structure from the three-dimensional position data based on the region recognized in the captured image;
    virtual object generation means for generating a virtual object representing the structure in the virtual three-dimensional space based on the three-dimensional position data of interest;
    display means for displaying at least part of the captured image together with the virtual object;
    A structure display system having the above means.
  2.  前記取得手段は、第1位置から前記実空間を撮影することにより得られる第1撮影画像と、前記第1撮影画像に表れた実オブジェクトに対応する第1の三次元位置データと、第2位置から前記実空間を撮影することにより得られる第2撮影画像と、前記第2撮影画像に表れた実オブジェクトに対応する第2の三次元位置データと、を取得し、
     前記選択手段は、前記第1撮影画像において認識された前記領域に基づいて、前記第1の三次元位置データから前記構造物に係る第1の注目三次元位置データを選択するとともに、前記第2撮影画像において認識された前記領域に基づいて、前記第2の三次元位置データから前記構造物に係る第2の注目三次元位置データを選択し、
     前記仮想オブジェクト生成手段は、前記第1の注目三次元位置データと前記第2の注目三次元位置データの前記仮想三次元空間における位置情報に基づいて同一の構造物に係る注目三次元位置データであると判断される場合、前記第1の注目三次元位置データと前記第2の注目三次元位置データに基づいて一つの前記仮想オブジェクトを生成する、
     請求項1に記載の構造物表示システム。
    The acquisition means acquires a first photographed image obtained by photographing the real space from a first position, first three-dimensional position data corresponding to real objects appearing in the first photographed image, a second photographed image obtained by photographing the real space from a second position, and second three-dimensional position data corresponding to real objects appearing in the second photographed image;
    the selection means selects, based on the region recognized in the first photographed image, first three-dimensional position data of interest related to the structure from the first three-dimensional position data, and selects, based on the region recognized in the second photographed image, second three-dimensional position data of interest related to the structure from the second three-dimensional position data; and
    the virtual object generation means generates one virtual object based on the first three-dimensional position data of interest and the second three-dimensional position data of interest when the first and second three-dimensional position data of interest are determined, based on their position information in the virtual three-dimensional space, to be three-dimensional position data of interest related to the same structure;
    The structure display system according to claim 1.
  3.  前記仮想オブジェクト生成手段は、
     前記注目三次元位置データに基づいて前記構造物に係る第1及び第2の部分オブジェクトを特定する部分オブジェクト特定手段と、
     前記第1及び第2の部分オブジェクトの延伸方向が一致していると判定され、且つ一方の延長線上に他方が配置されている場合に、前記第1及び第2の部分オブジェクトに基づいて一つの前記仮想オブジェクトを生成する生成手段と、を含む、
     請求項1又は2に記載の構造物表示システム。
    The virtual object generation means includes:
    partial object identifying means for identifying first and second partial objects related to the structure based on the three-dimensional position data of interest;
    generation means for generating one virtual object based on the first and second partial objects when it is determined that the extension directions of the first and second partial objects coincide and one of them is located on an extension line of the other;
    The structure display system according to claim 1 or 2.
  4.  前記仮想オブジェクト生成手段は、
     前記注目三次元位置データに基づいて前記構造物に係る第1及び第2の部分オブジェクトを特定する部分オブジェクト特定手段と、
     前記第1及び第2の部分オブジェクトのうち一方の端部が他方の外表面と連続する場合に、前記第1及び第2の部分オブジェクトに基づいて一つの前記仮想オブジェクトを生成する生成手段と、を含む、
     請求項1~3のいずれか1項に記載の構造物表示システム。
    The virtual object generation means includes:
    partial object identifying means for identifying first and second partial objects related to the structure based on the three-dimensional position data of interest;
    generation means for generating one virtual object based on the first and second partial objects when an end of one of the first and second partial objects is continuous with an outer surface of the other;
    The structure display system according to any one of claims 1 to 3.
  5.  前記仮想オブジェクト生成手段は、前記注目三次元位置データに基づいて前記仮想オブジェクトを生成する際、前記仮想オブジェクトの形状の変化に応じて前記仮想オブジェクトの端部を決定する端部決定手段を含む、
     請求項1~4のいずれか1項に記載の構造物表示システム。
    The virtual object generation means includes end determination means for determining an end of the virtual object according to a change in the shape of the virtual object when generating the virtual object based on the three-dimensional position data of interest.
    A structure display system according to any one of claims 1 to 4.
  6.  ユーザの表示画面上での入力に基づいて前記仮想オブジェクト生成手段によって生成された仮想オブジェクトを補正する仮想オブジェクト補正手段を有する、
     請求項1~5のいずれか1項に記載の構造物表示システム。
    virtual object correction means for correcting the virtual object generated by the virtual object generation means based on the user's input on the display screen;
    A structure display system according to any one of claims 1 to 5.
  7.  前記仮想オブジェクト補正手段は、ユーザの表示画面上での入力に基づいて前記仮想オブジェクト生成手段によって生成された仮想オブジェクトを分割する、
     請求項6に記載の構造物表示システム。
    The virtual object correction means divides the virtual object generated by the virtual object generation means based on a user's input on the display screen.
    The structure display system according to claim 6.
  8.  前記仮想オブジェクト補正手段は、ユーザの表示画面上での入力に基づいて前記仮想オブジェクト生成手段によって生成された仮想オブジェクトを延長する、
     請求項6又は7に記載の構造物表示システム。
    The virtual object correction means extends the virtual object generated by the virtual object generation means based on the user's input on the display screen.
    The structure display system according to claim 6 or 7.
  9.  前記仮想オブジェクト補正手段は、ユーザの表示画面上での入力に基づいて前記仮想オブジェクト生成手段によって生成された仮想オブジェクトの少なくとも一部を削除する、
     請求項6~8のいずれか1項に記載の構造物表示システム。
    The virtual object correction means deletes at least part of the virtual object generated by the virtual object generation means based on the user's input on the display screen.
    The structure display system according to any one of claims 6-8.
  10.  前記仮想オブジェクト補正手段は、ユーザの表示画面上での入力に基づいて前記仮想オブジェクト生成手段によって生成された複数の仮想オブジェクトを接続する、
     請求項6~9のいずれか1項に記載の構造物表示システム。
    The virtual object correcting means connects the plurality of virtual objects generated by the virtual object generating means based on the user's input on the display screen.
    The structure display system according to any one of claims 6-9.
  11.  前記ユーザの表示画面上での入力は、前記構造物の形状に沿う入力動作によるものである、
     請求項6~10のいずれか1項に記載の構造物表示システム。
    The user's input on the display screen is based on an input operation along the shape of the structure,
    The structure display system according to any one of claims 6-10.
  12.  前記仮想オブジェクトの一部に所定の属性を付与する属性付与手段を有し、
     前記表示手段は、前記所定の属性が付与された一部を識別表示する、
     請求項1~11のいずれか1項に記載の構造物表示システム。
    an attribute imparting means for imparting a predetermined attribute to a portion of the virtual object;
    The display means identifies and displays the part to which the predetermined attribute is assigned.
    The structure display system according to any one of claims 1-11.
  13.  前記表示手段は、前記所定の属性が付与された一部を色分けして識別表示する、
     請求項12に記載の構造物表示システム。
    The display means distinguishes and displays the portion to which the predetermined attribute is assigned by color-coding.
    The structure display system according to claim 12.
  14.  前記構造物は、プラント設備内に設けられると共に、流体が内部に流れる配管である、
     請求項1~13のいずれか1項に記載の構造物表示システム。
    The structure is provided in plant equipment and is a pipe through which a fluid flows,
    The structure display system according to any one of claims 1-13.
  15.  構造物が配置された実空間の撮影画像と、前記撮影画像に表れた実オブジェクトに対応する、仮想三次元空間における三次元位置データと、を取得するステップと、
     学習済みモデルにより、前記撮影画像における前記構造物が表れた領域を認識するステップと、
     前記撮影画像において認識された前記領域に基づいて、前記三次元位置データから前記構造物に係る注目三次元位置データを選択するステップと、
     前記注目三次元位置データに基づいて、前記仮想三次元空間における前記構造物を示す仮想オブジェクトを生成するステップと、
     前記撮影画像の少なくとも一部を前記仮想オブジェクトと共に表示するステップと、
     を含む構造物表示方法。
    Acquiring a photographed image of a real space in which a structure is arranged, and three-dimensional position data in a virtual three-dimensional space corresponding to the real object appearing in the photographed image;
    a step of recognizing an area in which the structure appears in the photographed image using a trained model;
    a step of selecting three-dimensional position data of interest related to the structure from the three-dimensional position data based on the region recognized in the captured image;
    generating a virtual object representing the structure in the virtual three-dimensional space based on the three-dimensional position data of interest;
    displaying at least part of the captured image together with the virtual object;
    A structure display method including the above steps.
  16.  構造物が配置された実空間の撮影画像と、前記撮影画像に表れた実オブジェクトに対応する、仮想三次元空間における三次元位置データと、を取得する手順、
     学習済みモデルにより、前記撮影画像における前記構造物が表れた領域を認識する手順、
     前記撮影画像において認識された前記領域に基づいて、前記三次元位置データから前記構造物に係る注目三次元位置データを選択する手順、
     前記注目三次元位置データに基づいて、前記仮想三次元空間における前記構造物を示す仮想オブジェクトを生成する手順、
     前記撮影画像の少なくとも一部を前記仮想オブジェクトと共に表示する手順、
     をコンピュータに実行させるプログラム。
    A procedure for acquiring a photographed image of a real space in which a structure is arranged and three-dimensional position data in a virtual three-dimensional space corresponding to the real object appearing in the photographed image;
    A procedure for recognizing an area in which the structure appears in the photographed image using a trained model;
    A procedure of selecting three-dimensional position data of interest related to the structure from the three-dimensional position data based on the region recognized in the captured image;
    a step of generating a virtual object representing the structure in the virtual three-dimensional space based on the three-dimensional position data of interest;
    displaying at least part of the captured image together with the virtual object;
    A program that causes a computer to execute the above procedures.
PCT/JP2021/036654 2021-10-04 2021-10-04 Structure display system, structure display method, program WO2023058092A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/036654 WO2023058092A1 (en) 2021-10-04 2021-10-04 Structure display system, structure display method, program

Publications (1)

Publication Number Publication Date
WO2023058092A1 true WO2023058092A1 (en) 2023-04-13

Family

ID=85803998

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/036654 WO2023058092A1 (en) 2021-10-04 2021-10-04 Structure display system, structure display method, program

Country Status (1)

Country Link
WO (1) WO2023058092A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009053059A (en) * 2007-08-27 2009-03-12 Mitsubishi Electric Corp Object specifying device, object specifying method, and object specifying program
WO2016088553A1 (en) * 2014-12-01 2016-06-09 株式会社日立製作所 3d model creation assistance system and method
JP2018195240A (en) * 2017-05-22 2018-12-06 日本電信電話株式会社 Facility state detection method, detection system and program
JP2019056966A (en) * 2017-09-19 2019-04-11 株式会社東芝 Information processing device, image recognition method and image recognition program
JP2019191927A (en) * 2018-04-25 2019-10-31 有限会社リライト Structure inspection system and inspection method
WO2021033249A1 (en) * 2019-08-19 2021-02-25 日本電信電話株式会社 Linear structure detection device, detection method, and detection program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21959835

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE