WO2023007535A1 - Sewage pipe interior abnormality diagnosis assistance system, client machine and server machine for sewage pipe interior abnormality diagnosis assistance system, and related method


Info

Publication number
WO2023007535A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
image
sewage pipe
developed image
abnormal
Application number
PCT/JP2021/027478
Other languages
French (fr)
Japanese (ja)
Inventor
明和 大西
倫太郎 江口
敬介 柴田
Original Assignee
株式会社Njs
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 株式会社Njs
Priority to JP2023537745A
Priority to PCT/JP2021/027478
Publication of WO2023007535A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 - Systems specially adapted for particular applications
    • G01N21/88 - Investigating the presence of flaws or contamination
    • G01N21/95 - Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/954 - Inspecting the inner surface of hollow bodies, e.g. bores

Definitions

  • The present invention relates to a system for supporting abnormality diagnosis in a sewage pipe, and to a client machine, a server machine, and related methods that can be used in the system.
  • Patent Document 1, concerning the image diagnosis of a sewage pipe network, uses the cross-sectional area obtained from the deformation position as one diagnostic index, and an image statistic of the density gradient in the direction orthogonal to an a priori determined crack direction as another diagnostic index.
  • It describes a diagnostic imaging system that diagnoses abnormalities in the sewage pipe network based on the axial distribution of these diagnostic indices, which is associated in advance with actual abnormal events.
  • However, the system of Patent Document 1 neither receives feedback, such as changes, from the user regarding its image diagnosis results, nor provides a configuration for improving diagnostic accuracy through continued operation of the system; it also lacks a configuration for analyzing abnormal locations from the image of the inside of the sewage pipe while taking the type of pipe, the state of the pipe, and the like into consideration.
  • In view of the above, it is an object of the present invention to provide a system that supports abnormality diagnosis of sewage pipes using moving images or still images taken inside them and that can receive user feedback on the diagnosis results, as well as a client machine, a server machine, and related methods that can be used in the system.
  • To this end, the present invention provides a sewage pipe abnormality diagnosis support system that supports sewage pipe abnormality diagnosis performed using a photographed image, i.e., a moving image or still image photographed inside a sewage pipe. The system comprises:
  • a captured image capturing unit that imports the photographed image;
  • a developed image generation unit that generates a developed image of the inside of the sewage pipe from the photographed image;
  • an abnormal location determination unit that determines abnormal locations inside the sewage pipe from the developed image; a judgment result display unit that displays the judgment results for locations judged to be abnormal by the abnormal location determination unit; and a change registration reception unit that accepts registration of changes to the judgment results.
  • In one aspect, the abnormal location determination unit may include an image analysis unit, and the image analysis unit includes: a pipe type and investigation condition reception unit that receives input of the pipe type and investigation conditions; a binarization parameter setting unit that sets binarization parameters according to the pipe type and investigation conditions; an image conversion unit that converts the developed image into a binary image using the binarization parameters; and a binary image analysis unit that analyzes the binary image to determine abnormal locations.
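The binarization aspect above can be made concrete with a minimal sketch; the per-pipe-type threshold values, the function name, and the pipe-type keys are invented for illustration and are not taken from the specification.

```python
import numpy as np

# Hypothetical binarization parameters per pipe type; the actual values
# would be set by the binarization parameter setting unit and are not
# given in the specification.
BINARIZATION_PARAMS = {
    "hume": 100,      # concrete (Hume) pipe
    "ceramic": 120,
    "pvc": 140,
}

def binarize_developed_image(image, pipe_type):
    """Convert a grayscale developed image (2-D uint8 array) into a
    binary image using the threshold set for the given pipe type."""
    threshold = BINARIZATION_PARAMS[pipe_type]
    return (image > threshold).astype(np.uint8)

# A tiny synthetic developed image; dark pixels might indicate cracks.
img = np.array([[50, 200], [130, 90]], dtype=np.uint8)
binary = binarize_developed_image(img, "ceramic")
```

The binary image would then be passed to the binary image analysis unit for abnormal location detection.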
  • Alternatively, the abnormal location determination unit may include a diagnostic model determination unit. The diagnostic model determination unit includes: a teacher data storage unit that stores developed images of the inside of sewage pipes and the abnormal locations in those images;
  • a diagnostic model generation unit that, using the stored developed images and their abnormal locations as teacher data, generates by machine learning a diagnostic model whose input is a developed image and whose output is the abnormal locations in that image;
  • and an abnormal location model determination unit that uses the diagnostic model generated by the diagnostic model generation unit to determine abnormal locations inside the sewage pipe from the developed image generated by the developed image generation unit.
  • The diagnostic model determination unit may further include a developed image classification unit that classifies developed images by clustering prior to the determination of abnormal locations by the abnormal location model determination unit.
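The clustering step could be sketched as follows, under the assumption that each developed image is first summarized by a small feature vector (here, mean brightness and edge density; both features are illustrative), using a minimal two-cluster k-means:

```python
import numpy as np

def cluster_developed_images(features, iters=10):
    """Minimal 2-cluster k-means over per-image feature vectors, used to
    group developed images before abnormal location determination.
    One center starts at the first image and the other at the image
    farthest from it, so the two initial centers are well separated."""
    feats = np.asarray(features, dtype=float)
    c0 = feats[0]
    c1 = feats[np.argmax(np.linalg.norm(feats - c0, axis=1))]
    centers = np.stack([c0, c1])
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(iters):
        # Assign each image to its nearest center, then move the centers.
        dists = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return labels

# Hypothetical per-image features: (mean brightness, edge density).
feats = [[0.10, 0.20], [0.12, 0.19], [0.80, 0.90], [0.82, 0.88]]
labels = cluster_developed_images(feats)
```

A production system would likely cluster on richer features or learned embeddings; this only illustrates the grouping contract.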
  • The teacher data storage unit may further store the pipe type corresponding to each stored developed image. In that case, the system may further comprise: a pipe type estimation model generation unit that, using the stored developed images and their corresponding pipe types as teacher data, generates by machine learning a pipe type estimation model whose input is a developed image and whose output is the corresponding pipe type;
  • and a pipe type estimation unit that uses the pipe type estimation model to estimate the pipe type corresponding to the developed image generated by the developed image generation unit.
  • The diagnostic model generation unit may update the diagnostic model by machine learning based on the changes to the determination results registered via the change registration reception unit.
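The update loop described here, folding registered changes back into the teacher data before retraining, can be sketched as follows; the data schema (developed-image ids mapped to abnormal-location labels) is an illustrative assumption, not one given in the specification.

```python
def apply_change_registrations(teacher_data, change_registrations):
    """Fold user-registered corrections into the teacher data so that
    the diagnostic model can be retrained on the confirmed results.
    teacher_data: dict mapping image id -> abnormal-location label.
    change_registrations: iterable of (image_id, corrected_label)."""
    updated = dict(teacher_data)
    for image_id, corrected_label in change_registrations:
        updated[image_id] = corrected_label
    return updated

# Hypothetical teacher data: developed-image id -> abnormal-location label.
teacher = {"img_001": "crack", "img_002": "normal"}
# One registered change: the user corrected img_002 to "breakage".
changes = [("img_002", "breakage")]
new_teacher = apply_change_registrations(teacher, changes)
```

Retraining on `new_teacher` is what lets user feedback improve diagnostic accuracy over continued operation.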
  • The photographed image capturing unit, the developed image generation unit, the determination result display unit, and the change registration reception unit may be embodied by a client machine.
  • The teacher data storage unit and the diagnostic model generation unit may be embodied by a server machine, and the abnormal location model determination unit may be embodied by a client machine.
  • The above system may further include a form output unit that outputs the abnormality diagnosis results for the sewage pipe as a form, based on the determination results by the abnormal location determination unit and the changes to those results registered via the change registration reception unit.
  • In another aspect, the present invention provides a client machine for a sewage pipe abnormality diagnosis support system that supports sewage pipe abnormality diagnosis performed using a photographed image, i.e., a moving image or still image photographed inside a sewage pipe. The client machine comprises: a captured image capturing unit that imports the photographed image; a developed image generation unit that generates a developed image of the inside of the sewage pipe from the photographed image; a client-side abnormal location determination unit that determines abnormal locations inside the sewage pipe from the developed image; a determination result display unit that displays the determination results for locations determined to be abnormal by the client-side abnormal location determination unit; and a change registration reception unit that accepts registration of changes to the determination results.
  • The client-side abnormal location determination unit may include an image analysis unit comprising: a pipe type and investigation condition reception unit that receives input of the pipe type and investigation conditions; a binarization parameter setting unit that sets binarization parameters according to the pipe type and investigation conditions; an image conversion unit that converts the developed image into a binary image using the binarization parameters; and a binary image analysis unit that analyzes the binary image to determine abnormal locations.
  • Alternatively, the client-side abnormal location determination unit may include a client-side diagnostic model determination unit comprising: a diagnostic model reception unit that receives a diagnostic model generated by machine learning, using developed images and the abnormal locations in them as teacher data, whose input is a developed image and whose output is the abnormal locations in that image; and an abnormal location model determination unit that uses the received diagnostic model to determine abnormal locations inside the sewage pipe from the developed image.
  • In another aspect, the present invention provides a server machine for a sewage pipe abnormality diagnosis support system that supports sewage pipe abnormality diagnosis performed using captured images, i.e., moving images or still images captured inside a sewage pipe.
  • The server machine comprises: a teacher data storage unit that stores developed images of the inside of sewage pipes and the abnormal locations in those images; a diagnostic model generation unit that, using the stored developed images and their abnormal locations as teacher data, generates by machine learning a diagnostic model whose input is a developed image and whose output is the abnormal locations in that image;
  • and a diagnostic model transmission unit that transmits the diagnostic model generated by the diagnostic model generation unit.
  • The above server machine may be embodied as a cloud server.
  • In another aspect, the present invention provides a method performed by a sewage pipe abnormality diagnosis support system that supports sewage pipe abnormality diagnosis performed using a photographed image, i.e., a moving image or still image photographed inside a sewage pipe. The method comprises: a captured image capturing unit importing the photographed image; a developed image generation unit generating a developed image of the inside of the sewage pipe from the photographed image; an abnormal location determination unit determining abnormal locations inside the sewage pipe from the developed image;
  • a determination result display unit displaying the locations determined to be abnormal by the abnormal location determination unit as the determination results; and a change registration reception unit accepting registration of changes to the determination results.
  • In another aspect, the present invention provides a method performed by a client machine for a sewage pipe abnormality diagnosis support system that supports sewage pipe abnormality diagnosis performed using a photographed image, i.e., a moving image or still image photographed inside a sewage pipe. The method comprises:
  • a captured image capturing unit importing a photographed image of the inside of the sewage pipe;
  • a developed image generation unit generating a developed image of the inside of the sewage pipe from the photographed image;
  • a client-side abnormal location determination unit determining abnormal locations inside the sewage pipe from the developed image;
  • a determination result display unit displaying the locations determined to be abnormal as the determination results;
  • and a change registration reception unit accepting registration of changes to the determination results.
  • In another aspect, the present invention provides a method performed by a server machine for a sewage pipe abnormality diagnosis support system that supports sewage pipe abnormality diagnosis performed using a photographed image, i.e., a moving image or still image photographed inside a sewage pipe. The method comprises:
  • a teacher data storage unit storing developed images of the inside of sewage pipes and the abnormal locations in those images;
  • a diagnostic model generation unit generating, by machine learning with the stored developed images and their abnormal locations as teacher data, a diagnostic model whose input is a developed image and whose output is the abnormal locations in that image;
  • and a diagnostic model transmission unit transmitting the generated diagnostic model.
  • According to the present invention, the presence or absence of abnormal locations is determined from a developed view generated from a captured image, i.e., a moving image or still image of the inside of a sewage pipe, and registration of changes to the determination results is accepted, which makes it possible to improve diagnostic accuracy. In one example, the type and state of the pipe are also determined from the developed view generated from the photographed image, and abnormal locations are determined with higher accuracy by taking the determined pipe type and pipe state into consideration.
  • FIG. 1 is a conceptual diagram showing the configuration of a sewage pipe abnormality diagnosis support system according to an embodiment of the present invention.
  • FIG. 2 is a diagram showing an example of a sewage pipeline.
  • FIG. 3 is a block diagram showing the configuration of a client machine included in the sewage pipe abnormality diagnosis support system.
  • FIG. 4 is a block diagram showing the configuration of an image analysis unit of the client machine.
  • FIG. 5 is a block diagram showing the configuration of a client-side diagnostic model determination unit.
  • FIG. 6 is a block diagram showing the configuration of a client-side pipe type estimation unit.
  • FIG. 7 is a block diagram showing the configuration of a server machine, such as a cloud server, included in the sewage pipe abnormality diagnosis support system.
  • FIG. 8 is a flowchart showing an operation flow of the sewage pipe abnormality diagnosis support system.
  • A diagram for explaining the principle of developed view creation in the flow of FIG. 8 (generation of frame-unit image data).
  • A diagram for explaining the principle of developed view creation in the flow of FIG. 8 (distance calculation).
  • A diagram for explaining the principle of developed view creation in the flow of FIG. 8 (an example of a distance meter).
  • A diagram for explaining the principle of developed view creation in the flow of FIG. 8 (calculation of a cutting angle).
  • A diagram for explaining the principle of developed view creation in the flow of FIG. 8.
  • A diagram for explaining the principle of developed view creation in the flow of FIG. 8 (principle of normal image conversion).
  • A diagram for explaining the principle of developed view creation in the flow of FIG. 8 (creation of a developed view by normal image conversion).
  • A diagram for explaining the principle of developed view creation in the flow of FIG. 8 (luminance leveling of the developed image).
  • A diagram for explaining the principle of developed view creation in the flow of FIG. 8 (connection processing of developed views).
  • A diagram schematically showing an example of a developed view obtained by developed view creation.
  • A flowchart showing the operation flow of image analysis in the flow of FIG. 8.
  • A flowchart showing the operation flow of analysis by a diagnostic model in the flow of FIG. 8.
  • A flowchart showing the operation flow of diagnostic model generation in the flow of FIG. 8.
  • A diagram for explaining the concept of a neural network as an example of a learning algorithm.
  • A diagram showing an example of an abnormality diagnosis result browsing screen in the output of abnormal locations in the flow of FIG. 8.
  • FIG. 26 is a diagram showing an example of a form (main investigation record table) in the form output in the flow of FIG. 8.
  • FIGS. 27 to 32 are enlarged views of parts of the main investigation record table of FIG. 26, some showing developed images on other sheets.
  • A diagram for explaining the principle of image analysis in the example.
  • A table showing the verification results of image analysis.
  • An example of an abnormal location to be verified in the example (machine learning).
  • A diagram showing the diagnostic model generation flow in the example.
  • A graph showing the abnormal location determination results by machine learning.
  • The table
  • The sewage pipe abnormality diagnosis support system of the present invention is not limited to a server-client system; it may be configured on a single computer, or its functions may be distributed across any number of computers. The system, client machine, and server of the present invention also need not have all the functions described in the following embodiments. For example, it is not essential to determine investigation conditions, such as the type of pipe and the presence or absence of cleaning, using a machine learning model; a user or the like may instead input them directly into the system.
  • The machine learning algorithm used to generate the machine learning models is not limited to the random forest and neural network described below; any algorithm may be used.
  • For example, an image diagnosis model based on deep learning can be used for estimating investigation conditions, such as the pipe type and the presence or absence of cleaning, and for estimating abnormal locations.
  • Each functional unit described later may be implemented by a single piece of hardware or by two or more pieces of hardware, and two or more functional units may be implemented by a single piece of hardware.
  • "Water" as used herein may be pure water, sewage (wastewater), or water containing any impurities, such as rainwater.
  • FIG. 1 is a conceptual diagram showing the configuration of the sewage pipe abnormality diagnosis support system according to an embodiment of the present invention, and FIG. 2 is a diagram showing an example of a sewage pipe.
  • In this system, an unmanned aerial vehicle (drone) equipped with a camera is flown inside the sewage pipe 1000 shown in FIG. 2, or a camera car or the like is driven through it.
  • The captured images (moving images or still images) of the inside of the sewage pipe 1000 obtained in this way are imported into the client machine, and abnormal location diagnosis is performed in the following flow.
  • (1) Create a developed view from the photographed images.
  • (2) Analyze the pipe type from the developed view (developed image) using the pipe type analysis model.
  • (3) Retrain the pipe type analysis model using the results of confirmed registrations, such as changes, made on the client side to the analysis results.
  • (4) Analyze abnormal locations, such as breakage and cracks, from the developed view (developed image) using the abnormal location analysis model.
  • (5) Register on the client side the confirmed abnormal location results, including any changes, and retrain the abnormal location analysis model using the confirmed results.
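Step (1), generating a developed (unrolled) view of the pipe interior, can be illustrated with a deliberately simplified polar-to-rectangular resampling; the fixed image center, nearest-neighbor sampling, and parameter values here are assumptions for the sketch, not the specification's actual creation principles (those are explained in the figures described above).

```python
import numpy as np

def unroll_pipe_image(frame, n_angles=360, n_radii=50, r_min=10):
    """Resample a square forward-view frame (2-D array) into a developed
    image: rows correspond to sampling radii (depth into the pipe) and
    columns to circumferential angles. Nearest-neighbor sampling."""
    h, w = frame.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx) - 1
    radii = np.linspace(r_min, r_max, n_radii)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    developed = np.zeros((n_radii, n_angles), dtype=float)
    for i, r in enumerate(radii):
        for j, theta in enumerate(angles):
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            developed[i, j] = frame[y, x]
    return developed

# Synthetic frame whose pixel value equals its distance from the center,
# so every row of the developed image should be roughly constant.
frame = np.fromfunction(
    lambda y, x: np.sqrt((y - 50.0) ** 2 + (x - 50.0) ** 2), (101, 101))
developed = unroll_pipe_image(frame)
```

A real implementation would also use the travel distance and cutting angle described in the figures to stitch consecutive frames into one continuous developed view.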
  • FIG. 3 is a block diagram showing the configuration of the client machine included in the sewage pipe abnormality diagnosis support system, and FIG. 4 is a block diagram showing the configuration of the image analysis unit of the client machine.
  • FIG. 5 is a block diagram showing the configuration of the client-side diagnostic model determination unit, and FIG. 6 is a block diagram showing the configuration of the client-side pipe type estimation unit.
  • The client machine 1 includes a control unit 2, a storage unit 3, an input/output unit 4, and a communication unit 5.
  • The control unit 2 includes a processor 6, such as a CPU (Central Processing Unit), and a temporary memory 7, such as a RAM (Random Access Memory).
  • The processor 6 executes the image analysis program 18 stored in the storage unit 3, so that the control unit 2 functions as the image analysis unit 10.
  • The processor 6 executes the machine learning-related programs 19 stored in the storage unit 3, so that the control unit 2 functions as the client-side diagnostic model determination unit 11 and the client-side pipe type estimation unit 12.
  • When the processor 6 executes the diagnostic model reception program, the control unit 2 functions as the diagnostic model reception unit 32 using the communication unit 5; when the processor 6 executes the abnormal location model determination program, the control unit 2 functions as the developed image classification unit 33 and the abnormal location model determination unit 34; and when the processor 6 executes the pipe type estimation model reception program, the control unit 2 functions as the pipe type estimation model reception unit 35 using the communication unit 5.
  • The control unit 2 functions as the pipe type model estimation unit 36 when the processor 6 executes the pipe type estimation program.
  • The processor 6 executes the form output program 22 stored in the storage unit 3, so that the control unit 2 functions as the form output unit 15.
  • The processor 6 executes the various control and display programs 23 (including the operating system, various application software, and driver software for various devices), so that the control unit 2 functions as the determination result display unit 13, the change registration reception unit 14, and the various control and display unit 16. Any other program may be stored in the storage unit 3, and by having the processor 6 execute such a program, the control unit 2 can function as any functional unit.
  • The developed image generation unit 8 is a functional unit that imports a captured image of the inside of the sewage pipe (stored in the storage unit 3 as the captured image data 24) and generates a developed image from it, for example based on the principles shown in the figures described above.
  • The developed image generation unit 8 stores the data of the generated developed image in the storage unit 3 as the developed image data 25.
  • The image analysis unit 10 includes the pipe type and investigation condition reception unit 28, the binarization parameter setting unit 29, the image conversion unit 30, and the binary image analysis unit 31, and is a functional unit that analyzes the developed image to identify abnormal locations such as damage and cracks.
  • The image analysis unit 10 stores data indicating the abnormal locations identified by the analysis (this may be data indicating the position and type of each abnormal location, or data of the developed image portion around each identified abnormal location) in the storage unit 3 as the abnormal location and pipe type data 26.
  • The client-side diagnostic model determination unit 11 is a functional unit comprising the diagnostic model reception unit 32, the developed image classification unit 33, and the abnormal location model determination unit 34.
  • The diagnostic model reception unit 32 receives a machine-learned diagnostic model from the server machine 37 (described later) and stores it in the storage unit 3 as the machine-learned diagnostic model 20, and the developed image classification unit 33 classifies developed images as necessary.
  • The abnormal location model determination unit 34 analyzes the developed image using the machine-learned diagnostic model (any machine learning algorithm may be used: image recognition by deep learning, or a model built by extracting specific feature amounts from the developed image and applying a random forest or neural network) to identify abnormal locations such as damage and cracks.
  • The client-side diagnostic model determination unit 11 stores data indicating the abnormal locations identified by the analysis (data indicating the position and type of each abnormal location, or data of the developed image portion around each identified abnormal location) in the storage unit 3 as the abnormal location and pipe type data 26.
  • The client-side pipe type estimation unit 12 is a functional unit including the pipe type estimation model reception unit 35 and the pipe type model estimation unit 36.
  • The pipe type estimation model reception unit 35 receives a pipe type estimation model and stores it in the storage unit 3 as the machine-learned pipe type estimation model 21, and the pipe type model estimation unit 36 analyzes the developed image using the machine-learned pipe type estimation model (again, any machine learning algorithm may be used: image recognition by deep learning, or a model built by extracting specific feature amounts from the developed image and applying the random forest or neural network described later) to estimate the pipe type from the developed image.
  • The client-side pipe type estimation unit 12 stores data indicating the pipe type identified by the analysis (and, if necessary, investigation conditions such as the presence or absence of cleaning) in the storage unit 3 as the abnormal location and pipe type data 26.
  • The determination result display unit 13 is a functional unit that displays the locations determined to be abnormal by the client-side abnormal location determination unit as the determination results.
  • For example, it displays a diagnosis result browsing screen having the layout shown in the figure described above.
  • The change registration reception unit 14 is a functional unit that accepts, via the keyboard and mouse of the input/output unit 4, registration of changes to the abnormal location determination results produced by the client-side abnormal location determination unit 9.
  • The user of the client machine 1 can operate the keyboard and mouse while checking the diagnosis result browsing screen to input and register changes to the determination results, and the change registration reception unit 14 stores the registered content in the storage unit 3 as the change registration data 27 (only the changed content may be stored, or it may additionally be stored that the user has approved and "confirmed" the abnormal location determination results of the client-side abnormal location determination unit).
  • The form output unit 15 is a functional unit that outputs the sewage pipe abnormality diagnosis results as a form, based on the determination results by the client-side abnormal location determination unit and the changes to those results accepted by the change registration reception unit 14. Examples of forms are shown in FIGS. 26 to 32, described later.
  • The storage unit 3 is a storage (recording) device such as a hard disk drive or an SSD (Solid State Drive). As described above, it stores the programs executed by the processor 6 to cause the control unit 2 to function as the various functional units: the developed image generation program 17, the image analysis program 18, the machine learning-related programs 19 (including the diagnostic model reception program, the pipe type estimation model reception program, the abnormal location model determination program, and the pipe type estimation program), the form output program 22, and the various control and display programs 23.
  • The storage unit 3 also stores the machine-learned diagnostic model 20 for determining abnormal locations from a developed image by image recognition or the like, generated by the diagnostic model generation unit 44 of the server machine 37, and the machine-learned pipe type estimation model 21 for estimating the pipe type from a developed image, generated by the pipe type estimation model generation unit 46 of the server machine 37.
  • The storage unit 3 further stores the photographed image data 24, the developed image data 25, the abnormal location and pipe type data 26, and the change registration data 27, as shown in FIG. 3.
  • FIG. 7 is a block diagram showing the configuration of a server machine such as a cloud server included in the sewer pipe abnormality diagnosis support system.
  • The server machine 37 includes a control unit 38, a storage unit 39, an input/output unit 40, and a communication unit 41.
  • The control unit 38 includes a processor 42, such as a CPU, and a temporary memory 43, such as a RAM.
  • The processor 42 executes the diagnostic model generation program in the machine learning-related programs 51 stored in the storage unit 39, so that the control unit 38 (more precisely, the processor 42 of the control unit 38; the same applies hereinafter) functions as the diagnostic model generation unit 44.
  • The processor 42 executes the diagnostic model transmission program, so that the control unit 38 functions as the diagnostic model transmission unit 45 using the communication unit 41; the processor 42 executes the pipe type estimation model generation program, so that the control unit 38 functions as the pipe type estimation model generation unit 46; and the processor 42 executes the pipe type estimation model transmission program, so that the control unit 38 functions as the pipe type estimation model transmission unit 47 using the communication unit 41.
  • By executing the corresponding evaluation programs, the control unit 38 also functions as the diagnostic model evaluation unit 48 and the pipe type estimation model evaluation unit 49.
  • The processor 42 executes the various control and display programs 54 (including the operating system, various application software, and driver software for various devices), so that the control unit 38 functions as the various control and display unit 50.
  • Any other program may be stored in the storage unit 39, and the control unit 38 can function as any functional unit by having the processor 42 execute such a program.
  • The diagnostic model generation unit 44 is a functional unit that uses machine learning to generate a diagnostic model for estimating abnormal locations from a developed image. Any machine learning algorithm may be used; in one example, an image recognition learning algorithm based on deep learning is used, which takes developed image data as input and outputs information on the abnormal locations in that data (position, type, image data around the abnormal location, and the like). In machine learning using deep learning, pairs of developed image data (input) and abnormal location information (output) are prepared as teacher data (learning data), and the image recognition model is trained on them; in one example, the learning data are prepared as a large number of developed images labeled with abnormal location information.
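The supervised training loop described above can be illustrated with a deliberately small stand-in: a logistic-regression classifier over hand-picked patch features (mean brightness, edge response), labeled normal/abnormal. The specification contemplates deep-learning image recognition; the features, values, and model here are only an illustrative sketch of the input/label/training contract.

```python
import numpy as np

def train_diagnostic_model(X, y, lr=0.5, epochs=2000):
    """Fit a logistic-regression 'diagnostic model' by gradient descent.
    X: (n, d) matrix of patch features; y: 0/1 labels (1 = abnormal)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(abnormal)
        grad = p - y                            # log-loss gradient factor
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict_abnormal(model, X):
    """Return 1 where the model estimates P(abnormal) > 0.5, else 0."""
    w, b = model
    scores = np.asarray(X, dtype=float) @ w + b
    return (1.0 / (1.0 + np.exp(-scores)) > 0.5).astype(int)

# Toy teacher data: [mean brightness, edge response] per developed-image
# patch, with 1 marking patches containing an abnormal location.
X_train = [[0.2, 0.1], [0.3, 0.2], [0.7, 0.9], [0.8, 0.8]]
y_train = [0, 0, 1, 1]
model = train_diagnostic_model(X_train, y_train)
```

The same loop shape applies when the model is a deep network and the inputs are labeled developed images rather than feature vectors.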
  • the diagnostic model generation unit 44 stores the generated diagnostic model in the storage unit 39 as a machine-learned diagnostic model 52 .
  • the diagnostic model transmission unit 45 transmits the machine-learned diagnostic model generated by the diagnostic model generation unit 44 to the client machine 1 using the communication unit 41 .
  • the diagnostic model evaluation unit 48 evaluates the determination accuracy of the machine-learned diagnostic model 52 using the developed image data and the abnormal location data as the test data 56. It also evaluates the determination accuracy of the machine-learned diagnostic model 52 by comparing its diagnosis results with the client-side changes indicated by the change registration data 57.
  • the pipe type estimation model generation unit 46 is a functional unit that uses machine learning to generate a pipe type estimation model for estimating the pipe type from a developed image. Any machine learning algorithm may be used; in one example, a deep-learning image recognition algorithm is used that takes developed image data as input and outputs the type of sewage pipe shown in the image (Hume pipe, ceramic pipe, PVC pipe, etc.). In machine learning using deep learning, pairs of developed image data (input) and the corresponding sewage pipe type (output) are prepared as teacher data (learning data).
  • a pipe type estimation model can also be constructed as a model that estimates not only the pipe type but also the presence or absence of cleaning. In this case, pairs of developed image data (input) and "sewage pipe type (Hume pipe, ceramic pipe, PVC pipe, etc.) plus cleaning status" (output) are prepared as teacher data (learning data).
  • the pipe type estimation model generation unit 46 stores the generated pipe type estimation model in the storage unit 39 as a machine-learned pipe type estimation model 53 .
  • the pipe type estimation model transmission unit 47 transmits the machine-learned pipe type estimation model generated by the pipe type estimation model generation unit 46 to the client machine 1 using the communication unit 41 .
  • the pipe type estimation model evaluation unit 49 evaluates the determination accuracy of the machine-learned pipe type estimation model 53 using the developed image data and pipe type data (which may include information on survey conditions such as cleaning conditions) as the test data 56. It also evaluates the determination accuracy of the machine-learned pipe type estimation model 53 by comparing its estimation results with the client-side changes indicated by the change registration data 57.
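  At its core, the evaluation against the change registration data described above reduces to comparing model output with the corrected (change-registered) labels. A minimal sketch of such an accuracy check, with hypothetical label values, might look like this:

```python
def evaluate_accuracy(predictions, corrected_labels):
    """Fraction of model judgments that match the client-side corrected labels."""
    assert len(predictions) == len(corrected_labels)
    hits = sum(p == c for p, c in zip(predictions, corrected_labels))
    return hits / len(predictions)

# Hypothetical example: model output vs. labels after user change registration.
model_output  = ["crack", "damage", "normal", "crack"]
after_changes = ["crack", "crack", "normal", "crack"]
print(evaluate_accuracy(model_output, after_changes))  # → 0.75
```

  The actual evaluation units presumably track this per abnormality type and per pipe type as well; this sketch shows only the basic comparison.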
  • the storage unit 39 is a storage (recording) device such as a hard disk drive or an SSD. As described above, it stores the machine learning related programs 51 that the processor 42 executes to make the control unit 38 function as the various functional units (the diagnostic model generation program, pipe type estimation model generation program, diagnostic model transmission program, pipe type estimation model transmission program, diagnostic model evaluation program, and pipe type estimation model evaluation program), as well as the various control/display programs 54.
  • the storage unit 39 also stores, as a machine-learned model generated by the diagnostic model generation unit 44 of the server machine 37, a machine-learned diagnostic model 52 for determining an abnormal location from a developed image by image recognition or the like.
  • the storage unit 39 stores a machine-learned pipe type estimation model 53 for estimating the pipe type from the developed image as a machine-learned model generated by the pipe type estimation model generation unit 46 of the server machine 37 .
  • the storage unit 39 also stores teacher data 55, test data 56, and change registration data 57 as shown in FIG.
  • FIG. 8 is a flow chart showing the operation flow of the sewer pipe abnormality diagnosis support system.
  • In the following, the captured image is described as a moving image, but the sewer pipe abnormality diagnosis support system can be operated on the same principle with captured images that are still images (for example, a camera car equipped with a still image camera can intermittently capture still images while traveling inside the sewage pipe, so that image information similar to a moving image is obtained).
  • the user of the client machine 1 selects a photographed moving image to be read from the list of photographed moving image data via the keyboard, mouse, etc. of the input/output unit 4, while referring to the screen that the various control/display unit 16 displays on the display device of the input/output unit 4 (a browsing screen as shown in FIG. 25, which also serves as an interface for receiving change registrations from the user; the same applies to other input/output from the user below).
  • the various control/display unit 16 reads the selected moving image data among the moving image data 1 to N (N is an integer equal to or greater than 1) included in the image data 24 and displays it on the display device of the input/output unit 4 (step S101).
  • the user of the client machine 1 inputs an instruction to create a developed view while referring to the screen displayed on the display device of the input/output unit 4 by the various control/display unit 16, and the developed image creation unit 8 creates a developed view (developed image) from the photographed moving image data (step S102).
  • FIGS. 9 to 17 are diagrams for explaining the principle of developed view creation in the flow of FIG. 8.
  • the developed view creation in this embodiment is not limited to those shown in FIGS. 9 to 17, and any method may be used.
  • the developed image creation unit 8 first captures the moving image frame by frame using functions of OpenCV (an open source computer vision library), included in the developed image generation program 17, and generates image data for each frame (FIG. 9).
  • the developed image generation unit 8 calculates the distance (dist) advanced between two consecutive images (FIG. 10).
  • Since the captured moving image includes a distance meter display (in meters), (dist) is obtained as the difference between the distance meter readings of the two consecutive images.
  • the developed image generation unit 8 calculates the cropping angle (wdeg) of the image A from the obtained distance (dist) in the following procedure (FIG. 12).
  • (1) Calculate the distance per pixel at the rl point from the pixel circumference of rl and the actual circumference obtained from the actual pipe diameter (an input parameter). Here, "*" represents multiplication.
  • (2) Shorten rl in the direction of rs, and calculate rs whose total distance equals the input distance.
  • the developed image generation unit 8 cuts the image based on the cut-out angle in the following procedure (FIGS. 13 to 15).
  • (1) Recalculate rs from the cropping angle wdeg
  • (2) Set the developed view height to the circumference pixel of rl + 1
  • (3) Calculate the developed view width by the following formula
  • (4) Create a developed image by normal image conversion in units of the circumference angle of rl. (*param: variable parameter, set by separate tuning.) The development parameters are calculated from the angle parameters above.
  • a reference pixel is obtained in units of development width.
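  The cropping geometry above can be illustrated with a small sketch. The exact formulas are governed by separately tuned parameters in the actual program, so the functions below (names, the radius input, the rounding) are an assumed reconstruction from the stated relationships: the pixel circumference of rl versus the actual pipe circumference, and a strip height of the rl circumference in pixels plus one.

```python
import math

def distance_per_pixel(rl_radius_px, pipe_diameter_m):
    """Real-world distance represented by one pixel on the rl circle.

    rl_radius_px: radius of the rl circle in the frame, in pixels (assumed input).
    pipe_diameter_m: actual pipe diameter in meters (the input parameter).
    """
    circumference_px = 2 * math.pi * rl_radius_px  # pixel circumference of rl
    circumference_m = math.pi * pipe_diameter_m    # actual circumference of the pipe
    return circumference_m / circumference_px

def developed_strip_size(rl_radius_px, dist_m, pipe_diameter_m):
    """Height and width, in pixels, of the strip unrolled from one frame."""
    height_px = int(2 * math.pi * rl_radius_px) + 1  # circumference pixels of rl + 1
    width_px = int(round(dist_m / distance_per_pixel(rl_radius_px, pipe_diameter_m)))
    return height_px, width_px
```

  For example, with an rl radius of 100 pixels and a 0.2 m pipe, each pixel on rl covers 1 mm of pipe wall, so a 5 cm advance yields a 50-pixel-wide strip.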
  • the developed image generation unit 8 levels the luminance of the developed image according to the following procedure (FIG. 16; leveling is performed because the lighting conditions may vary): calculate the average luminance (lY) of the ceiling, and if the difference between the average luminance (rY) of the target row and the luminance (Y) of the pixel being checked is ZZ (a threshold) or less, correct that pixel's luminance.
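  As a rough illustration of this leveling step, the sketch below shifts each pixel that lies within the threshold ZZ of its row average toward the ceiling average. The exact correction rule and the definition of the ceiling region are not fully specified in the text, so both are assumptions here.

```python
import numpy as np

def level_luminance(img, ceiling_rows=10, zz=30):
    """Level the luminance of a developed image against the ceiling average.

    img: 2-D array of luminance (Y) values.
    ceiling_rows: number of top rows treated as the 'ceiling' (an assumption).
    zz: the threshold ZZ; pixels within zz of their row average are corrected.
    """
    out = img.astype(float)
    ly = out[:ceiling_rows].mean()            # average ceiling luminance (lY)
    for r in range(out.shape[0]):
        ry = out[r].mean()                    # average luminance of the row (rY)
        mask = np.abs(out[r] - ry) <= zz      # pixels close to their row average
        out[r, mask] += ly - ry               # shift them to the ceiling level
    return np.clip(out, 0, 255)
```

  Leaving far-off pixels uncorrected (the mask) avoids washing out dark features such as cracks while removing row-to-row lighting drift.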
  • the developed image generation unit 8 performs a process of joining the developed views (FIG. 17).
  • FIG. 18 is a diagram schematically showing an example of a development view obtained by creating a development view.
  • the inner surface of a cylindrical sewer pipe is represented as a rectangular plan view as shown in FIG. 18.
  • Abnormal locations 60 in the sewage pipe inner wall 58 are automatically detected by image analysis or by judgment with a diagnostic model, as described later, and the detection results can be changed and registered on the client side.
  • the developed image generated by the developed image generation unit 8 is displayed on the display device of the input/output unit 4 .
  • the user of the client machine 1 selects either (A) image analysis or (B) analysis using a diagnostic model as an abnormality analysis method for the displayed expanded image (step S103).
  • When (A) image analysis is selected, the image analysis unit 10 performs image analysis of the abnormal locations using the developed image generated in step S102, according to the procedure shown in FIG. 19 (step S104).
  • FIG. 19 is a flow chart showing the operation flow of image analysis in the flow of FIG. 8. Image analysis can be implemented using OpenCV functions (included in the image analysis program 18) in the same manner as developed image generation.
  • the user manually sets the pipe type of the developed image and inputs it from the input/output unit 4, and the pipe type and investigation condition reception unit 28 receives the input (step S201).
  • The user sets and inputs, for example, a Hume pipe, a ceramic pipe, or a PVC (vinyl chloride) pipe.
  • the user manually sets survey conditions for the developed image and inputs them from the input/output unit 4, and the pipe type and survey condition reception unit 28 receives the input (step S202).
  • the presence or absence of cleaning is set and input.
  • the binarization parameter setting unit 29 generates parameters for image conversion (binarization) from the pipe type and investigation conditions (step S203). Specifically, the binarization parameter setting unit 29 sets a predetermined binarization threshold for each pipe type and investigation condition (sets the OpenCV threshold parameter). Subsequently, the image conversion unit 30 performs binarization based on the generated parameters and converts the developed image into a black and white image (binary image) (step S204), and the binary image analysis unit 31 divides the binarized image into pixel units (step S205). Next, the binary image analysis unit 31 scans the binary image pixel by pixel, marks the portions containing black, and extracts them as black clusters (feature points) (step S206).
  • the detected feature points are analyzed to determine an abnormal point (step S207).
  • the binary image analysis unit 31 uses an algorithm that determines the characteristics of each candidate location (damage, crack, etc.) to judge whether it is an abnormal location and, if so, the type of abnormality.
  • As the algorithm here, any algorithm can be used, such as a known object recognition algorithm provided by OpenCV. As a specific judgment standard, for example, a location is judged to be abnormal when its number of black pixels exceeds a predetermined threshold, and it is classified as damage, crack, etc. according to the shape of the black cluster; accumulated know-how (patterns) can be programmed as such judgment conditions.
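  Steps S203 to S207 can be sketched with plain numpy as follows. The real implementation uses OpenCV functions and per-pipe-type tuned thresholds; the global threshold, the 4-connected flood fill, and the toy "crack vs. damage" shape rule below are illustrative assumptions only.

```python
import numpy as np

def binarize(gray, threshold=128):
    """Convert a grayscale developed image to binary (True = black/feature)."""
    return gray < threshold

def black_clusters(binary):
    """Extract connected black clusters (feature points) with a 4-connected flood fill."""
    h, w = binary.shape
    seen = np.zeros_like(binary, dtype=bool)
    clusters = []
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                stack, pixels = [(y, x)], []
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                clusters.append(pixels)
    return clusters

def judge(cluster, min_pixels=5):
    """Toy judgment rule: enough black pixels -> abnormal; elongated shape -> 'crack'."""
    if len(cluster) < min_pixels:
        return None                      # not judged abnormal
    ys, xs = zip(*cluster)
    height = max(ys) - min(ys) + 1
    width = max(xs) - min(xs) + 1
    return "crack" if max(height, width) >= 3 * min(height, width) else "damage"
```

  A thin dark streak would thus be labeled "crack" and a compact dark blob "damage"; the real know-how lies in the accumulated judgment conditions rather than this single aspect-ratio test.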
  • When (B) analysis using a diagnostic model is selected in step S103, the client-side diagnostic model determination unit 11 analyzes the abnormal locations using the developed image generated in step S102, in the procedure shown in the following figures (step S105).
  • the diagnostic model receiving unit 32 downloads the latest diagnostic model from the cloud server 37 and stores it in the storage unit 3 as the machine-learned diagnostic model 20 (step S301).
  • the abnormal location model determination unit 34 uses the downloaded machine-learned diagnostic model 20 to determine the abnormal location (step S302).
  • In one example, the machine-learned diagnostic model is a diagnostic model that has been machine-learned using a deep-learning image recognition algorithm that takes developed image data as input and outputs information on the abnormal locations in that data (position, type, image data around the abnormal location, etc.).
  • the abnormal location model determination unit 34 inputs the developed image data to the machine-learned diagnostic model 20 and obtains information on the abnormal locations (position, type, image data around the abnormal location, etc.) as output, thereby determining the abnormal locations.
  • the accuracy of the determination can be improved by clustering the developed image data in advance with a method such as the k-means method and then performing the model determination of abnormal locations.
  • Specifically, the developed image classification unit 33 classifies the developed image data by executing the abnormal location model determination program, and the abnormal location model determination unit 34 then determines the abnormal locations using the machine-learned diagnostic model 20 with an arbitrary algorithm.
  • In one example, the developed image classification unit 33 clusters the developed image data by the k-means method, and the abnormal location model determination unit 34 then extracts a plurality of feature values from the developed image data according to a feature extraction method determined by the clustering result.
  • Features are extracted using parameters such as the background color, the maximum brightness difference, and the range regarded as background.
  • The extracted feature values are input into a diagnostic model machine-learned by an algorithm such as a random forest or a neural network, and information on the abnormal locations (position, type, image data around the abnormal location, etc.) is obtained as output.
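  The cluster-then-extract-features flow might be sketched as below, with a minimal k-means written directly in numpy; the two features shown (mean brightness and maximum brightness difference) are illustrative stand-ins for the parameters listed above, not the patent's actual feature set.

```python
import numpy as np

def kmeans(data, k, iters=20, seed=0):
    """Minimal k-means: returns (centroids, labels) for the rows of `data`."""
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each sample to its nearest centroid
        d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean(axis=0)
    return centroids, labels

def image_features(img):
    """Illustrative per-image features: mean brightness and maximum brightness difference."""
    img = np.asarray(img, dtype=float)
    return np.array([img.mean(), img.max() - img.min()])
```

  In the described system, the cluster label would select which feature extraction parameters to apply before the feature vector is passed to the per-cluster classifier.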
  • the client-side pipe type estimating unit 12 may estimate the pipe type, the cleaning condition, and other investigation conditions, and determine the location of the abnormality.
  • the pipe type estimation model receiving unit 35 downloads the latest pipe type estimation model from the cloud server 37 and stores it in the storage unit 3 as the machine-learned pipe type estimation model 21.
  • the pipe type model estimating unit 36 uses the downloaded machine-learned pipe type estimating model 21 to estimate the pipe type and investigation conditions such as cleaning conditions.
  • the diagnostic model receiving unit 32 then determines the abnormal locations using the machine-learned diagnostic model 20 corresponding to the estimation result of the pipe type model estimation unit 36 (it is assumed that, in step S301, the diagnostic model receiving unit 32 has received machine-learned diagnostic models 20 for the various pipe types and investigation conditions such as cleaning conditions, and has stored them in the storage unit 3).
  • In step S106, the determination result display unit 13 outputs the information (diagnosis results) on the abnormal locations determined in step S104 or S105 to the display device of the input/output unit 4 as a system screen.
  • the diagnosis result browsing screen shown in FIG. 25 is displayed on the display device of the input/output unit 4 by the determination result display unit 13.
  • A list of photographed moving image data files is displayed at the upper left; the user selects an arbitrary photographed moving image and instructs the above-described developed view generation and abnormal location diagnosis by pressing buttons on the browsing screen.
  • A developed image is displayed at the bottom of the browsing screen, and each part determined to be abnormal by image analysis or diagnostic model analysis is surrounded by a colored frame (displayed in grayscale in the drawing).
  • The system can also be configured to output not only the abnormal location information but also the positions of connection parts (joints), etc. In that case, the diagnostic model can be machine-learned by deep learning using such information as teacher data.
  • The user can perform change registration (editing) on the determination results of the abnormal locations.
  • The user viewing the browsing screen of FIG. 25 edits the determination results of the abnormal locations by operations such as clicking and drag and drop, selecting information such as the abnormality type (damage, crack, etc.) according to the guidance on the screen.
  • the user may be able to similarly edit investigation conditions such as pipe type and cleaning condition.
  • When the user finishes editing and wants to confirm the abnormal locations, the user presses the completion button on the browsing screen; the changed content is stored in the storage unit 3 as the change registration data 27, and the abnormal locations are confirmed (step S107). Even when there are no changes, the output abnormal locations are checked and confirmed by the user's operation. Subsequently, the image data of the abnormal locations and the abnormal location data after the user's changes (which may also include the changed investigation conditions such as pipe type and cleaning condition) are transmitted from the communication unit 5 of the client machine 1 to the server machine 37.
  • the communication unit 41 of the server machine 37 receives this, and the storage unit 39 stores the changed abnormal location data as change registration data 57 together with the abnormal location image data.
  • the diagnostic model generation unit 44 and the pipe type estimation model generation unit 46 of the server machine 37 again perform machine learning on the abnormal location diagnosis model and the pipe type estimation model, respectively, using as teacher data the abnormal location data after the user's changes, the changed investigation conditions such as pipe type and cleaning condition, and the corresponding developed image data (assumed to have been received from the client machine 1), thereby generating machine-learned models; the abnormal location diagnosis model and the pipe type estimation model can thus be improved (step S109).
  • the diagnostic model evaluation unit 48 and the pipe type estimation model evaluation unit 49 verify the accuracy of the generated machine-learned abnormal point diagnosis model and pipe type estimation model by using the test data 56 (step S110).
  • The diagnostic model transmission unit 45 and the pipe type estimation model transmission unit 47 transmit the abnormal location diagnosis model and the pipe type estimation model with improved judgment accuracy to the client machine 1, so that they can be used in the next analysis by a diagnostic model.
  • In this way, teacher data on abnormal locations is accumulated and abnormality analysis is performed with the machine-learned diagnostic model and pipe type estimation model, so that the accuracy of abnormality determination and pipe type estimation improves as operation of the system continues.
  • The abnormal location diagnosis model and the pipe type estimation model may be models machine-learned by deep learning.
  • any machine learning algorithm may be used to construct an abnormal location diagnosis model and a pipe type estimation model.
  • FIG. 21 shows an example of the operational flow of diagnostic model generation.
  • the diagnostic model generation unit 44 classifies images of abnormal locations by clustering.
  • the diagnostic model generator 44 performs clustering using the k-means method.
  • the number of classifications is set to an arbitrary value N to classify into N types.
  • the diagnostic model generation unit 44 generates classifier creation parameters for each classified image as follows (step S402). [Parameter generation method] Set the OpenCV parameters (details below) determined for each classification type.
    <Vector file creation parameters>
    -inv, -randinv ... specified when reversing colors
    -bgcolor ... background color
    -bgthresh ... range to be regarded as background
    -maxidev ... maximum lightness difference
    -maxxangle, -maxyangle ... maximum rotation angles about the x and y axes
    <Classifier creation parameters>
    Feature type:
    HAAR ... a method that captures features from differences in image brightness
    LBP ... a method that captures features from the luminance distribution (histogram)
    HOG ... a method that captures features from the distribution of luminance gradient directions
    -bt ... boost classifier type: DAB (Discrete AdaBoost), RAB (Real AdaBoost), LB (LogitBoost), GAB (Gentle AdaBoost)
    -minHitRate ... minimum hit rate
    -maxFalseAlarmRate ... maximum false alarm rate
    -weightTrimRate ... specifies whether to trim by weight value
    -maxDepth ... maximum depth of weak classifiers
    -maxWeakCount ... maximum number of weak classifiers required to achieve maxFalseAlarmRate
  • the diagnostic model generating unit 44 generates a classifier based on the parameters generated by "generate classifier generation parameter" in step S402.
  • a classifier here may be a classifier according to any machine learning algorithm, such as random forest, neural network, or the like.
  • FIG. 22 is a diagram explaining the concept of random forest (learning stage) as an example of a learning algorithm
  • FIG. 23 is a diagram explaining the concept of random forest (operation stage).
  • An overview of random forests is given below.
  • The final model machine-learned by a random forest uses multiple models called decision trees, and obtains its final output by taking the majority vote (classification) or average (regression) of the prediction (estimation) results of the individual decision trees.
  • In the learning stage of a random forest, a large amount of teacher data is divided into multiple subsamples by random sampling with replacement (the bootstrap method), the teacher data in each subsample is given to a separate decision tree, and each decision tree learns independently, so that learning is performed with a plurality of models (decision trees).
  • the final machine learning model generated by the random forest machine learning algorithm can be interpreted as a collection of multiple decision trees.
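  The bootstrap-and-majority-vote idea can be sketched in a few lines of numpy, using single-split "stumps" in place of full decision trees (a simplification; real random forests also subsample features at each split).

```python
import numpy as np

class Stump:
    """One-level decision tree: a threshold test on a single feature."""
    def fit(self, X, y):
        best, best_err = (0, 0.0, 1), np.inf
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, f] - t) > 0, 1, 0)
                    err = np.mean(pred != y)
                    if err < best_err:
                        best_err, best = err, (f, t, sign)
        self.f, self.t, self.sign = best
        return self

    def predict(self, X):
        return np.where(self.sign * (X[:, self.f] - self.t) > 0, 1, 0)

def random_forest_fit(X, y, n_trees=15, seed=0):
    """Train each stump on a bootstrap subsample (random sampling with replacement)."""
    rng = np.random.default_rng(seed)
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap sample
        trees.append(Stump().fit(X[idx], y[idx]))
    return trees

def random_forest_predict(trees, X):
    """Majority vote over the ensemble's predictions (classification)."""
    votes = np.stack([t.predict(X) for t in trees])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

  Because each tree sees a different bootstrap sample, individual trees err differently, and the majority vote smooths those errors out.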
  • FIG. 24 is a diagram explaining the concept of a neural network as an example of a learning algorithm.
  • In a neural network, one or more hidden layers (intermediate layers) exist between an input layer and an output layer; node values in the input layer are converted into node values in the hidden layer, and node values in the hidden layer are converted into node values in the output layer, so that output data (objective variable data, classification results) are obtained from input data (explanatory variable data). The conversion of node values from one layer to the next is performed by a linear transformation and a nonlinear transformation using an activation function.
  • The connections between the nodes of the input layer and the hidden layer, and between the nodes of the hidden layer and the output layer, each have a separate weight value, and the value of each weight is updated by training the network on teacher data consisting of explanatory variables and objective variables.
  • During learning, the weights of each layer are updated by error backpropagation: the updates are calculated so that the difference between the required output and the actual output becomes small, and are reflected in each layer.
  • a model can be arbitrarily constructed by adjusting hyperparameters such as the number of intermediate layers and the number of nodes belonging to each intermediate layer. Random forests and neural networks are well known machine learning algorithms and will not be described in further detail here.
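  A minimal numpy sketch of the forward pass just described (one hidden layer, sigmoid activation) is shown below; the weights are fixed by hand so that the network computes a logical AND, rather than being learned by backpropagation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w1, b1, w2, b2):
    """Input layer -> hidden layer -> output layer.

    Each layer-to-layer step is a linear transformation (weights and bias)
    followed by a nonlinear activation function.
    """
    h = sigmoid(x @ w1 + b1)   # input-layer node values -> hidden-layer node values
    y = sigmoid(h @ w2 + b2)   # hidden-layer node values -> output-layer node values
    return y

# Hand-picked weights that make the network compute the logical AND of two inputs.
w1 = np.array([[10.0], [10.0]])  # 2 input nodes -> 1 hidden node
b1 = np.array([-15.0])
w2 = np.array([[10.0]])          # 1 hidden node -> 1 output node
b2 = np.array([-5.0])

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        out = forward(np.array([a, b]), w1, b1, w2, b2)
        print(int(a), int(b), round(float(out[0])))
```

  In training, backpropagation would adjust w1, b1, w2, and b2 from teacher data instead of fixing them by hand; the hyperparameters mentioned above (number of hidden layers and nodes) correspond to the shapes of these weight arrays.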
  • In step S404, the diagnostic model generation unit 44 generates a diagnostic model based on the classifiers created for each abnormality classification.
  • the form is output by the form output unit 15 (step S111).
  • the form output may be performed by outputting to a paper medium using a printer connected to the client machine 1, or may be performed by outputting in an electronic file format such as an EXCEL (registered trademark) file.
  • FIG. 26 shows an example of a form (main investigation record table) in form output
  • FIG. 32 is a developed image on another sheet. However, it should be noted that the abnormal locations shown do not represent actual abnormalities; they were entered into the system as examples.
  • The abnormal location image 68 shows an image of an abnormal location registered in the system (an image of a location determined to be abnormal by image analysis or diagnostic model analysis and, where necessary, changed by the user), together with information on the abnormal location (abnormality type, rank, distance from the left end).
  • FIGS. 29 to 32 are developed images displayed on separate sheets.
  • the person in charge of the investigation determines abnormal locations such as defects based on these results, and prepares an investigation report.
  • the function on the cloud side manages the final judgment results of the investigator and the system judgment results (including false positives) as training data, and generates a machine learning model (diagnosis model).
  • This diagnostic model replaces and optimizes the machine learning model on the personal computer side and is applied to subsequent abnormal location determinations. By repeating this cycle of abnormal location determination and model improvement, the determination accuracy improves and the work burden and investigation time required for diagnosis are reduced.
  • Verification of the determination accuracy of the system of the present embodiment is performed by comparing the determination result of the system with the information on the abnormal location in the pipeline inspection result report (diagnosis result by a person) as the correct answer.
  • The data to be verified are the results of a screening survey conducted in three cities (total pipe length: 11.7 km); span-level abnormality judgments, mainly rank A and rank B pipe footage due to cracks and damage, were sampled (see Table-1 in FIG. 33).
  • FIG. 34 shows a processing flow and a characteristic processing image.
  • Investigation conditions, such as pipe type and the presence or absence of cleaning, are set for the developed drawings generated from the screening survey results, and image processing parameters, such as whether brightness adjustment and noise reduction are required, are set according to the pipe type and the conditions inside the pipe. These parameters are values that facilitate the extraction of characteristic locations under the given pipe investigation conditions.
  • The analysis process then divides the converted image into pixel units and marks the black pixels. The marked black clusters are extracted as feature points, and the abnormal locations and abnormality types are determined from their distribution (connectivity, spread, ratio, etc.).
  • The diagnostic model generation flow uses a classifier for each of damage and cracks (a machine learning model generated as a result of learning (deep learning) on each group of images classified by clustering) (see FIG. 37). Images of abnormal locations are classified by clustering with the k-means method into groups of images having common features, and machine learning (deep learning) using parameters for extracting those features generates a diagnostic model for determining abnormalities.
  • the present invention can be used for diagnosing abnormalities in sewage pipes of various types, such as Hume pipes, ceramic pipes, and PVC pipes.


Abstract

This invention addresses the problem of providing: a system that is for assisting in the diagnosis of an abnormality in a sewage pipe using a moving image or still image obtained by photographing the inside of the sewage pipe, displays a system-side determination result, and accepts change content registration from a user, or the like; a client machine and server machine that can be used in the system; and a related method. Provided is a sewage pipe interior abnormality diagnosis assistance system that is for assisting in the diagnosis of an abnormality in a sewage pipe using a moving image or still image obtained by photographing the inside of the sewage pipe, the system comprising: a photographed image acquisition unit for acquiring a photographed image of the inside of a sewage pipe; a flattened image generation unit for generating a flattened image of the inside of the sewage pipe from the photographed image; an abnormal location determination unit for determining an abnormal location inside the sewage pipe from the flattened image; a determination result display unit for displaying, as a determination result, the location that has been determined to be abnormal by the abnormal location determination unit; and a change registration acceptance unit for accepting change content registration in relation to the determination result.

Description

Sewage pipe interior abnormality diagnosis assistance system, client machine and server machine for sewage pipe interior abnormality diagnosis assistance system, and related method
 The present invention relates to a system for assisting the diagnosis of abnormalities inside sewage pipes, to a client machine and a server machine that can be used in the system, and to related methods.
 As the number of pipe conduits exceeding their standard service life increases and the risk of road subsidence and similar incidents grows, planned maintenance and management of pipe conduits is more important than ever. Therefore, in order to grasp the condition of the rapidly increasing stock of aging pipes, it is essential to improve not only the daily progress of in-pipe investigations but also the speed of determining abnormal locations inside pipes. Diagnosis using images is considered effective for the rapid determination of abnormal locations.
 Patent Document 1 describes a diagnostic imaging system for a sewage pipe network that uses, as one diagnostic index, the cross-sectional area obtained from deformation positions and, as another, an image statistic of the density gradient in the direction orthogonal to an a-priori determined crack direction, and diagnoses abnormalities in the sewage pipe network based on the axial distribution of the diagnostic indices associated in advance with actual abnormal events. However, the system of Patent Document 1 does not contemplate receiving feedback such as changes from the user on its image diagnosis results; it provides no configuration for improving diagnostic accuracy through continued operation of the system, and no configuration for analyzing abnormal locations from images inside a sewer pipe in consideration of the pipe type, the condition of the pipe, and so on.
特開2017-194813号公報 JP 2017-194813 A
 以上に鑑み、本発明は、下水管内で撮影した動画又は静止画を用いて下水管の異状診断を支援するシステムとして、まずシステム側の判定結果を表示して、ユーザ等からの変更内容の登録を受け付けることができるシステム、当該システムに用いることができるクライアントマシン、サーバマシン、及び関連する方法を提供することを課題とする。 In view of the above, an object of the present invention is to provide a system that supports abnormality diagnosis of sewer pipes using moving images or still images captured inside a sewer pipe, and that first displays the system's own determination results and can then accept registration of changes from a user or the like, as well as a client machine and a server machine usable in such a system, and related methods.
 上記課題を解決するべく、本発明は、下水管内で撮影した動画又は静止画である撮影画像を用いて行われる下水管の異状診断を支援する、下水管内異状診断支援システムであって、下水管内の撮影画像を取り込む、撮影画像取込部と、撮影画像から下水管の内側の展開画像を生成する、展開画像生成部と、展開画像から下水管の内側における異状箇所を判定する、異状箇所判定部と、異状箇所判定部により異状であると判定された箇所を判定結果として表示する、判定結果表示部と、判定結果に対する変更内容の登録を受け付ける、変更登録受付部とを備えたシステムを提供する。 To solve the above problems, the present invention provides a sewer pipe abnormality diagnosis support system that supports abnormality diagnosis of sewer pipes performed using captured images, i.e., moving images or still images captured inside a sewer pipe, the system comprising: a captured image import unit that imports captured images of the inside of a sewer pipe; a developed image generation unit that generates a developed image of the inside of the sewer pipe from the captured images; an abnormal location determination unit that determines abnormal locations on the inside of the sewer pipe from the developed image; a determination result display unit that displays, as determination results, the locations determined to be abnormal by the abnormal location determination unit; and a change registration reception unit that accepts registration of changes to the determination results.
 異状箇所判定部は画像解析部を備えてよく、画像解析部は、管種、及び調査条件の入力を受け付ける、管種及び調査条件受付部と、管種、及び調査条件に応じて二値化パラメータを設定する、二値化パラメータ設定部と、二値化パラメータを用いて展開画像を二値画像に変換する、画像変換部と、二値画像を解析して異常箇所を判定する、二値画像解析部とを備えてよい。 The abnormal location determination unit may include an image analysis unit, and the image analysis unit may include: a pipe type and survey condition reception unit that accepts input of the pipe type and survey conditions; a binarization parameter setting unit that sets binarization parameters according to the pipe type and survey conditions; an image conversion unit that converts the developed image into a binary image using the binarization parameters; and a binary image analysis unit that analyzes the binary image to determine abnormal locations.
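As a rough illustration of the binarization path described in this unit, the following sketch maps a (pipe type, survey condition) pair to a threshold, binarizes a grayscale developed image, and flags columns containing many dark pixels. The parameter table, the threshold values, and the column-count heuristic are all invented for illustration; the patent does not specify them.

```python
import numpy as np

# Hypothetical lookup table: binarization threshold per pipe type and survey
# condition. The values are invented for illustration; the patent leaves the
# actual parameters open.
BINARIZATION_PARAMS = {
    ("concrete", "cleaned"):   80,
    ("concrete", "uncleaned"): 60,
    ("vinyl",    "cleaned"):  100,
}

def binarize(developed, pipe_type, condition):
    """Convert a grayscale developed image (2-D array) to a binary image:
    1 where the pixel is darker than the threshold set for the given pipe
    type and survey conditions, else 0."""
    threshold = BINARIZATION_PARAMS[(pipe_type, condition)]
    return (np.asarray(developed) < threshold).astype(np.uint8)

def candidate_regions(binary, min_pixels=3):
    """Very crude stand-in for binary image analysis: report the columns
    (positions along the pipe axis) whose count of dark pixels reaches
    min_pixels, as candidate abnormal locations."""
    counts = binary.sum(axis=0)
    return np.flatnonzero(counts >= min_pixels).tolist()

img = np.full((4, 6), 200, dtype=np.uint8)
img[:, 2] = 30  # a dark vertical streak, e.g. a crack
binary = binarize(img, "concrete", "cleaned")
print(candidate_regions(binary))  # -> [2]
```

A real implementation would of course use proper connected-component analysis rather than per-column counts; the point here is only the flow from (pipe type, survey conditions) to parameters to a binary image to candidate locations.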
 異状箇所判定部は診断モデル判定部を備えてよく、診断モデル判定部は、下水管の内側の展開画像と、展開画像中の異状箇所とを記憶する教師データ記憶部と、教師データ記憶部に記憶された展開画像と展開画像中の異状箇所とを教師データとして用い、入力を展開画像とし、出力を展開画像中の異状箇所とする診断モデルを機械学習により生成する診断モデル生成部と、診断モデル生成部により生成された診断モデルを用いて、展開画像生成部が生成した展開画像から下水管の内側における異状箇所を判定する、異状箇所モデル判定部と、を備えてよい。 The abnormal location determination unit may include a diagnostic model determination unit, and the diagnostic model determination unit may include: a training data storage unit that stores developed images of the inside of sewer pipes and the abnormal locations in those developed images; a diagnostic model generation unit that, using the developed images and the abnormal locations stored in the training data storage unit as training data, generates by machine learning a diagnostic model whose input is a developed image and whose output is the abnormal locations in that developed image; and an abnormal location model determination unit that uses the diagnostic model generated by the diagnostic model generation unit to determine abnormal locations on the inside of the sewer pipe from the developed image generated by the developed image generation unit.
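The patent leaves the learning algorithm open (random forests and neural networks are named later as examples). The division of labor between the diagnostic model generation unit (training) and the abnormal location model determination unit (inference) could be sketched as below, with a toy nearest-centroid classifier standing in for the learned model and invented patch features (mean brightness, dark-pixel ratio):

```python
# Toy "diagnostic model": training data maps simple features of a
# developed-image patch to abnormal(1)/normal(0). A nearest-centroid
# classifier stands in for the machine-learned model; the feature choice
# and all numbers are invented for illustration.

TRAIN = [  # (mean_brightness, dark_pixel_ratio) -> label
    ((40, 0.70), 1), ((55, 0.60), 1), ((60, 0.55), 1),
    ((180, 0.05), 0), ((170, 0.10), 0), ((190, 0.02), 0),
]

def fit(train):
    """'Diagnostic model generation unit': average the feature vectors of
    each class to obtain one centroid per label."""
    centroids = {}
    for label in (0, 1):
        rows = [x for x, y in train if y == label]
        centroids[label] = tuple(sum(v) / len(rows) for v in zip(*rows))
    return centroids

def predict(centroids, features):
    """'Abnormal location model determination unit': label a patch by the
    nearer centroid (squared Euclidean distance)."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda lbl: d2(centroids[lbl], features))

model = fit(TRAIN)
print(predict(model, (50, 0.65)))   # dark patch, many dark pixels -> 1
print(predict(model, (185, 0.03)))  # bright, clean wall           -> 0
```

In the claimed system the model would be trained on labeled developed images themselves (e.g. by deep-learning image recognition); the two-function train/predict shape is the part being illustrated.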
 診断モデル判定部は、異状箇所モデル判定部による異状箇所の判定に先立って展開画像をクラスタリングにより分類する展開画像分類部を更に備えてよい。 The diagnostic model determination unit may further include a developed image classification unit that classifies developed images by clustering prior to determination of an abnormal location by the abnormal location model determination unit.
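A minimal sketch of classifying developed images by clustering before model-based determination, assuming (the patent does not fix either choice) a simple global feature such as mean brightness and a two-cluster k-means:

```python
# Illustrative only: cluster developed images by mean brightness with a
# tiny 1-D two-means. The clustering algorithm and the feature are both
# assumptions; the patent only states that developed images are classified
# by clustering prior to abnormal location determination.

def two_means(values, iters=20):
    """1-D k-means with k=2, initialised at the extremes."""
    centers = [min(values), max(values)]
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            # index 0 if closer to centers[0], else 1
            groups[abs(v - centers[0]) > abs(v - centers[1])].append(v)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    labels = [int(abs(v - centers[0]) > abs(v - centers[1])) for v in values]
    return centers, labels

# Mean brightness of six developed images: three visibly darker pipes.
brightness = [42, 48, 45, 180, 175, 190]
centers, labels = two_means(brightness)
print(labels)  # -> [0, 0, 0, 1, 1, 1]
```

Each cluster could then be routed to a diagnostic model suited to it (for example, dark uncleaned pipes versus bright cleaned ones).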
 教師データ記憶部は、下水管の内側の展開画像に対応する管種を更に記憶してよく、上記システムは、教師データ記憶部に記憶された展開画像と展開画像に対応する管種とを教師データとして用い、入力を展開画像とし、出力を展開画像に対応する管種とする管種推定モデルを機械学習により生成する管種推定モデル生成部と、管種推定モデル生成部により生成された管種推定モデルを用いて、展開画像生成部が生成した展開画像に対応する管種を推定する、管種推定部とを更に備えてよい。 The training data storage unit may further store the pipe type corresponding to each developed image of the inside of a sewer pipe, and the system may further include: a pipe type estimation model generation unit that, using the developed images and their corresponding pipe types stored in the training data storage unit as training data, generates by machine learning a pipe type estimation model whose input is a developed image and whose output is the corresponding pipe type; and a pipe type estimation unit that uses the pipe type estimation model generated by the pipe type estimation model generation unit to estimate the pipe type corresponding to the developed image generated by the developed image generation unit.
 診断モデル生成部は、変更登録受付部が登録を受け付けた判定結果に対する変更内容に基づいて、機械学習により診断モデルを更新してよい。 The diagnostic model generation unit may update the diagnostic model by machine learning based on the changes made to the determination results registered by the change registration reception unit.
 撮影画像取込部と、展開画像生成部と、判定結果表示部と、変更登録受付部とは、クライアントマシンによって具現化されてよく、異状箇所判定部に含まれる診断モデル判定部のうち、教師データ記憶部と、診断モデル生成部とは、サーバマシンによって具現化されてよく、異状箇所モデル判定部はクライアントマシンによって具現化されてよい。 The captured image import unit, the developed image generation unit, the determination result display unit, and the change registration reception unit may be embodied by a client machine; of the diagnostic model determination unit included in the abnormal location determination unit, the training data storage unit and the diagnostic model generation unit may be embodied by a server machine, while the abnormal location model determination unit may be embodied by the client machine.
 上記システムは、異状箇所判定部による判定結果、及び、変更登録受付部が登録を受け付けた判定結果に対する変更内容に基づき、下水管の異状診断結果を帳票として出力する帳票出力部を更に備えてよい。 The system may further include a form output unit that outputs the sewer pipe abnormality diagnosis results as a form, based on the determination results of the abnormal location determination unit and the changes to those results whose registration was accepted by the change registration reception unit.
 また本発明は、下水管内で撮影した動画又は静止画である撮影画像を用いて行われる下水管の異状診断を支援する、下水管内異状診断支援システム用のクライアントマシンであって、下水管内の撮影画像を取り込む、撮影画像取込部と、撮影画像から下水管の内側の展開画像を生成する、展開画像生成部と、展開画像から下水管の内側における異状箇所を判定する、クライアント側異状箇所判定部と、クライアント側異状箇所判定部により異状であると判定された箇所を判定結果として表示する、判定結果表示部と、判定結果に対する変更内容の登録を受け付ける、変更登録受付部と、を備えたクライアントマシンを提供する。 The present invention also provides a client machine for a sewer pipe abnormality diagnosis support system that supports abnormality diagnosis of sewer pipes performed using captured images, i.e., moving images or still images captured inside a sewer pipe, the client machine comprising: a captured image import unit that imports captured images of the inside of a sewer pipe; a developed image generation unit that generates a developed image of the inside of the sewer pipe from the captured images; a client-side abnormal location determination unit that determines abnormal locations on the inside of the sewer pipe from the developed image; a determination result display unit that displays, as determination results, the locations determined to be abnormal by the client-side abnormal location determination unit; and a change registration reception unit that accepts registration of changes to the determination results.
 クライアント側異状箇所判定部は画像解析部を備えてよく、画像解析部は、管種、及び調査条件の入力を受け付ける、管種及び調査条件受付部と、管種、及び調査条件に応じて二値化パラメータを設定する、二値化パラメータ設定部と、二値化パラメータを用いて展開画像を二値画像に変換する、画像変換部と、二値画像を解析して異常箇所を判定する、二値画像解析部とを備えてよい。 The client-side abnormal location determination unit may include an image analysis unit, and the image analysis unit may include: a pipe type and survey condition reception unit that accepts input of the pipe type and survey conditions; a binarization parameter setting unit that sets binarization parameters according to the pipe type and survey conditions; an image conversion unit that converts the developed image into a binary image using the binarization parameters; and a binary image analysis unit that analyzes the binary image to determine abnormal locations.
 クライアント側異状箇所判定部はクライアント側診断モデル判定部を備えてよく、クライアント側診断モデル判定部は、展開画像と展開画像中の異状箇所とを教師データとして用いて機械学習により生成された、入力を展開画像とし、出力を展開画像中の異状箇所とする診断モデルを受信する診断モデル受信部と、診断モデル受信部が受信した診断モデルを用いて、展開画像生成部が生成した展開画像から下水管の内側における異状箇所を判定する、異状箇所モデル判定部とを備えてよい。 The client-side abnormal location determination unit may include a client-side diagnostic model determination unit, and the client-side diagnostic model determination unit may include: a diagnostic model reception unit that receives a diagnostic model generated by machine learning using developed images and the abnormal locations in those developed images as training data, whose input is a developed image and whose output is the abnormal locations in that developed image; and an abnormal location model determination unit that uses the diagnostic model received by the diagnostic model reception unit to determine abnormal locations on the inside of the sewer pipe from the developed image generated by the developed image generation unit.
 また本発明は、下水管内で撮影した動画又は静止画である撮影画像を用いて行われる下水管の異状診断を支援する、下水管内異状診断支援システム用のサーバマシンであって、下水管の内側の展開画像と、展開画像中の異状箇所とを記憶する教師データ記憶部と、教師データ記憶部に記憶された展開画像と展開画像中の異状箇所とを教師データとして用い、入力を展開画像とし、出力を展開画像中の異状箇所とする診断モデルを機械学習により生成する診断モデル生成部と、診断モデル生成部により生成された診断モデルを送信する診断モデル送信部とを備える、サーバマシンを提供する。 The present invention also provides a server machine for a sewer pipe abnormality diagnosis support system that supports abnormality diagnosis of sewer pipes performed using captured images, i.e., moving images or still images captured inside a sewer pipe, the server machine comprising: a training data storage unit that stores developed images of the inside of sewer pipes and the abnormal locations in those developed images; a diagnostic model generation unit that, using the developed images and abnormal locations stored in the training data storage unit as training data, generates by machine learning a diagnostic model whose input is a developed image and whose output is the abnormal locations in that developed image; and a diagnostic model transmission unit that transmits the diagnostic model generated by the diagnostic model generation unit.
 上記サーバマシンは、クラウドサーバとして具現化されてよい。 The above server machine may be embodied as a cloud server.
 また本発明は、下水管内で撮影した動画又は静止画である撮影画像を用いて行われる下水管の異状診断を支援する、下水管内異状診断支援システムによる方法であって、撮影画像取込部が、下水管内の撮影画像を取り込むことと、展開画像生成部が、撮影画像から下水管の内側の展開画像を生成することと、異状箇所判定部が、展開画像から下水管の内側における異状箇所を判定することと、判定結果表示部が、異状箇所判定部により異状であると判定された箇所を判定結果として表示することと、変更登録受付部が、判定結果に対する変更内容の登録を受け付けることとを含む、方法を提供する。 The present invention also provides a method performed by a sewer pipe abnormality diagnosis support system that supports abnormality diagnosis of sewer pipes performed using captured images, i.e., moving images or still images captured inside a sewer pipe, the method comprising: a captured image import unit importing captured images of the inside of a sewer pipe; a developed image generation unit generating a developed image of the inside of the sewer pipe from the captured images; an abnormal location determination unit determining abnormal locations on the inside of the sewer pipe from the developed image; a determination result display unit displaying, as determination results, the locations determined to be abnormal by the abnormal location determination unit; and a change registration reception unit accepting registration of changes to the determination results.
 また本発明は、下水管内で撮影した動画又は静止画である撮影画像を用いて行われる下水管の異状診断を支援する、下水管内異状診断支援システム用のクライアントマシンによる方法であって、撮影画像取込部が、下水管内の撮影画像を取り込むことと、展開画像生成部が、撮影画像から下水管の内側の展開画像を生成することと、クライアント側異状箇所判定部が、展開画像から下水管の内側における異状箇所を判定することと、判定結果表示部が、異状箇所判定部により異状であると判定された箇所を判定結果として表示することと、変更登録受付部が、判定結果に対する変更内容の登録を受け付けることと、を含む方法を提供する。 The present invention also provides a method performed by a client machine for a sewer pipe abnormality diagnosis support system that supports abnormality diagnosis of sewer pipes performed using captured images, i.e., moving images or still images captured inside a sewer pipe, the method comprising: a captured image import unit importing captured images of the inside of a sewer pipe; a developed image generation unit generating a developed image of the inside of the sewer pipe from the captured images; a client-side abnormal location determination unit determining abnormal locations on the inside of the sewer pipe from the developed image; a determination result display unit displaying, as determination results, the locations determined to be abnormal by the abnormal location determination unit; and a change registration reception unit accepting registration of changes to the determination results.
 また本発明は、下水管内で撮影した動画又は静止画である撮影画像を用いて行われる下水管の異状診断を支援する、下水管内異状診断支援システム用のサーバマシンによる方法であって、教師データ記憶部が、下水管の内側の展開画像と、展開画像中の異状箇所とを記憶することと、診断モデル生成部が、教師データ記憶部に記憶された展開画像と展開画像中の異状箇所とを教師データとして用い、入力を展開画像とし、出力を展開画像中の異状箇所とする診断モデルを機械学習により生成することと、診断モデル送信部が、診断モデル生成部により生成された診断モデルを送信することと、を含む方法を提供する。 The present invention also provides a method performed by a server machine for a sewer pipe abnormality diagnosis support system that supports abnormality diagnosis of sewer pipes performed using captured images, i.e., moving images or still images captured inside a sewer pipe, the method comprising: a training data storage unit storing developed images of the inside of sewer pipes and the abnormal locations in those developed images; a diagnostic model generation unit generating, by machine learning using the developed images and abnormal locations stored in the training data storage unit as training data, a diagnostic model whose input is a developed image and whose output is the abnormal locations in that developed image; and a diagnostic model transmission unit transmitting the diagnostic model generated by the diagnostic model generation unit.
 本発明によれば、下水道管内を撮影した動画又は静止画である撮影画像より生成した展開図から異状箇所の有無を判定し、判定結果に対する変更内容の登録を受け付けることにより、診断精度の向上を図ることが可能となる。一例においては、下水道管内を撮影した撮影画像より生成した展開図から管種、管の状態等を判定し、判定された管種、管の状態の情報を考慮して、より良い精度で異状箇所の有無を判定することが可能となる。また一例においては、展開図及び展開図に対応する管種、管の状態等の判定結果及び異状箇所の解析結果を記憶装置に集積し、その関連性を繰り返し学習させることで解析精度を向上させることも可能となる。 According to the present invention, the presence or absence of abnormal locations is determined from a developed view generated from captured images, i.e., moving images or still images of the inside of a sewer pipe, and registration of changes to the determination results is accepted, which makes it possible to improve diagnostic accuracy. In one example, the pipe type, pipe condition, and the like are determined from the developed view generated from the captured images, and abnormal locations can be determined with better accuracy by taking the determined pipe type and pipe condition into account. In another example, analysis accuracy can be improved by accumulating, in a storage device, the developed views together with the corresponding determinations of pipe type and pipe condition and the analysis results for abnormal locations, and repeatedly learning their relationships.
本発明の一実施形態における下水管内異状診断支援システムの構成を示す概念図。FIG. 1 is a conceptual diagram showing the configuration of a sewer pipe abnormality diagnosis support system according to an embodiment of the present invention.
下水道管路の一例を示す図。FIG. 2 is a diagram showing an example of a sewer pipeline.
下水管内異状診断支援システムに含まれるクライアントマシンの構成を示すブロック図。FIG. 3 is a block diagram showing the configuration of a client machine included in the sewer pipe abnormality diagnosis support system.
クライアントマシンの画像解析部の構成を示すブロック図。FIG. 4 is a block diagram showing the configuration of the image analysis unit of the client machine.
クライアント側診断モデル判定部の構成を示すブロック図。FIG. 5 is a block diagram showing the configuration of the client-side diagnostic model determination unit.
クライアント側管種推定部の構成を示すブロック図。FIG. 6 is a block diagram showing the configuration of the client-side pipe type estimation unit.
下水管内異状診断支援システムに含まれるクラウドサーバ等、サーバマシンの構成を示すブロック図。FIG. 7 is a block diagram showing the configuration of a server machine, such as a cloud server, included in the sewer pipe abnormality diagnosis support system.
下水管内異状診断支援システムの動作フローを示すフローチャート。FIG. 8 is a flowchart showing the operation flow of the sewer pipe abnormality diagnosis support system.
図8のフロー中、展開図作成の原理を説明する図(フレーム単位画像データの生成)。FIG. 9 is a diagram explaining the principle of developed view creation in the flow of FIG. 8 (generation of frame-unit image data).
図8のフロー中、展開図作成の原理を説明する図(距離の算出)。FIG. 10 is a diagram explaining the principle of developed view creation in the flow of FIG. 8 (calculation of distance).
図8のフロー中、展開図作成の原理を説明する図(距離メータの例)。FIG. 11 is a diagram explaining the principle of developed view creation in the flow of FIG. 8 (example of a distance meter).
図8のフロー中、展開図作成の原理を説明する図(切り取り角度の算出)。FIG. 12 is a diagram explaining the principle of developed view creation in the flow of FIG. 8 (calculation of the cropping angle).
図8のフロー中、展開図作成の原理を説明する図(切り取り角度に基づく画像の切り取り)。FIG. 13 is a diagram explaining the principle of developed view creation in the flow of FIG. 8 (cropping of the image based on the cropping angle).
図8のフロー中、展開図作成の原理を説明する図(正像変換の原理)。FIG. 14 is a diagram explaining the principle of developed view creation in the flow of FIG. 8 (principle of normal image conversion).
図8のフロー中、展開図作成の原理を説明する図(正像変換による展開図の作成)。FIG. 15 is a diagram explaining the principle of developed view creation in the flow of FIG. 8 (creation of a developed view by normal image conversion).
図8のフロー中、展開図作成の原理を説明する図(展開画像の輝度平準化)。FIG. 16 is a diagram explaining the principle of developed view creation in the flow of FIG. 8 (brightness levelling of the developed image).
図8のフロー中、展開図作成の原理を説明する図(展開図の接合処理)。FIG. 17 is a diagram explaining the principle of developed view creation in the flow of FIG. 8 (joining of developed views).
展開図作成により得られる展開図の一例を模式的に示す図。FIG. 18 is a diagram schematically showing an example of a developed view obtained by developed view creation.
図8のフロー中、画像解析の動作フローを示すフローチャート。FIG. 19 is a flowchart showing the operation flow of image analysis in the flow of FIG. 8.
図8のフロー中、診断モデルによる解析の動作フローを示すフローチャート。FIG. 20 is a flowchart showing the operation flow of analysis by the diagnostic model in the flow of FIG. 8.
図8のフロー中、診断モデル生成の動作フローを示すフローチャート。FIG. 21 is a flowchart showing the operation flow of diagnostic model generation in the flow of FIG. 8.
学習アルゴリズムの一例として、ランダムフォレストの概念(学習段階)を説明する図。FIG. 22 is a diagram explaining the concept of a random forest (training stage) as an example of a learning algorithm.
学習アルゴリズムの一例として、ランダムフォレストの概念(運用段階)を説明する図。FIG. 23 is a diagram explaining the concept of a random forest (operation stage) as an example of a learning algorithm.
学習アルゴリズムの一例として、ニューラルネットワークの概念を説明する図。FIG. 24 is a diagram explaining the concept of a neural network as an example of a learning algorithm.
図8のフロー中、異状箇所出力における異状診断結果の閲覧画面の例を示す図。FIG. 25 is a diagram showing an example of the abnormality diagnosis result browsing screen in the abnormal location output of the flow of FIG. 8.
図8のフロー中、帳票出力における帳票の一例(本管用調査記録表)を示す図。FIG. 26 is a diagram showing an example of a form (survey record table for mains) in the form output of the flow of FIG. 8.
図26の本管用調査記録表の一部を拡大した図。FIG. 27 is an enlarged view of part of the survey record table for mains of FIG. 26.
図26の本管用調査記録表の一部を拡大した図。FIG. 28 is an enlarged view of part of the survey record table for mains of FIG. 26.
図26の本管用調査記録表の一部を拡大した図(別シートの展開画像)。FIG. 29 is an enlarged view of part of the survey record table for mains of FIG. 26 (developed image on a separate sheet).
図26の本管用調査記録表の一部を拡大した図(別シートの展開画像)。FIG. 30 is an enlarged view of part of the survey record table for mains of FIG. 26 (developed image on a separate sheet).
図26の本管用調査記録表の一部を拡大した図(別シートの展開画像)。FIG. 31 is an enlarged view of part of the survey record table for mains of FIG. 26 (developed image on a separate sheet).
図26の本管用調査記録表の一部を拡大した図(別シートの展開画像)。FIG. 32 is an enlarged view of part of the survey record table for mains of FIG. 26 (developed image on a separate sheet).
下水管内異状診断支援システムの実施例における検証対象データの表。FIG. 33 is a table of the data to be verified in a working example of the sewer pipe abnormality diagnosis support system.
実施例における画像解析の原理を説明する図。FIG. 34 is a diagram explaining the principle of image analysis in the working example.
画像解析の検証結果を示す表。FIG. 35 is a table showing the verification results of the image analysis.
実施例における検証対象の異状箇所例(機械学習)。FIG. 36 shows examples of abnormal locations to be verified in the working example (machine learning).
実施例における診断モデル生成フローを示す図。FIG. 37 is a diagram showing the diagnostic model generation flow in the working example.
機械学習による異常(異状)箇所判定結果を示すグラフ。FIG. 38 is a graph showing abnormal (anomalous) location determination results by machine learning.
機械学習による異常(異状)箇所判定結果を示す表。FIG. 39 is a table showing abnormal (anomalous) location determination results by machine learning.
 以下、本発明の実施形態を、図面を参照しつつ説明する。ただし、本発明の範囲は以下の実施形態に限られるわけではなく、請求の範囲の記載によって定められることに留意する。例えば、本発明の下水管内異状診断支援システムは、サーバ・クライアント型のシステムに限らず単独のコンピュータ等によって構成されてもよいし、クライアントマシンの有する機能、(クラウド)サーバマシンの有する機能を、1以上の任意の数のコンピュータ等に分散させてもよい。以下の実施形態で説明する全ての機能を、本発明の下水管内異状診断支援システム、クライアントマシン、サーバが備える必要もない。例えば管種や清掃の有無等の調査条件を機械学習モデルで判定することは必須ではなく、ユーザ等が直接システムに入力してもよい。機械学習モデルを生成するための機械学習アルゴリズムも、後述のランダムフォレスト(random forest)、ニューラルネットワーク(neural network)に限らず任意のアルゴリズムであってよい。例えば管種や清掃条件の有無等の調査条件の推定、異状箇所の推定においてはディープラーニングによる画像診断モデルを用いることができる。後述の各々の機能部は、単独のハードウェアによって実現されてもよいし、2以上のハードウェアにより実現されてもよいし、後述のとおり複数の機能部が1つのハードウェアにより実現されてもよい。なお、本明細書中、「水」とは純水であってもよいし、汚水等の下水、或いは雨水等、任意の不純物等を任意に含む水であってもよい。 Hereinafter, embodiments of the present invention will be described with reference to the drawings. Note, however, that the scope of the present invention is not limited to the following embodiments but is defined by the claims. For example, the sewer pipe abnormality diagnosis support system of the present invention is not limited to a server-client type system and may be configured as a single computer or the like, and the functions of the client machine and of the (cloud) server machine may be distributed over any number of computers, one or more. Nor is it necessary for the sewer pipe abnormality diagnosis support system, client machine, and server of the present invention to have all the functions described in the following embodiments. For example, it is not essential to determine survey conditions such as the pipe type and whether the pipe has been cleaned with a machine learning model; a user or the like may input them directly into the system. The machine learning algorithm for generating a machine learning model is also not limited to the random forest and neural network described later and may be any algorithm; for example, an image diagnosis model based on deep learning can be used to estimate survey conditions such as the pipe type and whether the pipe has been cleaned, and to estimate abnormal locations. Each functional unit described later may be implemented by a single piece of hardware or by two or more pieces of hardware, and, as described later, a plurality of functional units may be implemented by one piece of hardware. In this specification, "water" may be pure water, or water containing any impurities, such as wastewater and other sewage, or rainwater.
 図1は、本発明の一実施形態における下水管内異状診断支援システムの構成を示す概念図であり、図2は、下水道管路の一例を示す図である。本発明の一実施形態においては、図2に示す下水道管1000内でカメラを搭載した無人航空機(ドローン)に撮影飛行をさせることにより、或いはテレビカメラ等の動画カメラ、又は静止画カメラを搭載したカメラ車に撮影走行をさせること等により得られた下水道管1000内の撮影画像(動画であっても静止画であってもよい)をクライアントマシンに取り込み、以下の流れで異状箇所診断が行われる。
(1)撮影画像から展開図を作成する。
(2)管種等解析モデルを用いて展開図(展開画像)から管種等を解析する。また解析結果に対するクライアント側からの変更等の確定登録の結果を用いて管種等解析モデルを再トレーニングする。
(3)異状箇所解析モデルを用いて展開図(展開画像)から破損、クラック等の異状箇所を解析する。
(4)異状箇所の解析結果に対して、クライアント側から変更を含む異状箇所の確定結果を登録し、確定結果を用いて異状箇所解析モデルを再トレーニングする。
(5)診断結果を帳票(印刷物、或いは電子ファイル等、任意の形式)として出力する。
FIG. 1 is a conceptual diagram showing the configuration of a sewer pipe abnormality diagnosis support system according to an embodiment of the present invention, and FIG. 2 is a diagram showing an example of a sewer pipeline. In one embodiment of the present invention, captured images (either moving images or still images) of the inside of the sewer pipe 1000 shown in FIG. 2 are obtained by having a camera-equipped unmanned aerial vehicle (drone) fly through the pipe while filming, or by driving a camera vehicle equipped with a video camera such as a TV camera, or with a still camera, through the pipe. The captured images are imported into the client machine, and abnormal location diagnosis is performed in the following flow.
(1) Create a developed view from the captured images.
(2) Analyze the pipe type and the like from the developed view (developed image) using a pipe type analysis model, and retrain the pipe type analysis model using the results of confirmed registrations, including changes made on the client side to the analysis results.
(3) Analyze abnormal locations such as breakage and cracks from the developed view (developed image) using an abnormal location analysis model.
(4) For the abnormal location analysis results, register, from the client side, the confirmed abnormal locations including any changes, and retrain the abnormal location analysis model using the confirmed results.
(5) Output the diagnosis results as a form (in any format, such as printed matter or an electronic file).
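The five-step flow above could be orchestrated as in the following sketch. Every name is hypothetical, and the image processing and the models are replaced by trivial stand-ins; only the control flow mirrors steps (1) to (5).

```python
# Hedged sketch of steps (1)-(5); all functions are invented stand-ins.

def make_developed_view(frames):
    # (1) The real system unrolls frames into a developed view (FIGS. 9-18);
    # here we just concatenate pixel rows.
    return [px for frame in frames for px in frame]

def estimate_pipe_type(developed):
    # (2) Stand-in for the pipe type analysis model.
    return "hume" if sum(developed) / len(developed) > 128 else "vinyl"

def find_anomalies(developed, threshold=60):
    # (3) Stand-in for the abnormal location analysis model: flag dark
    # pixels as candidate anomalies (cracks tend to photograph dark).
    return [i for i, px in enumerate(developed) if px < threshold]

def apply_user_changes(anomalies, added, removed):
    # (4) The client confirms the result, possibly adding or removing
    # locations; the confirmed result would also feed retraining.
    return sorted((set(anomalies) | set(added)) - set(removed))

def output_report(pipe_type, anomalies):
    # (5) Emit the diagnosis as a simple report string.
    return f"pipe_type={pipe_type}, anomalies={anomalies}"

frames = [[200, 200, 40], [200, 50, 200]]
developed = make_developed_view(frames)
report = output_report(
    estimate_pipe_type(developed),
    apply_user_changes(find_anomalies(developed), added=[], removed=[2]),
)
print(report)  # -> pipe_type=hume, anomalies=[4]
```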
 クライアントマシンの構成
 図3は、下水管内異状診断支援システムに含まれるクライアントマシンの構成を示すブロック図であり、図4は、クライアントマシンの画像解析部の構成を示すブロック図であり、図5は、クライアント側診断モデル判定部の構成を示すブロック図であり、図6は、クライアント側管種推定部の構成を示すブロック図である。
Configuration of the Client Machine FIG. 3 is a block diagram showing the configuration of a client machine included in the sewer pipe abnormality diagnosis support system, FIG. 4 is a block diagram showing the configuration of the image analysis unit of the client machine, FIG. 5 is a block diagram showing the configuration of the client-side diagnostic model determination unit, and FIG. 6 is a block diagram showing the configuration of the client-side pipe type estimation unit.
 クライアントマシン1は、制御部2と、記憶部3と、入出力部4と、通信部5とを備える。制御部2は、CPU(Central Processing Unit:中央処理装置)等のプロセッサ6と、RAM(Random Access Memory:ランダム アクセス メモリ)等の一時メモリ7とを備える。プロセッサ6が、記憶部3に記憶された展開画像生成プログラム17を実行することにより、制御部2(のプロセッサ6。以下においても同様)は展開画像生成部8として機能する。プロセッサ6が、記憶部3に記憶された画像解析プログラム18を実行することにより、制御部2は画像解析部10として機能する。プロセッサ6が、記憶部3に記憶された機械学習関連プログラム19を実行することにより、制御部2はクライアント側診断モデル判定部11、クライアント側管種推定部12として機能する。具体的には、プロセッサ6が診断モデル受信プログラムを実行することにより制御部2は通信部5を用いる診断モデル受信部32として機能し、プロセッサ6が異状箇所モデル判定プログラムを実行することにより制御部2は展開画像分類部33、異状箇所モデル判定部34として機能し、プロセッサ6が管種推定モデル受信プログラムを実行することにより制御部2は通信部5を用いる管種推定モデル受信部35として機能し、プロセッサ6が管種推定プログラムを実行することにより制御部2は管種モデル推定部36として機能する。プロセッサ6が、記憶部3に記憶された帳票出力プログラム22を実行することにより、制御部2は帳票出力部15として機能する。その他、プロセッサ6が各種制御、表示プログラム等23(オペレーティングシステムや、各種アプリケーションソフトウェア、各種デバイスのドライバソフトウェア等を含む)を実行することにより、制御部2は判定結果表示部13、変更登録受付部14、各種制御、表示部16として機能する。記憶部3にはその他に任意のプログラムが記憶されていてよく、制御部2のプロセッサ6が任意のプログラムを実行することにより制御部2は任意の機能部として機能することができる。 The client machine 1 includes a control unit 2, a storage unit 3, an input/output unit 4, and a communication unit 5. The control unit 2 includes a processor 6 such as a CPU (Central Processing Unit) and a temporary memory 7 such as a RAM (Random Access Memory). When the processor 6 executes the developed image generation program 17 stored in the storage unit 3, the control unit 2 (more precisely, its processor 6; likewise below) functions as the developed image generation unit 8. When the processor 6 executes the image analysis program 18 stored in the storage unit 3, the control unit 2 functions as the image analysis unit 10. When the processor 6 executes the machine-learning-related programs 19 stored in the storage unit 3, the control unit 2 functions as the client-side diagnostic model determination unit 11 and the client-side pipe type estimation unit 12. Specifically, when the processor 6 executes the diagnostic model reception program, the control unit 2 functions as the diagnostic model reception unit 32, which uses the communication unit 5; when the processor 6 executes the abnormal location model determination program, the control unit 2 functions as the developed image classification unit 33 and the abnormal location model determination unit 34; when the processor 6 executes the pipe type estimation model reception program, the control unit 2 functions as the pipe type estimation model reception unit 35, which uses the communication unit 5; and when the processor 6 executes the pipe type estimation program, the control unit 2 functions as the pipe type model estimation unit 36. When the processor 6 executes the form output program 22 stored in the storage unit 3, the control unit 2 functions as the form output unit 15. In addition, when the processor 6 executes various control and display programs 23 (including the operating system, various application software, driver software for various devices, and the like), the control unit 2 functions as the determination result display unit 13, the change registration reception unit 14, and the various control and display unit 16. Any other programs may be stored in the storage unit 3, and the control unit 2 can function as any functional unit by having the processor 6 of the control unit 2 execute such a program.
 展開画像生成部8は、入出力部4のデータ入出力部(Universal Serial Bus(USB)ポート等)を介する等して外部から入力された下水管内部の撮影画像(撮影画像データ24として記憶部3に記憶される。)から、一例においては後述の図9~図18に示す原理で展開画像を生成する機能部である。展開画像生成部8は、生成した展開画像のデータを展開画像データ25として記憶部3に記憶させる。 The developed image generation unit 8 is a functional unit that generates developed images, in one example according to the principles shown in FIGS. 9 to 18 described later, from captured images of the inside of the sewer pipe that are input from outside, for example via the data input/output section (such as a Universal Serial Bus (USB) port) of the input/output unit 4, and that are stored in the storage unit 3 as captured image data 24. The developed image generation unit 8 stores the data of the generated developed images in the storage unit 3 as developed image data 25.
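As a simplified illustration of producing a strip of the developed image from a forward-looking in-pipe frame, one can sample the frame on a polar grid around the pipe axis. The actual system follows the principles of FIGS. 9 to 18 (distance metering, cropping-angle calculation, normal image conversion, brightness levelling, joining), none of which is reproduced here; nearest-neighbour sampling and the grid sizes are simplifications.

```python
import numpy as np

def unwrap_ring(frame, r_in, r_out, n_theta=360, n_r=16):
    """Map an annulus of a forward-view frame (the pipe wall seen as a
    ring around the image centre) to a rectangle: columns = angle around
    the pipe circumference, rows = radius, which corresponds to distance
    along the pipe axis. Nearest-neighbour sampling; a real implementation
    would interpolate."""
    h, w = frame.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(r_in, r_out, n_r)
    ys = np.rint(cy + radii[:, None] * np.sin(thetas)[None, :]).astype(int)
    xs = np.rint(cx + radii[:, None] * np.cos(thetas)[None, :]).astype(int)
    return frame[np.clip(ys, 0, h - 1), np.clip(xs, 0, w - 1)]

# Synthetic frame whose brightness depends only on distance from the
# centre, so each row of the unwrapped strip should be (nearly) constant.
yy, xx = np.mgrid[0:101, 0:101]
frame = np.hypot(yy - 50, xx - 50)
strip = unwrap_ring(frame, r_in=10, r_out=40)
print(strip.shape)  # -> (16, 360)
```

Strips unwrapped from successive frames would then be brightness-levelled and joined along the pipe axis to form the full developed view.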
 画像解析部10は、管種及び調査条件受付部28、二値化パラメータ設定部29、画像変換部30、二値画像解析部31を含み、展開画像を解析することにより、破損、クラック等の異状箇所を特定する機能部である。画像解析部10は、解析により特定された異状箇所を示すデータ(異常箇所の位置、種類を示すデータであってもよいし、或いは特定された異状箇所周辺の展開画像部分のデータであってもよい。)を、異状箇所、管種データ26として記憶部3に記憶させる。 The image analysis unit 10 includes the pipe type and survey condition reception unit 28, the binarization parameter setting unit 29, the image conversion unit 30, and the binary image analysis unit 31, and is a functional unit that identifies abnormal locations such as breakage and cracks by analyzing the developed image. The image analysis unit 10 stores data indicating the abnormal locations identified by the analysis (which may be data indicating the position and type of each abnormal location, or data of the portion of the developed image around each identified abnormal location) in the storage unit 3 as abnormal location and pipe type data 26.
 クライアント側診断モデル判定部11は、診断モデル受信部32、展開画像分類部33、異状箇所モデル判定部34の各機能部を含む機能部であり、診断モデル受信部32が後述のサーバマシン37(図7参照)から機械学習済み診断モデルを受信して記憶部3に機械学習済み診断モデル20として記憶させ、展開画像分類部33が必要に応じて展開画像を分類し、異状箇所モデル判定部34が、機械学習済み診断モデル(機械学習アルゴリズムは任意のアルゴリズムであってよく、ディープラーニングによる画像認識を用いてもよいし、或いは展開画像から特定の特徴量を抽出して後述のランダムフォレスト、ニューラルネットワークによってモデルを構築してもよい。)を用いて展開画像を解析することにより、破損、クラック等の異状箇所を特定する。クライアント側診断モデル判定部11は、解析により特定された異状箇所を示すデータ(異常箇所の位置、種類を示すデータであってもよいし、或いは特定された異状箇所周辺の展開画像部分のデータであってもよい。)を、異状箇所、管種データ26として記憶部3に記憶させる。 The client-side diagnostic model determination unit 11 is a functional unit that includes the diagnostic model reception unit 32, the developed image classification unit 33, and the abnormal location model determination unit 34. The diagnostic model reception unit 32 receives a machine-learned diagnostic model from the server machine 37 described later (see FIG. 7) and stores it in the storage unit 3 as the machine-learned diagnostic model 20; the developed image classification unit 33 classifies developed images as needed; and the abnormal location model determination unit 34 identifies abnormal locations such as breakage and cracks by analyzing the developed image using the machine-learned diagnostic model (the machine learning algorithm may be any algorithm: image recognition by deep learning may be used, or specific feature amounts may be extracted from the developed image and a model built with the random forest or neural network described later). The client-side diagnostic model determination unit 11 stores data indicating the abnormal locations identified by the analysis (which may be data indicating the position and type of each abnormal location, or data of the portion of the developed image around each identified abnormal location) in the storage unit 3 as abnormal location and pipe type data 26.
 クライアント側管種推定部12は、管種推定モデル受信部35、管種モデル推定部36の各機能部を含む機能部であり、管種推定モデル受信部35がサーバマシン37から機械学習済み管種推定モデルを受信して記憶部3に機械学習済み管種推定モデル21として記憶させ、管種モデル推定部36が機械学習済み管種推定モデル(機械学習アルゴリズムは任意のアルゴリズムであってよく、ディープラーニングによる画像認識を用いてもよいし、或いは展開画像から特定の特徴量を抽出して後述のランダムフォレスト、ニューラルネットワークによってモデルを構築してもよい。)を用いて展開画像を解析することにより、展開画像から管種を推定する。クライアント側管種推定部12は、解析により特定された管種(及び、必要に応じて清掃の有無等の調査条件)を示すデータを、異状箇所、管種データ26として記憶部3に記憶させる。 The client-side pipe type estimation unit 12 is a functional unit that includes the pipe type estimation model reception unit 35 and the pipe type model estimation unit 36. The pipe type estimation model reception unit 35 receives a machine-learned pipe type estimation model from the server machine 37 and stores it in the storage unit 3 as the machine-learned pipe type estimation model 21, and the pipe type model estimation unit 36 estimates the pipe type from the developed image by analyzing it using the machine-learned pipe type estimation model (the machine learning algorithm may be any algorithm: image recognition by deep learning may be used, or specific feature amounts may be extracted from the developed image and a model built with the random forest or neural network described later). The client-side pipe type estimation unit 12 stores data indicating the pipe type identified by the analysis (and, as needed, survey conditions such as whether the pipe has been cleaned) in the storage unit 3 as abnormal location and pipe type data 26.
 The determination result display unit 13 is a functional unit that displays, as a determination result, the locations determined to be abnormal by the client-side abnormal location determination unit. In one example, a diagnosis result browsing screen with the layout shown in FIG. 25, described later, is displayed on the display device of the input/output unit 4.
 The change registration accepting unit 14 is a functional unit that accepts, via the keyboard and mouse of the input/output unit 4, registration of changes to the abnormal location determination results produced by the client-side abnormal location determination unit 9. In one example, the user of the client machine 1 operates the keyboard and mouse while reviewing a diagnosis result browsing screen such as that of FIG. 25, and can thereby input confirmation registrations, change registrations, and the like for the abnormal location determination results of the client-side abnormal location determination unit 9. The change registration accepting unit 14 stores the registered content in the storage unit 3 as the change registration data 27 (only the registered changes may be stored, or it may also be recorded that the user approved and "confirmed" the determination results of the client-side abnormal location determination unit).
 The form output unit 15 is a functional unit that outputs the sewage pipe abnormality diagnosis results as forms, based on the determination results of the client-side abnormal location determination unit and on the changes to those results whose registration was accepted by the change registration accepting unit 14. Examples of forms are shown in FIGS. 26 to 32, described later.
 The storage unit 3 is a storage (recording) device equipped with a hard disk drive, an SSD (Solid State Drive), or the like. As programs executed by the processor 6 to make the control unit 2 function as the various functional units described above, it stores a developed image generation program 17, an image analysis program 18, machine learning related programs 19 (including a diagnostic model reception program, a pipe type estimation model reception program, an abnormal location model determination program, and a pipe type estimation program), a form output program 22, and various control and display programs 23. The storage unit 3 also stores, as a machine-learned model generated by the diagnostic model generation unit 44 of the server machine 37, the machine-learned diagnostic model 20 that determines abnormal locations from a developed image by image recognition or the like, and, as a machine-learned model generated by the pipe type estimation model generation unit 46 of the server machine 37, the machine-learned pipe type estimation model 21 that estimates the pipe type from a developed image. As shown in FIG. 3, the storage unit 3 further stores photographed image data 24, developed image data 25, abnormal location and pipe type data 26, and change registration data 27.
 Configuration of the Server Machine
 FIG. 7 is a block diagram showing the configuration of a server machine, such as a cloud server, included in the sewage pipe interior abnormality diagnosis assistance system. The server machine 37 includes a control unit 38, a storage unit 39, an input/output unit 40, and a communication unit 41. The control unit 38 includes a processor 42 such as a CPU and a temporary memory 43 such as a RAM. When the processor 42 executes the diagnostic model generation program among the machine learning related programs 51 stored in the storage unit 39, the control unit 38 (that is, its processor 42; likewise below) functions as the diagnostic model generation unit 44; when the processor 42 executes the diagnostic model transmission program, the control unit 38 functions as the diagnostic model transmission unit 45, which uses the communication unit 41; when the processor 42 executes the pipe type estimation model generation program, the control unit 38 functions as the pipe type estimation model generation unit 46; when the processor 42 executes the pipe type estimation model transmission program, the control unit 38 functions as the pipe type estimation model transmission unit 47, which uses the communication unit 41; when the processor 42 executes the diagnostic model evaluation program, the control unit 38 functions as the diagnostic model evaluation unit 48; and when the processor 42 executes the pipe type estimation model evaluation program, the control unit 38 functions as the pipe type estimation model evaluation unit 49. In addition, when the processor 42 executes the various control and display programs 54 (including an operating system, various application software, and driver software for various devices), the control unit 38 functions as the various control and display unit 50. The storage unit 39 may store any other programs, and the control unit 38 can function as any functional unit by having its processor 42 execute such a program.
 The diagnostic model generation unit 44 is a functional unit that generates, by machine learning, a diagnostic model for estimating abnormal locations from a developed image. Any machine learning algorithm may be used; in one example, a deep learning image recognition algorithm can be used that takes developed image data as input and outputs information on the abnormal locations in that developed image data (position, type, image data around each abnormal location, and the like). In machine learning using deep learning, a large number of teacher data (training data) pairs are prepared, each consisting of developed image data (input) and information on the abnormal locations in that developed image (position, type, image data around each abnormal location, and the like; output), and the determination accuracy of the image recognition model can be improved by training it on these teacher data (in one example, the training data are prepared as a large number of developed images labeled with abnormal location information). Image recognition by deep learning is widely known and is not described in further detail here. Note that by preparing teacher data classified by survey conditions such as pipe type and cleaning state, and generating a separate diagnostic model for each such condition, the developed image data can first be classified by pipe type, cleaning state, and other survey conditions and then diagnosed with the corresponding model, which is expected to improve determination accuracy. The diagnostic model generation unit 44 stores the generated diagnostic model in the storage unit 39 as the machine-learned diagnostic model 52. The diagnostic model transmission unit 45 transmits the machine-learned diagnostic model generated by the diagnostic model generation unit 44 to the client machine 1 using the communication unit 41. The diagnostic model evaluation unit 48 evaluates the determination accuracy of the machine-learned diagnostic model 52 by using developed image data and abnormal location data as the test data 56, or by comparing the diagnosis results of the machine-learned diagnostic model 52 with the client-side changes indicated by the change registration data 57.
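To make concrete how the teacher data could be partitioned by survey condition so that a separate diagnostic model is trained per condition, here is a minimal sketch; the tuple layout of a training sample is an assumption chosen for illustration.

```python
from collections import defaultdict

def group_teacher_data(samples):
    """Group labeled developed images by (pipe_type, cleaned) so that one
    diagnostic model can be trained per survey-condition group.
    Each sample is assumed (for illustration) to be a tuple
    (developed_image, anomaly_label, pipe_type, cleaned)."""
    groups = defaultdict(list)
    for image, anomaly_label, pipe_type, cleaned in samples:
        groups[(pipe_type, cleaned)].append((image, anomaly_label))
    return dict(groups)
```

Each resulting group would then be used to train the diagnostic model for that pipe type and cleaning state.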
 The pipe type estimation model generation unit 46 is a functional unit that generates, by machine learning, a pipe type estimation model for estimating the pipe type from a developed image. Any machine learning algorithm may be used; in one example, a deep learning image recognition algorithm can be used that takes developed image data as input and outputs the type of the sewage pipe represented by that developed image (Hume pipe, ceramic pipe, PVC pipe, and so on). In machine learning using deep learning, a large number of teacher data (training data) pairs are prepared, each consisting of developed image data (input) and the type of the sewage pipe represented by that developed image (Hume pipe, ceramic pipe, PVC pipe, and so on; output), and the determination accuracy of the image recognition model can be improved by training it on these teacher data (in one example, the training data are prepared as a large number of developed images labeled with pipe type information). A pipe type estimation model can also be built that estimates not only the pipe type but also conditions such as whether the pipe has been cleaned. In this case, a large number of teacher data (training data) pairs are prepared, each consisting of developed image data (input) and the "pipe type of the sewage pipe (Hume pipe, ceramic pipe, PVC pipe, and so on) together with survey conditions such as the cleaning state" represented by that developed image (output), and the determination accuracy of the image recognition model can be improved by training it on these teacher data (in one example, the training data are prepared as a large number of developed images labeled with pipe type and survey condition information, such as "Hume pipe, cleaned"). The pipe type estimation model generation unit 46 stores the generated pipe type estimation model in the storage unit 39 as the machine-learned pipe type estimation model 53. The pipe type estimation model transmission unit 47 transmits the machine-learned pipe type estimation model generated by the pipe type estimation model generation unit 46 to the client machine 1 using the communication unit 41. The pipe type estimation model evaluation unit 49 evaluates the determination accuracy of the machine-learned pipe type estimation model 53 by using developed image data and pipe type data (which may include information on survey conditions such as the cleaning state) as the test data 56, or by comparing the diagnosis results of the machine-learned pipe type estimation model 53 with the client-side changes indicated by the change registration data 57.
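One simple way to realize the "Hume pipe, cleaned"-style combined labels described above is a composite label string per developed image; the exact label format here is an assumption for illustration.

```python
def make_condition_label(pipe_type, cleaned):
    """Compose a single classification label from pipe type and cleaning
    state, e.g. 'hume/cleaned' (the format is illustrative only)."""
    return f"{pipe_type}/{'cleaned' if cleaned else 'uncleaned'}"

def split_condition_label(label):
    """Recover (pipe_type, cleaned) from a composite label."""
    pipe_type, state = label.split("/")
    return pipe_type, state == "cleaned"
```

With such labels, a single classifier can output both the pipe type and the survey condition in one prediction.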
 The storage unit 39 is a storage (recording) device equipped with a hard disk drive, an SSD, or the like. As programs executed by the processor 42 to make the control unit 38 function as the various functional units described above, it stores the machine learning related programs 51 (a diagnostic model generation program, a pipe type estimation model generation program, a diagnostic model transmission program, a pipe type estimation model transmission program, a diagnostic model evaluation program, and a pipe type estimation model evaluation program) and the various control and display programs 54. The storage unit 39 also stores, as a machine-learned model generated by the diagnostic model generation unit 44 of the server machine 37, the machine-learned diagnostic model 52 that determines abnormal locations from a developed image by image recognition or the like, and, as a machine-learned model generated by the pipe type estimation model generation unit 46 of the server machine 37, the machine-learned pipe type estimation model 53 that estimates the pipe type from a developed image. As shown in FIG. 7, the storage unit 39 further stores teacher data 55, test data 56, and change registration data 57.
 Operation of the Sewage Pipe Interior Abnormality Diagnosis Assistance System
 The operation of the sewage pipe interior abnormality diagnosis assistance system will now be described with reference also to FIGS. 8 to 32. FIG. 8 is a flowchart showing the operation flow of the system. Although the following example assumes that the photographed images form a moving image, the system can be operated on the same principle using still photographed images (for example, image information equivalent to a moving image can be obtained by having a camera vehicle equipped with a still camera travel through the sewage pipe while intermittently taking still pictures).
 It is assumed that the moving image of the sewage pipe interior has already been captured, for example by having an unmanned aerial vehicle equipped with a camera fly through and photograph the pipe as already described, and that it is stored in the storage unit 3 of the client machine 1 as the photographed moving image (photographed image) data 24. While referring to the screen displayed on the display device of the input/output unit 4 by the various control and display unit 16 (which also handles processing such as displaying the browsing screen shown in FIG. 25 and providing the interface that accepts change registrations and the like from the user), the user of the client machine 1 selects, from the list of photographed moving image data, the moving image to be loaded, via the keyboard, mouse, or the like of the input/output unit 4 (the same applies to the other inputs and outputs from the user). The various control and display unit 16 loads the selected moving image data from among the photographed moving image data 1 to N (N is an integer of 1 or more) included in the photographed image data 24 and displays it on the display device of the input/output unit 4 (step S101).
 Next, while referring to the screen displayed on the display device of the input/output unit 4 by the various control and display unit 16, the user of the client machine 1 inputs an instruction to create a developed view, and the developed image creation unit 8 creates a developed view (developed image) from the selected moving image data (step S102).
 FIGS. 9 to 17 are diagrams explaining the principle of developed view creation in the flow of FIG. 8. However, developed view creation in the present embodiment is not limited to what is shown in FIGS. 9 to 17, and any method may be used.
 1. The developed image creation unit 8 first captures the moving image frame by frame using the functions of OpenCV (an open-source computer vision library, included in the developed image generation program 17), generating image data for each frame (FIG. 9).
 2. Next, the developed image generation unit 8 calculates the distance advanced (dist) from two consecutive images (FIG. 10). Here, the photographed moving image includes a distance meter display (in metres), as shown in FIG. 11. If the distance meter reading in one image is A and the reading in the next image is B, the distance advanced is
 dist = B - A.
 The distance meter at the upper left of the image is read by image recognition (pattern matching). If there is no distance meter, the processing from step 4 onward is performed with the cropping angle wdeg = 181.0.
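The distance bookkeeping of this step reduces to a subtraction of consecutive meter readings, with the fixed fallback cropping angle applied when no meter is visible; a direct sketch (the function and constant names are ours):

```python
DEFAULT_WDEG = 181.0  # cropping angle used when no distance meter is present

def advance_distance(meter_a, meter_b):
    """Distance advanced between two consecutive frames, from the on-screen
    distance meter readings A and B (in metres): dist = B - A."""
    return meter_b - meter_a
```

The readings A and B themselves would come from the pattern-matching step that reads the meter display in each frame.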
 3. Next, the developed image generation unit 8 calculates the cropping angle (wdeg) of image A from the obtained distance (dist) in the following procedure (FIG. 12).
 (1) Calculate the distance per pixel at the point rl from the circumference of rl and the circumference obtained from the actual pipe diameter (an input parameter):
 Figure JPOXMLDOC01-appb-M000002
 where "*" denotes multiplication.
 (2) Shrink rl in the direction of rs and calculate the rs at which the total distance equals the input distance:
 Figure JPOXMLDOC01-appb-M000003
 (3) Calculate the cropping angle (wdeg) from the rs obtained above:
 Figure JPOXMLDOC01-appb-M000004
 4. Next, the developed image generation unit 8 crops the image based on the cropping angle in the following procedure (FIGS. 13 to 15).
 (1) Recalculate rs from the cropping angle wdeg.
 (2) Set the height of the developed view to the number of circumference pixels of rl plus 1.
 (3) Calculate the width of the developed view with the following formula:
 Figure JPOXMLDOC01-appb-M000005
 (4) Create the developed image by rectification (normal image conversion) in units of the circumferential angle of rl:
 Figure JPOXMLDOC01-appb-M000006
 * param: a variable parameter, set by separate tuning.
 Calculate the development parameters from the angle parameter above:
 Figure JPOXMLDOC01-appb-M000007
 Obtain the reference pixel for each unit of developed view width:
 Figure JPOXMLDOC01-appb-M000008
 Figure JPOXMLDOC01-appb-I000009
 Figure JPOXMLDOC01-appb-M000010
 Figure JPOXMLDOC01-appb-M000011
 Figure JPOXMLDOC01-appb-M000012
 Figure JPOXMLDOC01-appb-M000013
 5. Next, the developed image generation unit 8 levels the luminance of the developed image in the following procedure (FIG. 16; leveling is performed because the lighting may be uneven).
 - Obtain the average luminance (lY) of the ceiling portion.
 - If the difference between the average luminance (rY) of the target row and the luminance (Y) of the pixel being examined is at most a threshold ZZ, correct the luminance:
 Figure JPOXMLDOC01-appb-M000014
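Since the correction formula itself appears only as an equation image, the following NumPy sketch shows one plausible reading of the step: pixels of each row lying within ZZ of the row average rY are shifted toward the ceiling average lY. The correction Y + (lY - rY), and the default values of zz and the ceiling band, are assumptions, not the exact formula of the embodiment.

```python
import numpy as np

def level_brightness(img, zz=60, ceiling_rows=8):
    """Level the luminance of a grayscale developed image.
    lY: average luminance of the ceiling band (topmost rows).
    rY: average luminance of the current row.
    Pixels with |Y - rY| <= zz are corrected; the correction used here,
    Y + (lY - rY), is an illustrative assumption."""
    img = img.astype(np.float64)
    ly = img[:ceiling_rows].mean()          # lY: ceiling average
    out = img.copy()
    for r in range(img.shape[0]):
        ry = img[r].mean()                  # rY: row average
        mask = np.abs(img[r] - ry) <= zz    # only pixels near the row mean
        out[r, mask] += ly - ry             # shift toward the ceiling level
    return np.clip(out, 0.0, 255.0)
```

On a developed image whose lower rows are darker than the ceiling band, this raises those rows to the ceiling's average level while leaving outlier pixels (potential anomalies) uncorrected.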
 6. Finally, the developed image generation unit 8 performs the joining processing of the developed views (FIG. 17).
 FIG. 18 schematically shows an example of a developed view obtained by developed view creation. In one example, the inner surface of a cylindrical sewage pipe is represented as a rectangular plan view, as shown in FIG. 18. Abnormal locations 60 in the sewage pipe inner wall 58 are automatically detected by the image analysis and diagnostic model determination described later, and the detection results can be changed and registered on the client side. The developed image generated by the developed image generation unit 8 is displayed on the display device of the input/output unit 4.
 Next, the user of the client machine 1 selects either (A) image analysis or (B) analysis by a diagnostic model as the abnormality analysis method for the displayed developed image (step S103).
 (A) When image analysis is selected, the image analysis unit 10 performs image analysis of abnormal locations on the developed image generated in step S102, following the procedure shown in FIG. 19 (step S104).
 FIG. 19 is a flowchart showing the operation flow of the image analysis in the flow of FIG. 8. The image analysis can be implemented using the functions of OpenCV (included in the image analysis program 18), as with developed image generation. First, the user manually sets the pipe type of the developed image and inputs it via the input/output unit 4, and the pipe type and survey condition reception unit 28 accepts the input (step S201). In one example, the user selects from among Hume pipe, ceramic pipe, and PVC pipe. Next, the user manually sets the survey conditions of the developed image and inputs them via the input/output unit 4, and the pipe type and survey condition reception unit 28 accepts the input (step S202). In one example, whether the pipe has been cleaned is set and input. Next, the binarization parameter setting unit 29 generates the parameters for image conversion (binarization) from the pipe type and survey conditions (step S203). Specifically, the binarization parameter setting unit 29 sets a binarization threshold predetermined for each pipe type and survey condition (it sets the OpenCV threshold parameter). The image conversion unit 30 then performs binarization based on the parameters produced in the parameter generation step, converting the developed image into a black-and-white (binary) image (step S204), and the binary image analysis unit 31 divides the binarized image into pixel units (step S205). Next, the binary image analysis unit 31 scans the binary image pixel by pixel, marks the portions containing black in black, extracts them as black clusters (feature points) (step S206), and analyzes the extracted feature points to determine abnormal locations (step S207). Specifically, the binary image analysis unit 31 uses an algorithm that evaluates the characteristics of each abnormality type (damage, crack, etc.) to determine whether a cluster is an abnormal location and, if so, which type of abnormality it is. Any algorithm may be used here, such as the object recognition algorithms known from OpenCV and elsewhere; abnormal locations are determined using information (parameters) on the connectivity, spread, and proportion of black portions, predetermined for each abnormality type. As concrete criteria, determination conditions can be programmed from accumulated know-how (patterns); for example, a location may be determined to be abnormal when the number of black pixels exceeds a predetermined threshold, and classified as damage, a crack, or the like according to the shape of the black portion.
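A compact sketch of steps S203 to S207 follows, with NumPy standing in for the OpenCV calls (`cv2.threshold` with `THRESH_BINARY_INV` would produce an equivalent binary mask). The threshold table, the pixel-count criterion, and the aspect-ratio rule are illustrative values, not the predetermined parameters of the actual program.

```python
import numpy as np

# S203: illustrative per-(pipe type, cleaned) binarization thresholds
BIN_THRESHOLDS = {("hume", True): 90, ("hume", False): 70,
                  ("ceramic", True): 100, ("pvc", True): 110}

def binarize(gray, pipe_type, cleaned):
    """S204: pixels darker than the condition-specific threshold become 1
    ('black', i.e. candidate anomaly pixels)."""
    t = BIN_THRESHOLDS.get((pipe_type, cleaned), 90)
    return (gray < t).astype(np.uint8)

def judge_black_cluster(binary, min_black=50, crack_aspect=3.0):
    """S206-S207 (simplified): count black pixels, then classify by the
    shape of the black region's bounding box; both criteria are
    illustrative stand-ins for the per-type parameters in the text."""
    n_black = int(binary.sum())
    if n_black < min_black:
        return "normal"
    ys, xs = np.nonzero(binary)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    aspect = max(h, w) / max(1, min(h, w))   # elongated regions -> crack
    return "crack" if aspect >= crack_aspect else "damage"
```

A real implementation would extract each connected black cluster separately (for example with `cv2.connectedComponents`) before applying such shape rules per cluster.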
 When (B) analysis by a diagnostic model is selected in step S103, the client-side diagnostic model determination unit 11 performs image analysis of abnormal locations on the developed image generated in step S102, following the procedure shown in FIGS. 20 and 21 (step S105).
 First, the diagnostic model reception unit 32 downloads the latest diagnostic model from the cloud server 37 and stores it in the storage unit 3 as the machine-learned diagnostic model 20 (step S301). The abnormal location model determination unit 34 then determines abnormal locations using the downloaded machine-learned diagnostic model 20 (step S302). As described above, the machine-learned diagnostic model is, in one example, a diagnostic model trained with a deep learning image recognition algorithm that takes developed image data as input and outputs information on the abnormal locations in that developed image data (position, type, image data around each abnormal location, and the like). The abnormal location model determination unit 34 determines abnormal locations by feeding the developed image data to the machine-learned diagnostic model 20 as input and obtaining the abnormal location information (position, type, image data around each abnormal location, and the like) as output.
 As also described in the working examples below, determination accuracy can be expected to improve by first clustering the developed image data with a method such as the k-means method and then performing the model determination of abnormal locations. In this case, the developed image classification unit 33 classifies the developed image data by executing the abnormal location model determination program, after which the abnormal location model determination unit 34 determines abnormal locations using the diagnostic model 20 machine-learned with any algorithm. In one example, the developed image classification unit 33 first clusters the developed image data by the k-means method; the abnormal location model determination unit 34 then extracts feature values from the developed image data according to a feature extraction method determined by the clustering result, using multiple parameters (background color, maximum brightness difference, the range regarded as background, and so on), and feeds the extracted feature values into a diagnostic model machine-learned with an algorithm such as a random forest or a neural network, obtaining the abnormal location information (position, type, image data around each abnormal location, and the like) as output.
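The cluster-then-classify pipeline described above can be sketched with scikit-learn (assuming it is available); the three feature values and the random dummy data here are placeholders for the real per-cluster feature extraction.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# placeholder feature vectors (e.g. background colour, maximum brightness
# difference, background range) for 40 developed images
features = rng.random((40, 3))
labels = rng.integers(0, 2, size=40)     # 0 = normal, 1 = abnormal (dummy)

# 1) cluster the developed images by k-means; 2) train one random-forest
# diagnostic model per cluster
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
forests = {}
for c in range(2):
    in_cluster = kmeans.labels_ == c
    forests[c] = RandomForestClassifier(n_estimators=20, random_state=0)
    forests[c].fit(features[in_cluster], labels[in_cluster])

def diagnose(feature_vector):
    """Route a feature vector to its cluster's model; returns 0 or 1."""
    x = np.asarray(feature_vector, dtype=float).reshape(1, -1)
    c = int(kmeans.predict(x)[0])
    return int(forests[c].predict(x)[0])
```

In the embodiment, the feature extraction method itself depends on the cluster, so a per-cluster extractor would sit in front of each forest; the sketch fixes one feature layout for brevity.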
 Note that in step S105, the determination of abnormal locations may be combined with the estimation, by the client-side pipe type estimation unit 12, of the pipe type and of survey conditions such as the cleaning state. In this case, in step S301, in addition to the above processing, the pipe type estimation model reception unit 35 downloads the latest pipe type estimation model from the cloud server 37 and stores it in the storage unit 3 as the machine-learned pipe type estimation model 21. Then, in step S302, the pipe type model estimation unit 36 estimates the pipe type and survey conditions such as the cleaning state using the downloaded machine-learned pipe type estimation model 21, and the abnormal location model determination unit 34 determines abnormal locations using the machine-learned diagnostic model 20 corresponding to the estimation result of the pipe type model estimation unit 36 (it is assumed that, in step S301, the diagnostic model reception unit 32 has received and stored in the storage unit 3 machine-learned diagnostic models 20 for the various pipe types and for survey conditions such as the cleaning state).
 In step S106, the determination result display unit 13 outputs the information on the abnormal locations (diagnosis result) determined in step S104 or S105 to the display device of the input/output unit 4 as a system screen. In one example, the diagnosis result browsing screen shown in FIG. 25 is displayed on the display device of the input/output unit 4 by the determination result display unit 13. In the browsing screen of FIG. 25, a list of captured video data files is displayed at the upper left; the user selects an arbitrary captured video and instructs the above-described developed image generation and abnormal-location diagnosis by pressing buttons in the browsing screen. A developed image is displayed at the bottom of the browsing screen, and colored frames (shown in grayscale in the drawings of this application) are displayed so as to surround the locations determined to be abnormal by image analysis or by diagnosis model analysis. Colored lines (likewise shown in grayscale) are displayed so as to overlap the connection portions (joint portions). (When the image analysis algorithm is built, or when the abnormal-location determination model is generated by machine learning, it can be configured to output not only the information on abnormal locations but also the positions of the connection portions (joint portions) and the like. For example, when the diagnosis model is machine-learned by deep learning, the model can be trained with developed image data paired, as teacher data, with joint-portion information in addition to abnormal-location information; alternatively, an image analysis algorithm can be constructed so that a linear region that cuts across the sewage pipe in the binary image is judged to be a connection portion (joint portion).)
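The image-analysis alternative just mentioned (judging a linear region that cuts across the pipe in the binary image to be a joint) could be sketched like this. In a developed image, a circumferential joint unrolls into a line spanning the full image width; the grid, threshold, and function name are illustrative assumptions only.

```python
def find_joint_rows(binary, min_fill=0.9):
    """In a binarized developed image (rows of 0/1, 1 = foreground), flag
    rows whose foreground fraction reaches min_fill: a near-solid line
    running across the full width is a candidate joint (connection) line."""
    width = len(binary[0])
    return [y for y, row in enumerate(binary) if sum(row) / width >= min_fill]

developed = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],  # a vertical crack does not fill a whole row
    [1, 1, 1, 1, 1],  # a joint line crosses the entire developed image
    [0, 0, 0, 0, 0],
]
joints = find_joint_rows(developed)
```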
 On the diagnosis result browsing screen displayed in step S106, the user can register changes to (edit) the determination result for abnormal locations. In one example, when a user viewing the browsing screen of FIG. 25 finds an abnormal location in a region not surrounded by a colored frame, the user can select the found abnormal location with a mouse operation on the input/output unit 4 (drag and drop or the like) and, following the on-screen guidance, select information such as the type of abnormality (breakage, crack, etc.) to edit the determination result for that location. Here, the user may likewise be able to edit investigation conditions such as pipe type and cleaning state.
 When editing is finished and the user has completed confirming and changing the abnormal locations, the user presses the completion button on the browsing screen, whereupon the changes are stored in the storage unit 3 as change registration data 27 and the abnormal locations are finalized (step S107). Even when there is no change, the user confirms the output abnormal locations by a user operation and finalizes them. Subsequently, the image data of the abnormal locations and the abnormal-location data after the user's changes (which may also include data after changes to investigation conditions such as pipe type and cleaning state) are transmitted from the communication unit 5 of the client machine 1 to the server machine 37 (abnormal-location data upload); the communication unit 41 of the server machine 37 receives them, and the storage unit 39 stores the changed abnormal-location data together with the abnormal-location image data as change registration data 57 (step S108). The diagnosis model generation unit 44 and the pipe type estimation model generation unit 46 of the server machine 37 use the received abnormal-location data after the user's changes, the data after changes to investigation conditions such as pipe type and cleaning state, and the corresponding developed image data (assumed to be received from the client machine 1) as teacher data to machine-learn the abnormal-location diagnosis model and the pipe type estimation model again, generating machine-learned models and thereby improving the determination accuracy of both models (step S109). Subsequently, the diagnosis model evaluation unit 48 and the pipe type estimation model evaluation unit 49 verify the accuracy of the generated machine-learned abnormal-location diagnosis model and pipe type estimation model, for example using the test data 56 (step S110), and the diagnosis model transmission unit 45 and the pipe type estimation model transmission unit 47 transmit them to the client machine 1, so that in the next diagnosis-model analysis the client machine 1 can use an abnormal-location diagnosis model and a pipe type estimation model with improved determination accuracy. By repeating the cycle of steps S105, S106, S107, S108, S109, S110, and S105 in this way, teacher data on abnormal locations accumulates, and the abnormality analysis accuracy of the diagnosis model and the estimation accuracy of the pipe type estimation model generated by machine learning improve.
 As already stated, the abnormal-location diagnosis model and the pipe type estimation model may be models machine-learned by deep learning, but they are not limited to this; they may be built with any machine learning algorithm, such as a random forest or a neural network. FIG. 21 shows an example of the operation flow of diagnosis model generation.
 In step S401, the diagnosis model generation unit 44 classifies the images of abnormal locations by clustering. In one example, the diagnosis model generation unit 44 performs the clustering by the k-means method. Here, an arbitrary value N is set as the number of classes, and the images are classified into N classes. In step S402, the diagnosis model generation unit 44 generates classifier-creation parameters for each image class as follows.
[Parameter generation method]
Set the OpenCV parameters determined for each class (listed below).
<Vector file creation parameters>
-inv, -randinv ... specify when inverting colors
-bgcolor ... background color
-bgthresh ... range regarded as background
-maxidev ... maximum brightness deviation
-maxxangle
-maxyangle ... maximum rotation angles (rad)
-maxzangle
-w ... vector width
-h ... vector height
<Classifier creation parameters>
-featureType ... how features are found
 HAAR: captures features from brightness differences in the image
 LBP: captures features from the distribution (histogram) of luminance
 HOG: captures features from the distribution of luminance gradient directions
-bt ... type of boosted classifier
 DAB: Discrete AdaBoost
 RAB: Real AdaBoost
 LB: LogitBoost
 GAB: Gentle AdaBoost
-minHitRate ... minimum hit rate
-maxFalseAlarmRate ... maximum false alarm rate
-weightTrimRate ... specifies whether to trim samples by weight
-maxDepth ... maximum depth of the weak classifiers
-maxWeakCount ... maximum number of weak classifiers needed to achieve maxFalseAlarmRate
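As a rough illustration, a per-class parameter set like the one above could be assembled into an `opencv_traincascade`-style command line. The flag names are those listed above; the concrete values are hypothetical and not the values used by the system.

```python
def build_traincascade_args(params):
    """Turn a per-class parameter dict into a flag/value argument list."""
    args = ["opencv_traincascade"]
    for flag, value in params.items():
        args += ["-" + flag, str(value)]
    return args

# Hypothetical values for one image class produced by the k-means step:
class_params = {
    "featureType": "LBP",
    "bt": "GAB",
    "minHitRate": 0.995,
    "maxFalseAlarmRate": 0.5,
    "weightTrimRate": 0.95,
    "maxDepth": 1,
    "maxWeakCount": 100,
}
args = build_traincascade_args(class_params)
```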
 In step S403, the diagnosis model generation unit 44 creates classifiers based on the parameters generated by the classifier-creation parameter generation of step S402. The classifiers here may be classifiers based on any machine learning algorithm, such as a random forest or a neural network.
 FIG. 22 illustrates the concept of a random forest (learning stage) as an example of a learning algorithm, and FIG. 23 illustrates the concept of a random forest (operation stage). An overview of random forests follows. In the operation stage, the final model machine-learned as a random forest uses multiple models called decision trees and obtains its final output by taking the majority vote (classification) or the average (regression) of the prediction (estimation) results of the individual decision trees. In the learning stage, a large set of explanatory variables is divided into multiple subsamples by random sampling with replacement, a technique called the bootstrap method, and a large amount of teacher data in each subsample is given to each decision tree so that each tree learns separately and independently; learning thus takes place over multiple models (decision trees). The final machine learning model generated by the random forest algorithm can be interpreted as an ensemble of multiple decision trees.
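The operation stage just described, in which each decision tree predicts and the forest takes the majority vote, can be sketched as follows. The three stub "trees" and the feature names are hypothetical stand-ins for trees learned from bootstrap subsamples.

```python
from collections import Counter

def forest_predict(trees, x):
    """Majority vote (classification) over the predictions of the individual
    decision trees; for regression one would average instead."""
    votes = Counter(tree(x) for tree in trees)
    return votes.most_common(1)[0][0]

# Stub trees; real ones are learned independently on bootstrap subsamples.
trees = [
    lambda x: "crack" if x["max_brightness_diff"] > 50 else "normal",
    lambda x: "crack" if x["dark_pixel_ratio"] > 0.1 else "normal",
    lambda x: "normal",  # a tree that happens to disagree
]
label = forest_predict(trees, {"max_brightness_diff": 80, "dark_pixel_ratio": 0.3})
```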
 FIG. 24 illustrates the concept of a neural network as an example of a learning algorithm. An overview of neural networks follows. In a neural network, one or more hidden layers (intermediate layers) exist between the input layer and the output layer; node values in the input layer are transformed into node values in the hidden layers, and node values in the hidden layers are transformed into node values in the output layer, so that output data (objective variable data, classification results) are obtained from input data (explanatory variable data). The transformation of node values from one layer to the next is performed by a linear transformation or by a nonlinear transformation using an activation function. Each connection between nodes, both between the input layer and a hidden layer and between a hidden layer and the output layer, has its own weight value, and these weights are updated by training the network with teacher data consisting of explanatory variables and objective variables. During learning, the weights of each layer are updated by error backpropagation: the difference between the desired output and the actual output is computed so as to become small and is reflected in each layer. A model can be constructed freely by adjusting hyperparameters such as the number of intermediate layers and the number of nodes in each intermediate layer. Since random forests and neural networks are well-known machine learning algorithms, they are not described in further detail here.
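A minimal forward pass matching the description above (a linear transformation followed by a nonlinear activation at each layer). The weights here are arbitrary examples; during training they would be adjusted by error backpropagation, which is not shown.

```python
import math

def sigmoid(z):
    """A common activation function for the nonlinear transformation."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, hidden_weights, output_weights):
    """Input layer -> one hidden layer -> single output node."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

y = forward([1.0, 0.0],
            hidden_weights=[[1.0, -1.0], [-1.0, 1.0]],
            output_weights=[1.0, -1.0])
```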
 In step S404, the diagnosis model generation unit 44 generates the diagnosis model based on the classifiers created for each abnormality class.
 When the abnormal locations are finalized in step S107, the report output unit 15 outputs a report (step S111). The report may be output on paper by a printer connected to the client machine 1, or output as an electronic file such as an EXCEL (registered trademark) file. FIG. 26 shows an example of an output report (an investigation record table for a main pipe), and FIGS. 27 to 32 show enlarged portions of the investigation record table of FIG. 26 (FIGS. 29 to 32 are developed images on separate sheets). Note, however, that these do not show actual abnormal locations; they were entered into the system as examples.
 Among the items of the report shown in FIGS. 27 and 28, the investigation number 61, investigation name 62, upstream manhole number 63, downstream manhole number 64, (upstream) manhole type, manhole depth, and earth cover 65, pipe type, pipe diameter, and route length 66, (downstream) manhole type, manhole depth, and earth cover 67, abnormal-location image 68, number of pipes, number of lateral pipes, number of defective pipes, and year of installation 69, distance 70, content and distance of the abnormal (breakage) location 71, content and distance of the abnormal (crack) location 72, and number of each abnormality content per abnormal location 73 are items output by the system; the values registered in the system and their aggregated values are output. The other fields are items prepared in advance as a template. In particular, the abnormal-location image 68 outputs the image of the abnormal location registered in the system (the image of the abnormal location determined by image analysis or diagnosis model analysis and changed by the user as necessary), and the abnormal-location content and distance fields 71 and 72 output the abnormal-location information registered in the system (abnormality type, rank, distance from the left end). FIGS. 29 to 32 display the developed images on separate sheets; in these as well, the report is output based on the registered information, such as route number, upstream manhole number, downstream manhole number, investigation direction, number of pipes, length, pipe diameter, pipe type, and pipe length 74, breakage location 75, route number, upstream manhole number, downstream manhole number, investigation direction, number of pipes, length, pipe diameter, pipe type, and pipe length 76, and crack location 77.
 Performance evaluation of the sewage pipe interior abnormality diagnosis support system
 The performance evaluation results of a sewage pipe interior abnormality diagnosis support system built as a working example of the present invention are described below.
(System configuration of the working example)
 In the system of this working example, video of sewer pipes captured by a camera mounted on a TV camera vehicle with the following specifications:
- Travel speed: 21 m/min
- Applicable pipe diameter: 200 to 1200 mm
- Resolution: CMOS sensor
- Lens: ultra-wide-angle lens (185° angle of view)
- Pixel count: 5 megapixels
- Other: real-time viewing available
is imported into a personal computer (client machine), and a developed image is created. Abnormal locations in the created developed image are determined using image analysis and a diagnosis model generated by machine learning, and the locations mechanically determined to be abnormal are displayed on the developed image. The investigator (user) judges abnormal locations such as defects based on this result and prepares an investigation report. On the cloud side, the investigator's final judgments and the system's determination results (including false detections) are managed as teacher data, and a machine learning model (diagnosis model) is generated. This diagnosis model optimizes the machine learning model on the personal computer side and is applied to subsequent abnormal-location determinations. By repeating this cycle from abnormal-location determination to machine-learning-model accuracy improvement, the determination accuracy improves, and the work burden and investigation time required for diagnosis can be reduced.
(Inspection targets)
 The determination accuracy of the system of this working example is verified by taking the abnormal-location information in the pipeline investigation result reports (diagnosis results by humans) as the ground truth and comparing it with the system's determination results. The data to be verified were sampled from the results of screening investigations conducted in three cities (total pipe length 11.7 km), selecting pipe videos whose span-level abnormality judgments were rank A or rank B, mainly due to cracks or breakage (see Table 1 in FIG. 33).
(Abnormal-location determination by image analysis)
 Abnormal-location determination by image analysis is performed based on differences in image characteristics between abnormal locations and the rest of the image. FIG. 34 shows the processing flow and characteristic processing images. For the developed images generated from the screening investigation results, investigation conditions such as pipe type and whether cleaning has been performed are set, and image processing parameters, such as brightness and whether noise removal is needed, are set according to each pipe type and the condition inside the pipe. These parameters take values that facilitate the extraction of characteristic locations based on the conditions of the in-pipe investigation. In the analysis processing, the converted image is then divided into pixel units, and these pixel units are marked in black. The marked black clusters are extracted as feature points, and abnormal locations and abnormality types are determined from their distribution (connectivity, spread, proportion, and so on).
(Verification results)
 Abnormal-location determination by image analysis was performed on the verification data (Table 1 in FIG. 33). For abnormal locations due to breakage and cracks, all rank-a and rank-b abnormalities were identified as abnormal, although 23% to 35% of the flagged locations were not actually abnormal. On the other hand, for minor rank-c abnormalities, some locations could not be detected, and the abnormal-location identification rate was 96.7% (Table 2 in FIG. 35). Although determination by image analysis depends on the investigation video, including the investigation conditions, it is considered that relatively large abnormalities can be identified without omission, albeit with some non-abnormal locations included.
(Improving abnormal-location determination accuracy with machine learning)
 The verification results so far confirm that a certain level of accuracy can be ensured even in abnormal-location determination by image processing. However, whenever an abnormality that could not be extracted is found, the parameters must be optimized and the abnormal-location determination method improved, so challenges remain for achieving lasting accuracy improvement. We therefore considered abnormal-location determination by machine learning useful for the purpose of lasting accuracy improvement; in this study, although the amount of data was small, we generated a machine learning model using diagnosis results as teacher data and verified its determination accuracy.
(Generation of the diagnosis model by machine learning)
 The abnormal-location determination accuracy of a model generated by machine learning depends on the amount of training data. In this verification, therefore, we compared determination accuracy for different amounts of training data, targeting breakage and cracks, for which a relatively large number of diagnosis results were available as teacher data. Multiple diagnosis models were generated by varying the amount of training data while keeping the ratio of abnormal to non-abnormal locations the same (7:3). Each diagnosis model was then used to determine abnormal locations on data other than its training data. Because the training data were limited, verification targeted characteristic abnormalities of about rank a (see FIG. 36), for which mechanical determination is relatively easy. The diagnosis model was generated with a flow that uses a separate classifier for breakage and for cracks (machine learning models generated as a result of learning (deep learning) on each image class obtained by clustering) (see FIG. 37). The images of abnormal locations are classified by k-means clustering into classes of images with common features. Machine learning (deep learning) with parameters that extract those features then generates a diagnosis model that determines abnormalities.
(Verification results of abnormal-location determination using machine learning)
 By increasing the training data, the abnormal-location determination results using the machine learning model reached 90% or more for breakage and about 70% for cracks (FIGS. 38 and 39). This is considered to be because crack shapes have fewer distinctive features than breakage. The results of this verification confirmed that a certain level of accuracy can be ensured for abnormalities that are relatively easy to identify, even with a small amount of data, and that determination accuracy can be improved by increasing the training data in the future.
 The present invention can be used for abnormality diagnosis of sewage pipes of various pipe types, such as Hume pipes, ceramic pipes, and PVC pipes.
1         client machine
2         control unit
3         storage unit
4         input/output unit
5         communication unit
6         processor
7         temporary memory
8         developed image generation unit
9         client-side abnormal location determination unit
10        image analysis unit
11        client-side diagnosis model determination unit
12        client-side pipe type estimation unit
13        determination result display unit
14        change registration reception unit
15        report output unit
16        various control and display units
17        developed image generation program
18        image analysis program
19        machine learning related programs
20        machine-learned diagnosis model
21        machine-learned pipe type estimation model
22        report output program
23        various control and display programs, etc.
24        captured video (image) data
25        developed image data
26        abnormal location and pipe type data
27        change registration data
28        pipe type and investigation condition reception unit
29        binarization parameter setting unit
30        image conversion unit
31        binary image analysis unit
32        diagnosis model receiving unit
33        developed image classification unit
34        abnormal location model determination unit
35        pipe type estimation model receiving unit
36        pipe type model estimation unit
37        (cloud) server machine
38        control unit
39        storage unit
40        input/output unit
41        communication unit
42        processor
43        temporary memory
44        diagnosis model generation unit
45        diagnosis model transmission unit
46        pipe type estimation model generation unit
47        pipe type estimation model transmission unit
48        diagnosis model evaluation unit
49        pipe type estimation model evaluation unit
50        various control and display units
51        machine learning related programs
52        machine-learned diagnosis model
53        machine-learned pipe type estimation model
54        various control and display programs, etc.
55        teacher data
56        test data
57        change registration data
58        sewage pipe inner wall
59        sewage
60        abnormal location
61        investigation number
62        investigation name
63        upstream manhole number
64        downstream manhole number
65        (upstream) manhole type, manhole depth, earth cover
66        pipe type, pipe diameter, route length
67        (downstream) manhole type, manhole depth, earth cover
68        abnormal location image
69        number of pipes, number of lateral pipes, number of defective pipes, year of installation
70        distance
71        content and distance of abnormal (breakage) location
72        content and distance of abnormal (crack) location
73        number of each abnormality content per abnormal location
74        route number, upstream manhole number, downstream manhole number, investigation direction, number of pipes, length, pipe diameter, pipe type, pipe length
75        breakage location
76        route number, upstream manhole number, downstream manhole number, investigation direction, number of pipes, length, pipe diameter, pipe type, pipe length
77        crack location
1000      sewer pipe
1001      ground surface
1002,1003 manholes
1100      connection portion (joint portion)
1200      tubular space

Claims (16)

  1.  A sewage pipe interior abnormality diagnosis support system that supports abnormality diagnosis of a sewage pipe performed using captured images, which are videos or still images captured inside the sewage pipe, the system comprising:
     a captured image import unit that imports captured images of the inside of the sewage pipe;
     a developed image generation unit that generates a developed image of the inside of the sewage pipe from the captured images;
     an abnormal location determination unit that determines abnormal locations on the inside of the sewage pipe from the developed image;
     a determination result display unit that displays, as a determination result, the locations determined to be abnormal by the abnormal location determination unit; and
     a change registration reception unit that receives registration of changes to the determination result.
  2.  The system according to claim 1, wherein the abnormal location determination unit comprises an image analysis unit, the image analysis unit comprising:
     a pipe type and survey condition reception unit that accepts input of a pipe type and survey conditions;
     a binarization parameter setting unit that sets binarization parameters according to the pipe type and the survey conditions;
     an image conversion unit that converts the developed image into a binary image using the binarization parameters; and
     a binary image analysis unit that analyzes the binary image to determine an abnormal location.
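Claim 2's image analysis unit binarizes the developed image with parameters chosen per pipe type and survey conditions, then analyzes the binary result. A rough sketch of that flow; the parameter table, the threshold values, the pipe-type names, and the column-ratio analysis are illustrative assumptions, not taken from the claims:

```python
import numpy as np

# Hypothetical binarization parameters per pipe type; names and values
# are illustrative only.
BINARIZE_PARAMS = {
    "reinforced_concrete": {"threshold": 80},
    "vitrified_clay": {"threshold": 60},
}

def to_binary(developed, pipe_type):
    """Convert a grayscale developed image to a binary image: dark pixels
    (cracks and damage tend to image darker) become 1, the rest 0."""
    t = BINARIZE_PARAMS[pipe_type]["threshold"]
    return (developed < t).astype(np.uint8)

def suspect_columns(binary, min_dark_ratio=0.2):
    """Flag angular positions (columns) whose dark-pixel ratio exceeds a
    cutoff -- a crude stand-in for the binary image analysis unit."""
    ratio = binary.mean(axis=0)
    return np.flatnonzero(ratio >= min_dark_ratio)

# Synthetic developed image: a bright wall with one dark streak.
img = np.full((64, 360), 200, dtype=np.uint8)
img[:, 100:105] = 30                      # simulated crack
binary = to_binary(img, "reinforced_concrete")
print(suspect_columns(binary))            # columns 100..104
```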
  3.  The system according to claim 1 or 2, wherein the abnormal location determination unit comprises a diagnostic model determination unit, the diagnostic model determination unit comprising:
     a training data storage unit that stores developed images of the inside of sewage pipes and abnormal locations in the developed images;
     a diagnostic model generation unit that generates, by machine learning, a diagnostic model whose input is a developed image and whose output is the abnormal locations in the developed image, using the developed images stored in the training data storage unit and the abnormal locations in the developed images as training data; and
     an abnormal location model determination unit that determines an abnormal location on the inside of the sewage pipe from the developed image generated by the developed image generation unit, using the diagnostic model generated by the diagnostic model generation unit.
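Claim 3 specifies only the diagnostic model's interface (input: a developed image; output: abnormal locations in it), not its architecture. As a stand-in, a tiny logistic-regression classifier over developed-image patches, trained on synthetic data -- the model family, patch size, and data are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_patch_classifier(patches, labels, lr=0.5, epochs=300):
    """Train a logistic-regression patch classifier: input a flattened
    developed-image patch, output whether it contains an anomaly."""
    X = patches.reshape(len(patches), -1) / 255.0
    y = labels.astype(float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        g = p - y                                 # logistic-loss gradient
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(w, b, patches):
    X = patches.reshape(len(patches), -1) / 255.0
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)

# Synthetic training data: dark patches stand in for anomalous wall regions.
normal = rng.integers(180, 255, size=(50, 8, 8))
cracked = rng.integers(0, 70, size=(50, 8, 8))
patches = np.concatenate([normal, cracked]).astype(np.uint8)
labels = np.array([0] * 50 + [1] * 50)
w, b = fit_patch_classifier(patches, labels)
print(predict(w, b, patches).mean())  # fraction of samples flagged anomalous
```

A production system would more plausibly use a segmentation network over whole developed images, but the train-then-determine split mirrors the diagnostic model generation unit and the abnormal location model determination unit.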
  4.  The system according to claim 3, wherein the diagnostic model determination unit further comprises a developed image classification unit that classifies the developed images by clustering prior to the determination of abnormal locations by the abnormal location model determination unit.
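The clustering algorithm behind claim 4's developed image classification unit is not named. A self-contained k-means sketch over simple per-image features (mean brightness and dark-pixel ratio -- illustrative choices) shows the idea of grouping developed images before the model-based determination:

```python
import numpy as np

def kmeans(features, k=2, iters=20, seed=0):
    """Minimal k-means over per-image feature vectors, standing in for the
    developed image classification unit (the algorithm is an assumption)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Distance of every feature vector to every center.
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = features[assign == j].mean(axis=0)
    return assign

# Feature per developed image: (mean brightness, dark-pixel ratio).
rng = np.random.default_rng(1)
bright = np.column_stack([rng.normal(200, 5, 30), rng.normal(0.02, 0.01, 30)])
dark = np.column_stack([rng.normal(60, 5, 30), rng.normal(0.6, 0.05, 30)])
features = np.vstack([bright, dark])
groups = kmeans(features)
print(groups[:5], groups[-5:])
```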
  5.  The system according to claim 3 or 4, wherein the training data storage unit further stores a pipe type corresponding to each developed image of the inside of the sewage pipe, the system further comprising:
     a pipe type estimation model generation unit that generates, by machine learning, a pipe type estimation model whose input is a developed image and whose output is the pipe type corresponding to the developed image, using the developed images stored in the training data storage unit and the pipe types corresponding to the developed images as training data; and
     a pipe type estimation unit that estimates the pipe type corresponding to the developed image generated by the developed image generation unit, using the pipe type estimation model generated by the pipe type estimation model generation unit.
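Claim 5's pipe type estimation model is likewise specified only by its input and output. A nearest-centroid classifier over hypothetical per-image features can stand in; the pipe-type names and the feature choices are assumptions, not from the patent:

```python
import numpy as np

class PipeTypeEstimator:
    """Nearest-centroid classifier standing in for the pipe type estimation
    model: input a developed-image feature vector, output a pipe type."""

    def fit(self, feats, types):
        self.types = sorted(set(types))
        self.centroids = np.array(
            [feats[[t == c for t in types]].mean(axis=0) for c in self.types])
        return self

    def predict(self, feat):
        d = np.linalg.norm(self.centroids - feat, axis=1)
        return self.types[int(d.argmin())]

# Training data: (mean brightness, texture variance) per developed image.
feats = np.array([[200., 10.], [195., 12.], [90., 40.], [85., 45.]])
types = ["vitrified_clay", "vitrified_clay",
         "reinforced_concrete", "reinforced_concrete"]
est = PipeTypeEstimator().fit(feats, types)
print(est.predict(np.array([198., 11.])))  # vitrified_clay
```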
  6.  The system according to any one of claims 3 to 5, wherein the diagnostic model generation unit updates the diagnostic model by machine learning based on the changes to the determination result whose registration has been accepted by the change registration reception unit.
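Claim 6 feeds the inspector's registered changes back into model training. With a deliberately simple one-dimensional threshold "model" (an illustration, not the patent's method), the retraining loop looks like:

```python
import numpy as np

def fit_threshold(values, labels):
    """Fit a brightness threshold separating anomalous (1) from normal (0)
    samples -- a toy stand-in for the machine-learned diagnostic model."""
    candidates = np.sort(values)
    accs = [((values < t) == labels.astype(bool)).mean() for t in candidates]
    return candidates[int(np.argmax(accs))]

# Initial training data and model.
values = np.array([200., 210., 190., 40., 55., 60.])
labels = np.array([0, 0, 0, 1, 1, 1])
model_t = fit_threshold(values, labels)

# A registered change: the inspector marks a location the model flagged
# (brightness 150) as not actually anomalous. Append it and retrain.
change_registrations = [(150.0, 0)]
for v, y in change_registrations:
    values = np.append(values, v)
    labels = np.append(labels, y)
updated_t = fit_threshold(values, labels)
print(model_t, updated_t)  # 190.0 150.0
```

The point of the sketch is the data flow: corrections accumulated by the change registration reception unit become additional training data, and the diagnostic model generation unit refits.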
  7.  The system according to any one of claims 3 to 6, wherein the captured image capture unit, the developed image generation unit, the determination result display unit, and the change registration reception unit are embodied by a client machine; and
     of the diagnostic model determination unit included in the abnormal location determination unit, the training data storage unit and the diagnostic model generation unit are embodied by a server machine, while the abnormal location model determination unit is embodied by the client machine.
  8.  The system according to any one of claims 1 to 7, further comprising a report output unit that outputs the abnormality diagnosis result of the sewage pipe as a report, based on the determination result by the abnormal location determination unit and the changes to the determination result whose registration has been accepted by the change registration reception unit.
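Claim 8's report output unit merges the automatic determination results with the registered changes before emitting a report. A stdlib-only CSV sketch; the field names, severity codes, and layout are illustrative, since the claims do not fix a report format:

```python
import csv
import io

# Automatic determinations and the inspector's registered changes
# (keyed here by distance along the pipe -- an assumed convention).
determinations = [
    {"distance_m": 3.2, "finding": "crack", "severity": "B"},
    {"distance_m": 7.8, "finding": "infiltration", "severity": "C"},
]
changes = {7.8: {"finding": "joint displacement", "severity": "B"}}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["distance_m", "finding", "severity"])
writer.writeheader()
for row in determinations:
    row = {**row, **changes.get(row["distance_m"], {})}  # apply corrections
    writer.writerow(row)
report = buf.getvalue()
print(report)
```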
  9.  A client machine for a sewage pipe interior abnormality diagnosis assistance system that assists abnormality diagnosis of a sewage pipe performed using captured images, the captured images being moving images or still images captured inside the sewage pipe, the client machine comprising:
     a captured image capture unit that captures the captured images of the inside of the sewage pipe;
     a developed image generation unit that generates a developed image of the inside of the sewage pipe from the captured images;
     a client-side abnormal location determination unit that determines an abnormal location on the inside of the sewage pipe from the developed image;
     a determination result display unit that displays, as a determination result, a location determined to be abnormal by the client-side abnormal location determination unit; and
     a change registration reception unit that accepts registration of changes to the determination result.
  10.  The client machine according to claim 9, wherein the client-side abnormal location determination unit comprises an image analysis unit, the image analysis unit comprising:
     a pipe type and survey condition reception unit that accepts input of a pipe type and survey conditions;
     a binarization parameter setting unit that sets binarization parameters according to the pipe type and the survey conditions;
     an image conversion unit that converts the developed image into a binary image using the binarization parameters; and
     a binary image analysis unit that analyzes the binary image to determine an abnormal location.
  11.  The client machine according to claim 10, wherein the client-side abnormal location determination unit comprises a client-side diagnostic model determination unit, the client-side diagnostic model determination unit comprising:
     a diagnostic model reception unit that receives a diagnostic model generated by machine learning using developed images and abnormal locations in the developed images as training data, the diagnostic model taking a developed image as input and outputting the abnormal locations in the developed image; and
     an abnormal location model determination unit that determines an abnormal location on the inside of the sewage pipe from the developed image generated by the developed image generation unit, using the diagnostic model received by the diagnostic model reception unit.
  12.  A server machine for a sewage pipe interior abnormality diagnosis assistance system that assists abnormality diagnosis of a sewage pipe performed using captured images, the captured images being moving images or still images captured inside the sewage pipe, the server machine comprising:
     a training data storage unit that stores developed images of the inside of sewage pipes and abnormal locations in the developed images;
     a diagnostic model generation unit that generates, by machine learning, a diagnostic model whose input is a developed image and whose output is the abnormal locations in the developed image, using the developed images stored in the training data storage unit and the abnormal locations in the developed images as training data; and
     a diagnostic model transmission unit that transmits the diagnostic model generated by the diagnostic model generation unit.
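Claim 12's diagnostic model transmission unit implies a serialization format for shipping the trained model from the server to client machines. The patent does not specify one; a minimal JSON sketch with assumed field names:

```python
import json

def export_model(weights, bias, version):
    """Serialize trained model parameters for transmission to a client.
    The payload layout ("version", "weights", "bias") is an assumption."""
    return json.dumps({"version": version,
                       "weights": list(weights),
                       "bias": bias})

def import_model(payload):
    """Client-side counterpart: restore the parameters from the payload."""
    m = json.loads(payload)
    return m["weights"], m["bias"], m["version"]

payload = export_model([0.12, -0.8, 0.05], -0.3, version=7)
w, b, v = import_model(payload)
print(v, b)  # 7 -0.3
```

Versioning the payload lets the client-side abnormal location model determination unit check that it holds the latest model the server generated.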
  13.  The server machine according to claim 12, embodied as a cloud server.
  14.  A method by a sewage pipe interior abnormality diagnosis assistance system that assists abnormality diagnosis of a sewage pipe performed using captured images, the captured images being moving images or still images captured inside the sewage pipe, the method comprising:
     capturing, by a captured image capture unit, the captured images of the inside of the sewage pipe;
     generating, by a developed image generation unit, a developed image of the inside of the sewage pipe from the captured images;
     determining, by an abnormal location determination unit, an abnormal location on the inside of the sewage pipe from the developed image;
     displaying, by a determination result display unit, as a determination result, a location determined to be abnormal by the abnormal location determination unit; and
     accepting, by a change registration reception unit, registration of changes to the determination result.
  15.  A method by a client machine for a sewage pipe interior abnormality diagnosis assistance system that assists abnormality diagnosis of a sewage pipe performed using captured images, the captured images being moving images or still images captured inside the sewage pipe, the method comprising:
     capturing, by a captured image capture unit, the captured images of the inside of the sewage pipe;
     generating, by a developed image generation unit, a developed image of the inside of the sewage pipe from the captured images;
     determining, by a client-side abnormal location determination unit, an abnormal location on the inside of the sewage pipe from the developed image;
     displaying, by a determination result display unit, as a determination result, a location determined to be abnormal by the abnormal location determination unit; and
     accepting, by a change registration reception unit, registration of changes to the determination result.
  16.  A method by a server machine for a sewage pipe interior abnormality diagnosis assistance system that assists abnormality diagnosis of a sewage pipe performed using captured images, the captured images being moving images or still images captured inside the sewage pipe, the method comprising:
     storing, by a training data storage unit, developed images of the inside of sewage pipes and abnormal locations in the developed images;
     generating by machine learning, by a diagnostic model generation unit, a diagnostic model whose input is a developed image and whose output is the abnormal locations in the developed image, using the developed images stored in the training data storage unit and the abnormal locations in the developed images as training data; and
     transmitting, by a diagnostic model transmission unit, the diagnostic model generated by the diagnostic model generation unit.
PCT/JP2021/027478 2021-07-26 2021-07-26 Sewage pipe interior abnormality diagnosis assistance system, client machine and server machine for sewage pipe interior abnormality diagnosis assistance system, and related method WO2023007535A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2023537745A JPWO2023007535A1 (en) 2021-07-26 2021-07-26
PCT/JP2021/027478 WO2023007535A1 (en) 2021-07-26 2021-07-26 Sewage pipe interior abnormality diagnosis assistance system, client machine and server machine for sewage pipe interior abnormality diagnosis assistance system, and related method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/027478 WO2023007535A1 (en) 2021-07-26 2021-07-26 Sewage pipe interior abnormality diagnosis assistance system, client machine and server machine for sewage pipe interior abnormality diagnosis assistance system, and related method

Publications (1)

Publication Number Publication Date
WO2023007535A1 true WO2023007535A1 (en) 2023-02-02

Family

ID=85086432

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/027478 WO2023007535A1 (en) 2021-07-26 2021-07-26 Sewage pipe interior abnormality diagnosis assistance system, client machine and server machine for sewage pipe interior abnormality diagnosis assistance system, and related method

Country Status (2)

Country Link
JP (1) JPWO2023007535A1 (en)
WO (1) WO2023007535A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102008973B1 (en) * 2019-01-25 2019-08-08 (주)나스텍이앤씨 Apparatus and Method for Detection defect of sewer pipe based on Deep Learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "In-pipe inspection camera ", QI TOTAL VIDEO SYSTEM, 5 August 2020 (2020-08-05), XP093028492, Retrieved from the Internet <URL:https://www.qi-inc.com/products/snake-camera/> [retrieved on 20230302] *
IDEMITSU, TAKESHI; MATSUDA, HIROAKI; ARIMURA, RYOICHI: "TOSWACS-V Monitoring and Control System Enhancing Water Supply and Sewerage Business Foundations", TOSHIBA REVIEW, TOSHIBA, JP, vol. 71, no. 4, 1 January 2016 (2016-01-01), JP , pages 52 - 56, XP009542891, ISSN: 0372-0462 *

Also Published As

Publication number Publication date
JPWO2023007535A1 (en) 2023-02-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21951748

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023537745

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE