CN117351339A - Visual inspection method and system for robot - Google Patents


Info

Publication number
CN117351339A
CN117351339A (application CN202311223457.4A)
Authority
CN
China
Prior art keywords
robot
depth
blur degree
lower image
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311223457.4A
Other languages
Chinese (zh)
Inventor
李丽丽 (Li Lili)
王丽杨 (Wang Liyang)
李红霞 (Li Hongxia)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shunde Polytechnic
Original Assignee
Shunde Polytechnic
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shunde Polytechnic filed Critical Shunde Polytechnic
Priority to CN202311223457.4A priority Critical patent/CN117351339A/en
Publication of CN117351339A publication Critical patent/CN117351339A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/05 Scenes; scene-specific elements: underwater scenes
    • G06V 10/10 Arrangements for image or video recognition or understanding: image acquisition
    • G06V 20/52 Context or environment of the image: surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a visual inspection method and system for a robot. The depth of the robot is measured based on sonar. If the depth of the robot is within a preset depth threshold, the environment below the robot is photographed by a plurality of cameras, so that shooting by the plurality of cameras starts only at a suitable depth and acquisition of the lower image is guaranteed. To further guarantee the clarity of the lower image, blur analysis is performed on the lower image, and the shooting coefficients of the plurality of cameras are adjusted based on the blur degree of the lower image until the blur degree of the lower image is lower than a preset blur threshold, thereby guaranteeing the clarity of the lower image and reducing the influence of the water on shooting.

Description

Visual inspection method and system for robot
Technical Field
The invention relates to the technical field of robots, and in particular to a visual inspection method and system for a robot.
Background
With the development of science and technology, robots have gradually been applied to marine research. Such a robot makes full use of its ability to move in water to survey the underwater environment. At present, the robot locates underwater objects by sonar ranging and determines the distance between itself and an underwater object from the distance fed back by the sonar; however, the actual condition of the underwater environment cannot be learned in this way, and existing robots cannot model the underwater environment.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a visual inspection method and system for a robot.
To solve the above technical problems, an embodiment of the invention provides a visual inspection method for a robot, which is applied to the robot;
the visual inspection method of the robot comprises the following steps:
acquiring a signal indicating that the robot is in an underwater state;
triggering depth monitoring of the robot based on the signal of the underwater state of the robot, wherein the depth of the robot is measured based on sonar;
if the depth of the robot is within a preset depth threshold, photographing the environment below the robot with a plurality of cameras, and acquiring a lower image;
performing blur analysis on the lower image, and adjusting shooting coefficients of the plurality of cameras based on the blur degree of the lower image until the blur degree of the lower image is lower than a preset blur threshold;
and performing three-dimensional modeling on the lower image and virtualizing the underwater environment, wherein features of the lower image are sensed in three dimensions based on sonar.
In addition, the embodiment of the invention also provides a visual inspection system of the robot, which comprises:
the acquisition module is used for acquiring a signal indicating that the robot is in an underwater state;
the depth module is used for triggering depth monitoring of the robot based on the signal of the underwater state of the robot, wherein the depth of the robot is measured based on sonar;
the image module is used for photographing the environment below the robot with a plurality of cameras and acquiring a lower image if the depth of the robot is within a preset depth threshold;
the analysis module is used for performing blur analysis on the lower image and adjusting the shooting coefficients of the plurality of cameras based on the blur degree of the lower image until the blur degree of the lower image is lower than a preset blur threshold;
and the sensing module is used for performing three-dimensional modeling on the lower image and virtualizing the underwater environment, wherein features of the lower image are sensed in three dimensions based on sonar.
According to the embodiments of the invention, depth monitoring of the robot is triggered based on the signal of the robot in the underwater state, and the depth of the robot is measured based on sonar. If the depth of the robot is within a preset depth threshold, the environment below the robot is photographed by a plurality of cameras, so that shooting by the plurality of cameras starts only at a suitable depth and acquisition of the lower image is guaranteed. To further guarantee the clarity of the lower image, blur analysis is performed on the lower image, and the shooting coefficients of the plurality of cameras are adjusted based on the blur degree of the lower image until the blur degree of the lower image is lower than a preset blur threshold, which guarantees the clarity of the lower image and reduces the influence of the water on shooting. Three-dimensional modeling is then performed on the lower image and the underwater environment is virtualized, so that the actual underwater environment is reproduced from the sonar and the plurality of cameras. The robot can thus be controlled according to the virtual underwater environment, which guarantees the safety of the robot moving underwater and allows the actual condition of the underwater environment to be determined.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the invention or in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a flow chart of a visual inspection method of a robot in an embodiment of the present invention;
FIG. 2 is a schematic flow chart of step S11 in an embodiment of the present invention;
FIG. 3 is a schematic flow chart of step S12 in an embodiment of the present invention;
FIG. 4 is a schematic flow chart of step S13 in an embodiment of the present invention;
FIG. 5 is a schematic flow chart of step S14 in an embodiment of the present invention;
FIG. 6 is a schematic flow chart of step S15 in an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a visual inspection system of a robot in an embodiment of the present invention;
fig. 8 is a hardware diagram of an electronic device, according to an example embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some of the embodiments of the invention, rather than all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the invention without inventive effort shall fall within the scope of the invention.
Examples
Referring to fig. 1 to 8, a visual inspection method of a robot is applied to the robot; the visual inspection method of the robot comprises the following steps:
s11: acquiring a signal of the robot in an underwater state;
in the embodiment of the application, the robot can move underwater and is transferred from land into the water. During this transfer the robot gradually enters the underwater environment and switches from a land state to an underwater state, and a signal indicating that the robot is in the underwater state is acquired, so that state monitoring of the robot is guaranteed.
In the implementation process of the invention, the specific steps can be as follows:
s111: when the robot enters the water surface from the land, acquiring the wetting area of the robot;
s112: if the wetting area of the robot reaches a preset wetting area threshold, recording the water inlet state of the robot, and triggering the sealing test of the robot.
In the embodiment of the application, as the robot enters the water from the land, its body is gradually immersed in the water. The wetting area of the robot is acquired so that the entry state of the robot can be judged from the wetting area, thereby guaranteeing state monitoring of the robot.
If the wetting area of the robot reaches a preset wetting area threshold, the water entry state of the robot is recorded and the robot gradually enables its underwater operation functions; at the same time, the sealing test of the robot is triggered, so that the sealing performance of the robot in the water entry state can be monitored and any water ingress can be controlled, thereby guaranteeing control of the internal environment of the robot.
S113: and detecting the dryness coefficient of the sealing channel in the robot.
S114: if the drying coefficient of the sealing channel is lower than the preset drying coefficient threshold, the sealing channel is further plugged, and the opening of hot air is triggered, so that the drying coefficient of the sealing channel is improved.
In the embodiment of the application, if water enters the interior of the robot, the external water first enters the sealing channel inside the robot and wets it, which lowers the drying coefficient of the sealing channel; monitoring of the internal environment of the robot is therefore realized by detecting the drying coefficient of the sealing channel.
If the drying coefficient of the sealing channel is lower than the preset drying coefficient threshold, the sealing channel is further plugged, so that the external water is blocked by a plurality of plugging walls. In addition, to further limit the influence on the sealing channel, the opening of hot air is triggered to raise its drying coefficient: the sealing channel is dried by the hot air and the water in it is evaporated, so that the drying coefficient of the sealing channel is guaranteed and its adjustment is realized.
S115: if the drying coefficient of the sealing channel reaches a preset drying coefficient threshold value, determining an average value of the drying coefficients in a preset time.
S116: and if the average value of the drying coefficients exceeds the preset drying coefficient threshold value, turning off the hot air.
In the embodiment of the application, the sealing channel is dried by the hot air so that its drying coefficient is adjusted and gradually approaches, and finally reaches, the preset drying coefficient threshold.
If the drying coefficient of the sealing channel reaches the preset drying coefficient threshold, the average value of the drying coefficient over a preset time is determined; if this average value exceeds the preset drying coefficient threshold, the hot air is turned off. The overall dryness of the sealing channel is thus judged from the average drying coefficient: the hot air is not turned off the moment the drying coefficient just reaches the threshold, but only once the average value exceeds it, so that the effectiveness of the hot air is ensured.
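Purely for illustration, steps S113 to S116 can be read as a simple feedback loop. The sketch below assumes hypothetical sensor and actuator interfaces (read_drying_coefficient, set_hot_air, plug_channel) that are not part of the disclosure:

```python
import time
from collections import deque

def control_seal_channel(read_drying_coefficient, set_hot_air, plug_channel,
                         threshold=0.8, window_s=30, period_s=1.0):
    """Sketch of S113-S116: plug the channel and dry it with hot air until the
    average drying coefficient over a preset window exceeds the threshold."""
    history = deque(maxlen=int(window_s / period_s))  # readings within the preset time
    hot_air_on = False
    while True:
        k = read_drying_coefficient()          # hypothetical sensor read, 0..1
        history.append(k)
        if k < threshold and not hot_air_on:   # S114: channel too wet, plug it and heat
            plug_channel()
            set_hot_air(True)
            hot_air_on = True
        if k >= threshold and len(history) == history.maxlen:
            avg = sum(history) / len(history)  # S115: average over the preset time
            if avg > threshold and hot_air_on: # S116: stop only once the average exceeds the threshold
                set_hot_air(False)
                return avg
        time.sleep(period_s)
```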
S12: triggering the depth monitoring of the robot based on the signal of the robot in the underwater state, wherein the depth of the robot is calculated based on sonar;
in the implementation process of the invention, the specific steps can be as follows:
s121: analyzing signals of the robot in an underwater state, and determining monitoring information;
s122: triggering the depth monitoring of the robot based on the monitoring information, wherein the robot is in a monitoring state;
s123: when the robot is in a monitoring state, starting the sonar of the robot;
s124: and measuring and calculating the depth of the robot based on the sonar of the robot.
In the embodiment of the application, the underwater state of the robot is monitored: the signal of the underwater state of the robot is analyzed and the monitoring information is determined, so that the monitoring information can be processed further. Depth monitoring of the robot is triggered based on the monitoring information and the robot enters the monitoring state, which guarantees knowledge of the current state of the robot and monitoring of the underwater environment by the robot.
When the robot is in the monitoring state, the sonar of the robot is started, that is, the sonar is switched from the off state to the on state. The sonar uses the propagation and reflection characteristics of sound waves in water to perform underwater detection and thus monitor the underwater environment, and the depth of the robot is then measured based on the sonar. At the same time, the sonar and the cameras are used in combination: the sonar performs depth detection, the cameras perform image detection, and the depth and the images are combined so as to complete the modeling of the underwater environment.
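The disclosure does not give a formula for the sonar-based depth measurement. A common approximation, assumed here only as an illustration, converts the round-trip time of an acoustic echo into a distance using a nominal sound speed in water of about 1500 m/s:

```python
SPEED_OF_SOUND_WATER = 1500.0  # m/s, nominal; actual value varies with temperature and salinity

def depth_from_echo(round_trip_time_s: float, transducer_draft_m: float = 0.0) -> float:
    """Estimate range below the sonar transducer from a round-trip echo time.
    Assumed relation: depth = c * t / 2, plus the transducer draft."""
    return SPEED_OF_SOUND_WATER * round_trip_time_s / 2.0 + transducer_draft_m

# Example: an echo returning after 40 ms corresponds to roughly 30 m of water.
print(depth_from_echo(0.040))  # 30.0
```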
S13: if the depth of the robot is in a preset depth threshold value, shooting the lower environment of the robot based on a plurality of cameras, and acquiring a lower image;
in the embodiment of the application, when the robot is in the water, it is in the monitoring state and monitors the underwater environment, and the depth of the robot is measured by the sonar. If the depth of the robot is within the preset depth threshold, the environment below the robot is photographed by the plurality of cameras and the lower image is acquired, so that acquisition of the lower image is guaranteed and the lower image can be combined with the depth measured by the sonar.
In the implementation process of the invention, the specific steps can be as follows:
s131: if the depth of the robot is in a preset depth threshold, triggering the starting of a plurality of cameras of the robot, and combining the cameras with the sonar;
s132: when the cameras and the sonar are combined, shooting the lower environment of the robot along different angles based on the cameras so as to obtain lower images of the different angles;
in the embodiment of the application, the preset depth threshold can be adjusted according to human experience. If the depth of the robot is within the preset depth threshold, the robot is at a detectable depth, so the starting of the plurality of cameras of the robot is triggered and the underwater environment is photographed by the plurality of cameras so as to acquire lower images from a plurality of different angles.
At this time, the plurality of cameras and the sonar are used in combination: the sonar performs depth detection, the cameras perform image detection, and the depth and the images are combined so as to complete the modeling of the underwater environment. In addition, the environment below the robot is photographed from different angles by the plurality of cameras so as to acquire lower images from a plurality of different angles, which can then be synthesized for further image processing.
S133: synthesizing based on the lower images with different angles, and denoising the superposition areas among the lower images with different angles to form an integral underwater environment image, wherein the integral underwater environment image is a plane image;
in the embodiment of the application, a plurality of lower images from different angles are acquired; since these images are taken from different angles, they are synthesized into a panoramic underwater environment image, wherein the overlapping areas between the lower images from different angles are denoised to form the integral underwater environment map.
Specifically, two adjacent lower images are compared and the overlapping area between them is determined by area traversal, so that the overlapping area can be denoised; the remaining lower images are then processed in turn, so that the overlapping areas between the lower images from all the different angles are denoised and the integral underwater environment map is formed. This guarantees the integrity and accuracy of the underwater environment map and reduces the influence of noise points.
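The disclosure does not name a synthesis or denoising algorithm. As a minimal sketch, the composition of the lower images and the suppression of noise could be approximated with OpenCV's generic stitcher and a non-local-means filter; this is a stand-in assumption, not the claimed method:

```python
import cv2

def build_environment_map(lower_images):
    """Minimal stand-in for S133: synthesize lower images taken from different
    angles into one planar map and suppress noise."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)   # planar/scan mode
    status, panorama = stitcher.stitch(lower_images)
    if status != 0:                                       # 0 == cv2.Stitcher_OK
        raise RuntimeError(f"stitching failed, status={status}")
    # The patent denoises only the overlapping areas; for simplicity the whole
    # composite map is filtered here.
    return cv2.fastNlMeansDenoisingColored(panorama, None, 7, 7, 7, 21)

# images = [cv2.imread(p) for p in ("cam0.png", "cam1.png", "cam2.png")]
# env_map = build_environment_map(images)
```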
S134: identifying an integral underwater environment map, and marking contour lines in the integral underwater environment map to determine boundaries in the integral underwater environment map;
s135: locating a depth marker position of the underwater environment based on a boundary in the integrated underwater environment map;
s136: and detecting the depth marking position in the underwater environment based on sonar, and marking the depth fault.
In the embodiment of the application, the integral underwater environment map is identified so as to highlight its contour lines; the contour lines in the integral underwater environment map are then marked to determine the boundaries in the map, and the depth marking positions of the underwater environment are located based on these boundaries.
The depth marking positions of the underwater environment guide the detection of the sonar: the sonar detects the depth marking positions in the underwater environment and marks the depth faults, so that depth detection is performed along the contour lines and depth values are assigned to the contour lines. This guarantees three-dimensional modeling of the boundaries in the underwater environment map, highlights adjacent three-dimensional features in the underwater environment, and realizes three-dimensional modeling of specific positions.
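As an illustrative sketch only, the contour-line marking of step S134 and the selection of candidate depth-marking positions could be approximated with standard edge and contour extraction; the thresholds and the sampling rule below are assumptions, not values from the disclosure:

```python
import cv2

def mark_contour_lines(env_map_bgr, canny_lo=50, canny_hi=150):
    """Sketch of S134: extract and mark contour lines (boundaries) in the
    integral underwater environment map."""
    gray = cv2.cvtColor(env_map_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), canny_lo, canny_hi)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    marked = env_map_bgr.copy()
    cv2.drawContours(marked, contours, -1, (0, 0, 255), 2)   # boundaries drawn in red
    # Candidate depth-marking positions: one sample point per boundary, which the
    # sonar would then interrogate along the contour (S135-S136).
    sample_points = [tuple(c[0][0]) for c in contours if len(c) > 0]
    return marked, contours, sample_points
```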
S14: performing blur analysis on the lower image, and adjusting the shooting coefficients of the plurality of cameras based on the blur degree of the lower image until the blur degree of the lower image is lower than a preset blur threshold;
in the implementation process of the invention, the specific steps can be as follows:
s141: dividing the integral underwater environment map into regions, and marking each blurred region;
s142: performing blur analysis on each blurred region, and calculating the average blur degree of each blurred region;
s143: determining the blur degree of the lower image from the average blur degree of the blurred regions;
in the embodiment of the application, the integral underwater environment map is divided into a plurality of region blocks so that feature marking can be performed for each block; at the same time, a blur test is performed on the region blocks so that each blurred region can be marked.
Blur analysis is performed on each blurred region so as to determine its blur degree, and the average blur degree of the blurred regions is determined as the mean of these values, thereby completing the calculation of the average blur degree so that it can be used for further processing. The blur degree of the lower image is then determined from the average blur degree of the blurred regions, so that the blur degree of the lower image can be controlled.
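The blur metric itself is not specified in the disclosure. A minimal sketch, assuming the variance of the Laplacian as a stand-in sharpness measure for each region block, is:

```python
import cv2
import numpy as np

def region_blur_degrees(gray_map, grid=(4, 4)):
    """Sketch of S141-S143: split the map into region blocks and score the blur
    of each block. Low Laplacian variance is taken to mean high blur; the patent
    does not name the metric, so this choice is an assumption."""
    h, w = gray_map.shape[:2]
    bh, bw = h // grid[0], w // grid[1]
    blur_scores = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            block = gray_map[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            sharpness = cv2.Laplacian(block, cv2.CV_64F).var()
            blur_scores.append(1.0 / (1.0 + sharpness))   # higher value = blurrier block
    # Blur degree of the whole lower image taken as the mean over the region blocks.
    return blur_scores, float(np.mean(blur_scores))
```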
S144: associating the blur degree of the lower image with a shooting coefficient matching table, and adjusting the shooting coefficients of the plurality of cameras based on the blur degree of the lower image;
s145: photographing the underwater environment with the adjusted shooting coefficients of the plurality of cameras, and re-determining the blur degree of the lower image;
s146: fixing the adjusted shooting coefficients of the plurality of cameras once the blur degree of the lower image is lower than the preset blur threshold.
In the embodiment of the application, a matching table between blur degrees and shooting coefficients is obtained, and the blur degree of the lower image is associated with this shooting coefficient matching table; the table records the matching relationship between blur degrees and shooting coefficients, so that the shooting coefficients of the plurality of cameras can be adjusted based on the blur degree of the lower image and adjustment of the cameras is realized.
The underwater environment is then photographed with the adjusted shooting coefficients and the blur degree of the lower image is re-determined: the adjusted cameras photograph the underwater environment again, the blurred regions are updated, and the blur degree of the lower image is re-evaluated, so that it is adjusted step by step. Once the blur degree of the lower image is lower than the preset blur threshold, the adjusted shooting coefficients of the plurality of cameras are fixed, so that the cameras photograph the underwater environment with the better shooting coefficients.
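For illustration, steps S144 to S146 can be read as a closed loop around a matching table. The table contents and the camera interface functions below are hypothetical placeholders, not values from the disclosure:

```python
def adjust_shooting_coefficients(capture_lower_image, apply_coefficients,
                                 measure_blur, blur_threshold=0.3, max_rounds=10):
    """Sketch of S144-S146 as a feedback loop."""
    # Hypothetical matching table: blur-degree lower bounds -> shooting coefficients
    # (exposure time in ms, gain, focus step). Values are illustrative only.
    matching_table = [
        (0.6, {"exposure_ms": 30, "gain": 8.0, "focus_step": 2}),
        (0.4, {"exposure_ms": 20, "gain": 4.0, "focus_step": 1}),
        (0.0, {"exposure_ms": 10, "gain": 2.0, "focus_step": 0}),
    ]
    for _ in range(max_rounds):
        image = capture_lower_image()          # S145: photograph with current coefficients
        blur = measure_blur(image)
        if blur < blur_threshold:
            return image, blur                 # S146: keep (fix) the current coefficients
        for lower_bound, coeffs in matching_table:
            if blur >= lower_bound:            # S144: look up coefficients for this blur degree
                apply_coefficients(coeffs)
                break
    return image, blur                         # best effort after max_rounds
```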
S15: performing three-dimensional modeling on the lower image and virtualizing an underwater environment, wherein features of the lower image are subjected to three-dimensional sensing based on sonar;
in the implementation process of the invention, the specific steps can be as follows:
s151: acquiring the lower image;
s152: performing region identification on the lower image, and determining each feature region in the lower image;
s153: locating the center position of each feature region, and driving the movement of the robot based on the center positions of the feature regions;
s154: when the robot is located at the center of a feature region, performing path planning based on the feature region, and performing stereo sensing with the sonar so as to acquire the depth information of the feature region.
In the implementation process of the invention, the lower image is acquired and divided into regions so as to form the feature regions: region identification is performed on the lower image, each feature region in the lower image is determined, and each feature region is located so that its center position can be determined, whereby the movement of the robot is driven based on the center positions of the feature regions.
When the robot is located at the center of a feature region, path planning is performed based on that feature region and, at the same time, stereo sensing is performed with the sonar so as to acquire the depth information of the feature region. The robot thus performs stereo sensing of each feature region along the planned path based on the sonar, determines the depth information of the feature region, and adds this depth information to the feature region so as to complete the three-dimensional modeling of the feature region.
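As a sketch only, locating the center positions of the feature regions (steps S152 and S153) could be done with ordinary segmentation and image moments; Otsu thresholding is an assumed segmentation choice, not one stated in the disclosure:

```python
import cv2

def feature_region_centers(lower_image_bgr, min_area=500):
    """Sketch of S152-S153: segment feature regions in the lower image and return
    their center positions, to which the robot would then be driven."""
    gray = cv2.cvtColor(lower_image_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if cv2.contourArea(c) < min_area:      # ignore tiny regions
            continue
        m = cv2.moments(c)
        if m["m00"] > 0:
            centers.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))
    return centers
```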
Performing the three-dimensional modeling on the lower image and virtualizing the underwater environment, wherein stereo sensing is performed on the features of the lower image based on sonar, further comprises the following steps: constructing a three-dimensional model of each feature region based on the depth information of the feature region; filling each three-dimensional model into the lower image, and completing the three-dimensional modeling of the lower image in turn; virtualizing the underwater environment based on the three-dimensional modeling of the lower image; performing defect identification on the underwater environment, and determining defect areas; and performing targeted detection with the sonar and the cameras on the defect areas so as to perfect them.
In the embodiment of the application, the three-dimensional model of a feature region is constructed based on the depth information of that feature region: the depth information of the feature region and the image of the feature region are combined, and three-dimensional modeling is performed for each feature region; each three-dimensional model is then filled into the lower image, and the three-dimensional modeling of the lower image is completed in turn.
The underwater environment is virtualized based on the three-dimensional modeling of the lower image. In order to further ensure the completeness of the virtual underwater environment, defect identification is performed on the underwater environment and the defect areas are determined; a defect area is an unclear area in the underwater environment, and to ensure the completeness of the underwater environment, targeted detection with the sonar and the cameras is performed on the defect areas so as to perfect them, thereby perfecting specific areas in the underwater environment and avoiding a complete second round of re-modeling.
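Purely as an illustration of this modeling step, the per-region sonar depths could be painted into a per-pixel depth map and lifted to a point cloud, with the remaining unclear pixels treated as defect areas; the function and variable names below are assumptions, not part of the disclosure:

```python
import numpy as np

def build_depth_map(image_shape, region_masks, region_depths):
    """Sketch: fill the sonar depth of each feature region into a per-pixel depth
    map, lift it to a point cloud, and flag unmodeled pixels as defect areas."""
    depth_map = np.full(image_shape[:2], np.nan, dtype=np.float32)
    for mask, depth in zip(region_masks, region_depths):
        depth_map[mask > 0] = depth                  # fill the three-dimensional model of each region
    defect_mask = np.isnan(depth_map)                # unclear areas become defect areas
    ys, xs = np.nonzero(~defect_mask)
    point_cloud = np.column_stack([xs, ys, depth_map[ys, xs]])  # (x, y, z) samples
    return depth_map, point_cloud, defect_mask

# Defect areas (NaN pixels) would then be re-inspected with the sonar and cameras.
```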
In summary, in the embodiments of the invention, depth monitoring of the robot is triggered based on the signal of the robot in the underwater state, and the depth of the robot is measured based on sonar. If the depth of the robot is within a preset depth threshold, the environment below the robot is photographed by a plurality of cameras, so that shooting by the plurality of cameras starts only at a suitable depth and acquisition of the lower image is guaranteed. To further guarantee the clarity of the lower image, blur analysis is performed on the lower image, and the shooting coefficients of the plurality of cameras are adjusted based on the blur degree of the lower image until the blur degree of the lower image is lower than a preset blur threshold, which guarantees the clarity of the lower image and reduces the influence of the water on shooting. Three-dimensional modeling is then performed on the lower image and the underwater environment is virtualized, so that the actual underwater environment is reproduced from the sonar and the plurality of cameras. The robot can thus be controlled according to the virtual underwater environment, which guarantees the safety of the robot moving underwater and allows the actual condition of the underwater environment to be determined.
Examples
Referring to fig. 7, fig. 7 is a schematic structural diagram of a visual inspection system of a robot according to an embodiment of the invention.
As shown in fig. 7, a vision inspection system of a robot, the vision inspection system of the robot includes:
an acquisition module 21, configured to acquire a signal of a state of the robot under water;
a depth module 22, configured to trigger depth monitoring of the robot based on a signal of the underwater state of the robot, where the depth of the robot is calculated based on sonar;
an image module 23, configured to photograph the environment below the robot with a plurality of cameras and acquire a lower image if the depth of the robot is within a preset depth threshold;
an analysis module 24, configured to perform blur analysis on the lower image, and adjust the shooting coefficients of the plurality of cameras based on the blur degree of the lower image until the blur degree of the lower image is lower than a preset blur threshold;
the sensing module 25 is used for performing stereoscopic modeling on the lower image and virtualizing an underwater environment, wherein the features of the lower image are subjected to stereoscopic sensing based on sonar.
Examples
An electronic device 40 according to this embodiment of the present invention is described below with reference to fig. 8. The electronic device 40 shown in fig. 8 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 8, the electronic device 40 is in the form of a general-purpose computing device. Components of the electronic device 40 may include, but are not limited to: at least one processing unit 41, at least one storage unit 42, and a bus 43 connecting the different system components (including the storage unit 42 and the processing unit 41).
Wherein the storage unit stores program code that is executable by the processing unit 41 such that the processing unit 41 performs the steps according to various exemplary embodiments of the present invention described in the above-described "example methods" section of the present specification.
The memory unit 42 may include readable media in the form of volatile memory units, such as Random Access Memory (RAM) 421 and/or cache memory 422, and may further include Read Only Memory (ROM) 423.
The storage unit 42 may also include a program/utility 424 having a set (at least one) of program modules 425, such program modules 425 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The bus 43 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
Electronic device 40 may also communicate with one or more external devices (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with electronic device 40, and/or any device (e.g., router, modem, etc.) that enables electronic device 40 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 44. Also, electronic device 40 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, via network adapter 45. As shown in fig. 8, the network adapter 45 communicates with other modules of the electronic device 40 over the bus 43. It should be appreciated that although not shown in fig. 8, other hardware and/or software modules may be used in connection with electronic device 40, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup planning systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing related hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include: read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disk, and the like. The storage medium stores computer program instructions which, when executed by a computer, cause the computer to perform the method described above.
The visual inspection method and system for a robot provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the invention, and the description of the above embodiments is only intended to help understand the method of the invention and its core idea; meanwhile, a person skilled in the art may make changes to the specific implementations and the application scope in accordance with the idea of the invention. In view of the above, the content of this description shall not be construed as limiting the invention.

Claims (10)

1. A visual inspection method of a robot, characterized in that the visual inspection method is applied to the robot;
the visual inspection method of the robot comprises the following steps:
acquiring a signal indicating that the robot is in an underwater state;
triggering depth monitoring of the robot based on the signal of the underwater state of the robot, wherein the depth of the robot is measured based on sonar;
if the depth of the robot is within a preset depth threshold, photographing the environment below the robot with a plurality of cameras, and acquiring a lower image;
performing blur analysis on the lower image, and adjusting shooting coefficients of the plurality of cameras based on the blur degree of the lower image until the blur degree of the lower image is lower than a preset blur threshold;
and performing three-dimensional modeling on the lower image and virtualizing the underwater environment, wherein features of the lower image are sensed in three dimensions based on sonar.
2. The visual inspection method of a robot according to claim 1, wherein acquiring the signal of the robot in the underwater state comprises:
when the robot enters the water surface from the land, acquiring the wetting area of the robot;
if the wetting area of the robot reaches a preset wetting area threshold, recording the water inlet state of the robot, and triggering the sealing test of the robot.
3. The method of claim 2, wherein the triggering the sealing test of the robot comprises:
detecting a drying coefficient of a sealing channel in the robot;
if the drying coefficient of the sealing channel is lower than a preset drying coefficient threshold value, further plugging the sealing channel, and triggering the opening of hot air to improve the drying coefficient of the sealing channel;
if the drying coefficient of the sealing channel reaches a preset drying coefficient threshold value, determining an average value of the drying coefficients in preset time;
and if the average value of the drying coefficients exceeds the preset drying coefficient threshold value, turning off the hot air.
4. The visual inspection method of a robot according to claim 1, wherein triggering the depth monitoring of the robot based on the signal of the underwater state of the robot, wherein the depth of the robot is measured based on sonar, comprises:
analyzing signals of the robot in an underwater state, and determining monitoring information;
triggering the depth monitoring of the robot based on the monitoring information, wherein the robot is in a monitoring state;
when the robot is in a monitoring state, starting the sonar of the robot;
and measuring and calculating the depth of the robot based on the sonar of the robot.
5. The method according to claim 1, wherein if the depth of the robot is within a preset depth threshold, photographing the environment below the robot with the plurality of cameras and acquiring the lower image comprises:
if the depth of the robot is within a preset depth threshold, triggering the starting of the plurality of cameras of the robot, the plurality of cameras being used in combination with the sonar;
when the plurality of cameras and the sonar are used in combination, photographing the environment below the robot from different angles with the plurality of cameras so as to acquire lower images from a plurality of different angles.
6. The method according to claim 5, wherein if the depth of the robot is within a preset depth threshold, photographing the environment below the robot with the plurality of cameras and acquiring the lower image further comprises:
synthesizing based on the lower images with different angles, and denoising the superposition areas among the lower images with different angles to form an integral underwater environment image, wherein the integral underwater environment image is a plane image;
identifying an integral underwater environment map, and marking contour lines in the integral underwater environment map to determine boundaries in the integral underwater environment map;
locating a depth marker position of the underwater environment based on a boundary in the integrated underwater environment map;
and detecting the depth marking position in the underwater environment based on sonar, and marking the depth fault.
7. The method according to claim 6, wherein performing blur analysis on the lower image and adjusting the shooting coefficients of the plurality of cameras based on the blur degree of the lower image until the blur degree of the lower image is lower than a preset blur threshold comprises:
dividing the integral underwater environment map into regions, and marking each blurred region;
performing blur analysis on each blurred region, and calculating the average blur degree of each blurred region;
determining the blur degree of the lower image from the average blur degree of the blurred regions;
associating the blur degree of the lower image with a shooting coefficient matching table, and adjusting the shooting coefficients of the plurality of cameras based on the blur degree of the lower image;
photographing the underwater environment with the adjusted shooting coefficients of the plurality of cameras, and re-determining the blur degree of the lower image;
and fixing the adjusted shooting coefficients of the plurality of cameras once the blur degree of the lower image is lower than the preset blur threshold.
8. The method according to claim 7, wherein performing three-dimensional modeling on the lower image and virtualizing the underwater environment, wherein features of the lower image are sensed in three dimensions based on sonar, comprises:
acquiring a lower image;
performing region identification on the lower image, and determining each characteristic region in the lower image;
positioning the central position of each characteristic region, and driving the robot to move based on the central position of the characteristic region;
when the robot is positioned at the center of the characteristic region, path planning is performed based on the characteristic region, and stereo sensing is performed by adopting sonar so as to acquire depth information of the characteristic region.
9. The method according to claim 8, wherein performing three-dimensional modeling on the lower image and virtualizing the underwater environment, wherein features of the lower image are sensed in three dimensions based on sonar, further comprises:
constructing a stereoscopic model of the feature region based on the depth information of the feature region;
filling each three-dimensional model into a lower image, and sequentially completing three-dimensional modeling of the lower image;
virtualizing the underwater environment based on the three-dimensional modeling of the lower image;
performing defect identification aiming at an underwater environment, and determining a defect area;
and performing specific detection of the sonar and the camera aiming at the defect area so as to perfect the defect area.
10. A visual inspection system of a robot, characterized in that the visual inspection system is applied to the visual inspection method of a robot according to any one of claims 1 to 9, the visual inspection system comprising:
the acquisition module is used for acquiring a signal indicating that the robot is in an underwater state;
the depth module is used for triggering depth monitoring of the robot based on the signal of the underwater state of the robot, wherein the depth of the robot is measured based on sonar;
the image module is used for photographing the environment below the robot with a plurality of cameras and acquiring a lower image if the depth of the robot is within a preset depth threshold;
the analysis module is used for performing blur analysis on the lower image and adjusting the shooting coefficients of the plurality of cameras based on the blur degree of the lower image until the blur degree of the lower image is lower than a preset blur threshold;
and the sensing module is used for performing three-dimensional modeling on the lower image and virtualizing the underwater environment, wherein features of the lower image are sensed in three dimensions based on sonar.
CN202311223457.4A 2023-09-20 2023-09-20 Visual inspection method and system for robot Pending CN117351339A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311223457.4A CN117351339A (en) 2023-09-20 2023-09-20 Visual inspection method and system for robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311223457.4A CN117351339A (en) 2023-09-20 2023-09-20 Visual inspection method and system for robot

Publications (1)

Publication Number Publication Date
CN117351339A true CN117351339A (en) 2024-01-05

Family

ID=89354944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311223457.4A Pending CN117351339A (en) 2023-09-20 2023-09-20 Visual inspection method and system for robot

Country Status (1)

Country Link
CN (1) CN117351339A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination