CN113177944B - Underwater lens stain detection method and underwater robot - Google Patents

Underwater lens stain detection method and underwater robot

Info

Publication number
CN113177944B
CN113177944B
Authority
CN
China
Prior art keywords
camera
image
detected
auxiliary camera
auxiliary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110731265.9A
Other languages
Chinese (zh)
Other versions
CN113177944A (en)
Inventor
魏建仓
赵国腾
张增虎
侯明波
郭轶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Deepinfar Ocean Technology Inc
Original Assignee
Deepinfar Ocean Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Deepinfar Ocean Technology Inc filed Critical Deepinfar Ocean Technology Inc
Priority to CN202110731265.9A priority Critical patent/CN113177944B/en
Publication of CN113177944A publication Critical patent/CN113177944A/en
Application granted
Publication of CN113177944B publication Critical patent/CN113177944B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a method for detecting stains on an underwater lens. The method comprises the following steps: obtaining a camera image to be detected; obtaining an auxiliary camera initial image; translating the auxiliary camera initial image at least twice to obtain at least two auxiliary camera contrast images; obtaining at least two difference images; finding the difference image with the smallest sum; performing black-and-white thresholding on that difference image to generate a binary image; generating an opening operation image; calculating the area and center-of-gravity position of each connected domain in the opening operation image; obtaining the processed connected domains; and taking the processed connected domains as lens stains and outputting the stain area and stain center-of-gravity position. The method can detect lens stains in real time while the underwater robot is in a hovering state.

Description

Underwater lens stain detection method and underwater robot
Technical Field
The application relates to the field of underwater robots, in particular to a detection method for stains on an underwater lens and an underwater robot.
Background
Optical cameras are common devices used by underwater robots to perform observation tasks. Because water contains large amounts of suspended matter such as silt and animal and plant debris, the lens of an underwater camera is easily soiled, which produces dark spots in the image and degrades imaging quality.
For a cable-free underwater robot, the limited bandwidth of underwater wireless communication means image data cannot be uploaded in real time, so detecting lens stains by eye is simply not feasible. For a tethered underwater robot, image data can be uploaded in real time, but having a person stare at the images to spot lens stains is time-consuming, labor-intensive and inefficient.
Moreover, most common stain detection techniques rely on kinematics: a lens stain stays fixed in the image, while the photographed scene moves across the image as the subject and the camera move relative to each other. When the underwater robot hovers, however, the subject and the camera are almost stationary relative to each other, so detection methods based on this principle become unusable.
Existing image-based lens stain detection methods generally exploit the difference in motion between the stain and the photographed scene. For example, the method disclosed in patent CN112037188A selects one frame of a video sequence as a detection frame and extracts candidate regions in that frame; in subsequent frames, it tracks the candidate regions with an image tracking algorithm to obtain motion features, and then analyzes those motion features to separate non-stain regions from suspicious regions. This approach has two drawbacks: first, it requires obvious relative motion between the stain and the subject in the image, a condition that is not met when the camera and the subject are relatively static; second, stain detection can only be carried out after a period of video has been captured, so it is not real-time.
Therefore, an automatic method is needed that can detect lens stains of an underwater robot in real time while the robot is in a hovering state.
In this background section, the above information disclosed is only for enhancement of understanding of the background of the application and therefore it may contain prior art information that does not constitute a part of the common general knowledge of a person skilled in the art.
Disclosure of Invention
The application aims to provide a method for detecting stains on an underwater lens, which can detect the stains on the lens in real time when an underwater robot is in a hovering state.
According to an aspect of the present application, a method for detecting stains on an underwater lens is provided, the method comprising:
setting exposure time, and shooting by a camera to be detected to obtain an image of the camera to be detected;
controlling an auxiliary camera to expose with the same exposure time, and shooting with the auxiliary camera to obtain an auxiliary camera initial image, wherein the auxiliary camera is arranged adjacent to the camera to be detected;
translating the initial image of the auxiliary camera at least twice in the direction of the auxiliary camera relative to the camera to be detected to obtain at least two contrast images of the auxiliary camera;
respectively calculating pixel gray value difference values of the at least two auxiliary camera contrast images and the camera image to be detected to obtain at least two difference images;
summing the pixel gray values of the at least two difference images respectively, and finding out the difference image with the minimum sum;
performing black and white threshold processing on the difference image with the minimum sum to generate a binary image;
performing a morphological opening operation on the binary image with a structural element to remove white pixels caused by camera parallax, and generating an opening operation image;
finding the connected domains of white pixels in the opening operation image, and calculating the area and center-of-gravity position of each connected domain in the opening operation image;
rejecting connected domains whose area is smaller than a first threshold to obtain the processed connected domains; and
taking the processed connected domains as lens stains, and outputting the stain area and stain center-of-gravity position.
According to some embodiments, at least one auxiliary camera is arranged in the vicinity of the camera to be detected.
According to some embodiments, the main parameter settings of the auxiliary camera are the same as those of the camera to be detected.
According to some embodiments, before the exposure time is set and the camera to be detected shoots to obtain the camera image to be detected, the method further comprises:
and initializing parameters of the camera to be detected and the auxiliary camera.
According to some embodiments, after the auxiliary camera is controlled to expose with the same exposure time and shoots to obtain the auxiliary camera initial image, the auxiliary camera being arranged adjacent to the camera to be detected, the method further comprises:
and comparing the auxiliary camera initial image with the camera image to be detected.
According to some embodiments, the translation amounts of the at least two translations are different.
According to some embodiments, the structural element is circular or rectangular in shape.
According to some embodiments, the first threshold is 1% of the area of the opening operation image.
According to another aspect of the present application, there is provided an underwater robot comprising:
a robot body;
a camera to be detected arranged on the robot body;
an auxiliary camera arranged on the robot body;
a moving module for moving the robot body; and
a stain detection module for implementing any of the above described methods.
According to some embodiments of the present application, the detection principle is based on comparing images captured by multiple cameras at the same moment and does not depend on relative motion between the lens stain and the photographed subject, so lens stains can be detected even when the camera and the subject are relatively still.
According to some embodiments of the application, images are captured and compared instantly, without waiting to record a video of a certain duration, so lens stains can be detected frame by frame in real time.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
Fig. 1 shows a flowchart of a method for detecting contamination of an underwater lens according to an exemplary embodiment of the present application.
FIG. 2 illustrates a flow chart of a method of detecting underwater lens soiling according to some embodiments of the present application.
Fig. 3 illustrates a camera image to be detected according to some embodiments of the present application.
FIG. 4 illustrates an auxiliary camera initial image according to some embodiments of the present application.
FIG. 5 illustrates a first auxiliary camera contrast image according to some embodiments of the present application.
FIG. 6 illustrates a second auxiliary camera contrast image according to some embodiments of the present application.
FIG. 7 illustrates a third auxiliary camera contrast image according to some embodiments of the present application.
FIG. 8 illustrates a fourth auxiliary camera contrast image according to some embodiments of the present application.
FIG. 9 illustrates a first difference image according to some embodiments of the present application.
FIG. 10 illustrates a second difference image according to some embodiments of the present application.
FIG. 11 illustrates a third difference image according to some embodiments of the present application.
FIG. 12 illustrates a fourth difference image according to some embodiments of the present application.
FIG. 13 illustrates a binary image according to some embodiments of the present application.
FIG. 14 illustrates an opening operation image according to some embodiments of the present application.
FIG. 15 illustrates a connected domain image according to some embodiments of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the embodiments of the disclosure can be practiced without one or more of the specific details, or with other means, components, materials, devices, or the like. In such cases, well-known structures, methods, devices, implementations, materials, or operations are not shown or described in detail.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The application provides a method for detecting stains on an underwater lens, which can detect lens stains in real time while an underwater robot is in a hovering state.
According to the technical concept of the application, the detection principle is based on comparing images captured by multiple cameras at the same moment and does not depend on relative motion between the lens stain and the photographed subject, so lens stains can be detected even when the camera and the subject are relatively static. Images are captured and compared instantly, without waiting to record a video of a certain duration, so lens stains can be detected frame by frame in real time.
Hereinafter, a method for detecting contamination of an underwater lens according to an embodiment of the present application will be described in detail with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a method for detecting contamination of an underwater lens according to an exemplary embodiment of the present application.
As shown in fig. 1, a method for detecting stains on an underwater lens according to an exemplary embodiment of the present application includes the following steps:
in S101, a camera image to be detected is obtained.
The exposure time is set, and the camera to be detected shoots to obtain the camera image to be detected.
According to an example embodiment, the exposure time is 1/1000 seconds to 1/40 seconds.
At S103, an auxiliary camera initial image is obtained.
The auxiliary camera is controlled to expose with the same exposure time and shoots to obtain the auxiliary camera initial image; the auxiliary camera is arranged adjacent to the camera to be detected.
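As a rough, hedged illustration of these two capture steps (not taken from the patent): with OpenCV, the camera to be detected and the auxiliary camera might be opened and driven at the same fixed exposure roughly as follows. The device indices and the exposure property values are placeholders, because exposure units and auto-exposure flags are backend- and driver-specific.

```python
import cv2

# Hypothetical device indices: 0 = camera to be detected, 1 = auxiliary camera.
cam_detect = cv2.VideoCapture(0)
cam_aux = cv2.VideoCapture(1)

for cam in (cam_detect, cam_aux):
    # Switch to manual exposure and set a fixed exposure time.
    # NOTE: these property values are backend/driver-specific placeholders.
    cam.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1)
    cam.set(cv2.CAP_PROP_EXPOSURE, -6)   # roughly 1/64 s on some drivers

ok_d, img_detect = cam_detect.read()     # camera image to be detected (S101)
ok_a, img_aux = cam_aux.read()           # auxiliary camera initial image (S103)
```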
At S105, the translation results in an auxiliary camera contrast image.
The auxiliary camera initial image is translated at least twice along the direction of the auxiliary camera relative to the camera to be detected, giving at least two auxiliary camera contrast images.
According to an example embodiment, the amount of translation per translation is 1-20 pixels.
According to an example embodiment, after an auxiliary camera contrast image is obtained, the area produced by the translation that does not overlap the auxiliary camera initial image is marked as the boundary blank area.
According to embodiments of the application, the boundary blank area of the translated image is ignored when the difference is calculated, so its pixel gray values could in theory be set to any value between 0 and 255. In the translated images of the embodiments, for a cleaner display, every pixel in a given row of the boundary blank area takes the value of the leftmost original pixel in that row.
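A minimal sketch of this translation step, assuming grayscale images held as 8-bit NumPy arrays and a rightward shift direction as in the worked example below; the shift amounts of 20, 36, 60 and 80 pixels are the ones used in figs. 5 to 8.

```python
import numpy as np

def translate_right(img, d):
    """Shift the auxiliary camera initial image right by d pixels.
    The uncovered strip on the left is the boundary blank area; each of its
    rows is filled with the leftmost original pixel of that row purely for
    display, since the strip is ignored in the difference calculation."""
    shifted = np.empty_like(img)
    shifted[:, d:] = img[:, :img.shape[1] - d]
    shifted[:, :d] = img[:, [0]]
    return shifted

# Placeholder for the auxiliary camera initial image (grayscale, 8-bit).
aux_initial = np.zeros((480, 640), dtype=np.uint8)
# Contrast images at the example shift amounts used in figs. 5 to 8.
contrast_images = {d: translate_right(aux_initial, d) for d in (20, 36, 60, 80)}
```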
In S107, a difference image is obtained.
The pixel gray-value differences between each of the at least two auxiliary camera contrast images and the camera image to be detected are calculated, giving at least two difference images.
According to the embodiment of the application, in each difference image the pixel gray values corresponding to the boundary blank area are assigned the value 0.
At S109, the difference image with the smallest sum is found.
The pixel gray values of each of the at least two difference images are summed, and the difference image with the smallest sum is found.
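Continuing the sketch above (with img_detect standing for the camera image to be detected, here a placeholder array), steps S107 and S109 reduce to a per-pixel absolute difference followed by a minimum-sum selection.

```python
import cv2
import numpy as np

# Placeholder for the camera image to be detected (grayscale, 8-bit).
img_detect = np.zeros((480, 640), dtype=np.uint8)

diff_images = {}
for d, contrast in contrast_images.items():
    diff = cv2.absdiff(contrast, img_detect)  # per-pixel gray-value difference
    diff[:, :d] = 0                           # boundary blank area assigned gray value 0
    diff_images[d] = diff

# Sum the gray values of each difference image and keep the one with the smallest sum.
best_shift = min(diff_images, key=lambda d: int(diff_images[d].sum()))
diff_min = diff_images[best_shift]
```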
In S111, a binary image is generated.
Black-and-white thresholding is applied to the difference image with the smallest sum, producing a binary image.
According to some embodiments, colors other than black and white may be chosen for the thresholding, as long as the two regions remain clearly distinguished.
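A sketch of the thresholding step; the threshold value 40 is an assumption, since the patent does not give a specific value (Otsu's automatic threshold would be an alternative).

```python
import cv2

# Pixels whose difference exceeds the threshold become white (255), the rest black (0).
_, binary = cv2.threshold(diff_min, 40, 255, cv2.THRESH_BINARY)
```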
In S113, an opening operation image is generated.
A morphological opening operation is performed on the binary image with a structural element to remove white pixels caused by camera parallax, producing the opening operation image.
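A sketch of the opening operation with a circular (elliptical) structural element; the 5x5 kernel size is an assumption.

```python
import cv2

# Morphological opening erases small white specks left over from camera parallax.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```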
In S115, the area and the center of gravity position of the connected domain are obtained.
The connected domains of white pixels in the opening operation image are found, and the area and center-of-gravity position of each connected domain in the opening operation image are calculated.
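The connected-domain areas and centers of gravity can be read off in one call; this sketch assumes `opened` is the opening operation image from the previous step.

```python
import cv2

# Label the white connected domains; stats holds their areas, centroids their centers of gravity.
num, labels, stats, centroids = cv2.connectedComponentsWithStats(opened, connectivity=8)
for i in range(1, num):                   # label 0 is the background
    area = int(stats[i, cv2.CC_STAT_AREA])
    cx, cy = centroids[i]                 # center-of-gravity position (x, y)
```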
At S117, the connected component is processed.
Connected domains with an area smaller than the first threshold are rejected, giving the processed connected domains.
At S119, the stain area and stain center-of-gravity position are output.
The processed connected domains are taken as lens stains, and the stain area and stain center-of-gravity position are output.
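Continuing the sketch, the final filtering and output, with the first threshold taken as 1% of the opening operation image area as in the embodiments.

```python
# Reject connected domains smaller than 1% of the opening operation image area,
# then report the remaining domains as lens stains.
min_area = 0.01 * opened.shape[0] * opened.shape[1]
stains = [(int(stats[i, cv2.CC_STAT_AREA]), tuple(centroids[i]))
          for i in range(1, num)
          if stats[i, cv2.CC_STAT_AREA] >= min_area]
for area, (cx, cy) in stains:
    print(f"stain area: {area} px, center of gravity: ({cx:.1f}, {cy:.1f})")
```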
According to the embodiment of the application, images are captured and compared instantly, without recording a video of a certain duration, so lens stains can be detected frame by frame in real time.
FIG. 2 illustrates a flow chart of a method of detecting underwater lens soiling according to some embodiments of the present application.
As shown in fig. 2, a method for detecting stains on an underwater lens according to some embodiments of the present application includes the following steps:
at S201, parameters are initialized.
Parameters of the camera to be detected and the auxiliary camera are initialized.
According to some embodiments, at least one auxiliary camera is arranged in the vicinity of the camera to be detected.
According to some embodiments, the main parameter settings of the auxiliary camera are the same as those of the camera to be detected.
In S203, a camera image to be detected is obtained.
The exposure time is set, and the camera to be detected shoots to obtain the camera image to be detected.
At S205, an auxiliary camera initial image is obtained.
The auxiliary camera is controlled to expose with the same exposure time and shoots to obtain the auxiliary camera initial image; the auxiliary camera is arranged adjacent to the camera to be detected.
At S207, the translation results in an auxiliary camera contrast image.
The auxiliary camera initial image is translated at least twice along the direction of the auxiliary camera relative to the camera to be detected, giving at least two auxiliary camera contrast images.
According to some embodiments, the translation amounts of the at least two translations are different.
At S209, a difference image is obtained.
The pixel gray-value differences between each of the at least two auxiliary camera contrast images and the camera image to be detected are calculated, giving at least two difference images.
At S211, a difference image with the smallest sum is found.
The pixel gray values of each of the at least two difference images are summed, and the difference image with the smallest sum is found.
At S213, a binary image is generated.
Black-and-white thresholding is applied to the difference image with the smallest sum, producing a binary image.
At S215, an opening operation image is generated.
A morphological opening operation is performed on the binary image with a structural element to remove white pixels caused by camera parallax, producing the opening operation image.
According to some embodiments, the structural element is circular or rectangular in shape.
In S217, the area and the center of gravity position of the connected component are obtained.
The connected domains of white pixels in the opening operation image are found, and the area and center-of-gravity position of each connected domain in the opening operation image are calculated.
At S219, the connected domain is processed.
Connected domains with an area smaller than the first threshold are rejected, giving the processed connected domains.
According to some embodiments, the first threshold is 1% of the area of the opening operation image.
At S221, the stain area and stain center-of-gravity position are output.
The processed connected domains are taken as lens stains, and the stain area and stain center-of-gravity position are output.
According to one aspect of the application, the detection principle is based on comparing images captured by multiple cameras at the same moment and does not depend on relative motion between the lens stain and the photographed subject, so lens stains can be detected even when the camera and the subject are relatively static.
Fig. 3 illustrates a camera image to be detected according to some embodiments of the present application.
Referring to fig. 3, the exposure time of the camera to be detected is set and the camera shoots, giving the camera image to be detected shown in fig. 3.
FIG. 4 illustrates an auxiliary camera initial image according to some embodiments of the present application.
Referring to fig. 4, the auxiliary camera is controlled to expose with the same exposure time as the camera to be detected, so that the average gray values of the images captured by the two cameras are essentially consistent; the resulting auxiliary camera initial image is shown in fig. 4.
FIG. 5 illustrates a first auxiliary camera contrast image according to some embodiments of the present application. FIG. 6 illustrates a second auxiliary camera contrast image according to some embodiments of the present application. FIG. 7 illustrates a third auxiliary camera contrast image according to some embodiments of the present application. FIG. 8 illustrates a fourth auxiliary camera contrast image according to some embodiments of the present application.
Referring to figs. 5 to 8, the auxiliary camera initial image is translated rightward 20 times, 4 pixels at a time. The figures show four of the resulting contrast images: the first auxiliary camera contrast image at a translation of 20 pixels (fig. 5), the second at 36 pixels (fig. 6), the third at 60 pixels (fig. 7), and the fourth at 80 pixels (fig. 8).
FIG. 9 illustrates a first difference image according to some embodiments of the present application. FIG. 10 illustrates a second difference image according to some embodiments of the present application. FIG. 11 illustrates a third difference image according to some embodiments of the present application. FIG. 12 illustrates a fourth difference image according to some embodiments of the present application.
Referring to figs. 9 to 12, the difference between each auxiliary camera contrast image and the camera image to be detected is calculated. Fig. 9 shows the first difference image, at a translation of 20 pixels; fig. 10 the second, at 36 pixels; fig. 11 the third, at 60 pixels; and fig. 12 the fourth, at 80 pixels.
FIG. 13 illustrates a binary image according to some embodiments of the present application.
Referring to fig. 13, the gray values of all pixels of each difference image are summed and the difference image with the smallest sum is found; in this example it is the second difference image, with a translation of 36 pixels. Black-and-white thresholding is applied to that difference image to generate the binary image shown in fig. 13.
FIG. 14 illustrates an on operation image according to some embodiments of the present application.
Referring to fig. 14, a morphological opening operation is performed on the binary image with a circular or rectangular structural element, removing white pixels caused by camera parallax and producing the opening operation image shown in fig. 14.
FIG. 15 illustrates a connected domain image according to some embodiments of the present application.
Referring to fig. 15, the connected domains of white pixels in the opening operation image are found, giving the connected domain image shown in fig. 15, and the area and center-of-gravity position of each connected domain are calculated. Connected domains whose area is less than 1% of the area of the opening operation image are eliminated, giving the processed connected domains. The processed connected domains are taken as lens stains, and the stain area and stain center-of-gravity position are output.
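Pulling the worked example together, a minimal end-to-end sketch of the pipeline in Python with OpenCV/NumPy might look as follows. The 4-pixel step, 20 shifts and 1% area threshold follow the example above; the rightward shift direction, the binarisation threshold of 40 and the 5x5 elliptical kernel are assumptions.

```python
import cv2
import numpy as np

def detect_lens_stains(img_detect, aux_initial, step=4, n_shifts=20,
                       thresh=40, min_area_ratio=0.01):
    """Sketch of the pipeline: shift the auxiliary camera image, pick the shift
    whose difference with the image under test has the smallest sum, then
    binarise, open, and extract stain blobs.  Threshold and kernel size are
    assumptions; inputs are same-size 8-bit grayscale arrays."""
    h, w = img_detect.shape
    best_sum, best_diff = None, None
    for k in range(1, n_shifts + 1):
        d = k * step
        shifted = np.empty_like(aux_initial)
        shifted[:, d:] = aux_initial[:, :w - d]
        shifted[:, :d] = aux_initial[:, [0]]      # boundary blank area (display fill)
        diff = cv2.absdiff(shifted, img_detect)
        diff[:, :d] = 0                           # blank area assigned gray value 0
        s = int(diff.sum())
        if best_sum is None or s < best_sum:
            best_sum, best_diff = s, diff
    _, binary = cv2.threshold(best_diff, thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    num, _, stats, centroids = cv2.connectedComponentsWithStats(opened, connectivity=8)
    min_area = min_area_ratio * h * w
    return [(int(stats[i, cv2.CC_STAT_AREA]), tuple(centroids[i]))
            for i in range(1, num)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```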
In summary, the detection principle of the present application is based on comparing images captured by multiple cameras at the same moment and does not depend on relative motion between the lens stain and the subject, so lens stains can be detected when the camera and the subject are relatively still. Images are captured and compared instantly, without recording a video of a certain duration, so lens stains can be detected frame by frame in real time.
Finally, it should be noted that the above examples are only intended to illustrate the present invention clearly and are not meant to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here, and obvious variations or modifications may be made without departing from the scope of the invention.

Claims (8)

1. A method of detecting contamination of an underwater lens, the method comprising:
setting exposure time, and shooting by a camera to be detected to obtain an image of the camera to be detected;
controlling an auxiliary camera to expose with the same exposure time, and shooting with the auxiliary camera to obtain an auxiliary camera initial image, wherein the auxiliary camera is arranged adjacent to the camera to be detected;
translating the initial image of the auxiliary camera at least twice in the direction of the auxiliary camera relative to the camera to be detected to obtain at least two contrast images of the auxiliary camera;
respectively calculating pixel gray value difference values of the at least two auxiliary camera contrast images and the camera image to be detected to obtain at least two difference images;
summing the pixel gray values of the at least two difference images respectively, and finding out the difference image with the minimum sum;
performing black and white threshold processing on the difference image with the minimum sum to generate a binary image;
performing a morphological opening operation on the binary image with a structural element to remove white pixels caused by camera parallax, and generating an opening operation image;
finding the connected domains of white pixels in the opening operation image, and calculating the area and center-of-gravity position of each connected domain in the opening operation image;
rejecting connected domains whose area is smaller than a first threshold to obtain the processed connected domains; and
taking the processed connected domains as lens stains, and outputting the stain area and stain center-of-gravity position.
2. Method according to claim 1, characterized in that at least one auxiliary camera is arranged in the vicinity of the camera to be detected.
3. The method according to claim 1, characterized in that the main parameter settings of the auxiliary camera are the same as those of the camera to be detected.
4. The method according to claim 1, wherein before the exposure time is set and the camera to be detected shoots to obtain the camera image to be detected, the method further comprises:
and initializing parameters of the camera to be detected and the auxiliary camera.
5. The method of claim 1, wherein the translation amounts of the at least two translations are different.
6. The method of claim 1, wherein the structural elements are circular or rectangular in shape.
7. The method of claim 1, wherein the first threshold is 1% of the area of the opening operation image.
8. An underwater robot, comprising:
a robot body;
a camera to be detected arranged on the robot body;
an auxiliary camera arranged on the robot body;
a moving module for moving the robot body; and
a stain detection module for implementing the method according to any one of claims 1-7.
CN202110731265.9A 2021-06-30 2021-06-30 Underwater lens stain detection method and underwater robot Active CN113177944B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110731265.9A CN113177944B (en) 2021-06-30 2021-06-30 Underwater lens stain detection method and underwater robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110731265.9A CN113177944B (en) 2021-06-30 2021-06-30 Underwater lens stain detection method and underwater robot

Publications (2)

Publication Number Publication Date
CN113177944A CN113177944A (en) 2021-07-27
CN113177944B true CN113177944B (en) 2021-09-17

Family

ID=76927930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110731265.9A Active CN113177944B (en) 2021-06-30 2021-06-30 Underwater lens stain detection method and underwater robot

Country Status (1)

Country Link
CN (1) CN113177944B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113888632A (en) * 2021-09-14 2022-01-04 上海景吾智能科技有限公司 Method and system for positioning stains in pool by combining RGBD image
CN114040181A (en) * 2021-10-29 2022-02-11 中国铁塔股份有限公司盐城市分公司 Holographic display system and holographic display method
CN114778558B (en) * 2022-06-07 2022-09-09 成都纵横通达信息工程有限公司 Bridge monitoring device, system and method based on video image

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101354624A (en) * 2008-05-15 2009-01-28 中国人民解放军国防科学技术大学 Surface computing platform of four-way CCD camera collaborative work and multi-contact detection method
CN102176244A (en) * 2011-02-17 2011-09-07 东方网力科技股份有限公司 Method and device for determining shielding condition of camera head
CN104601965A (en) * 2015-02-06 2015-05-06 巫立斌 Camera shielding detection method
US9807372B2 (en) * 2014-02-12 2017-10-31 Htc Corporation Focused image generation single depth information from multiple images from multiple sensors
CN107493403A (en) * 2017-08-11 2017-12-19 宁波江丰生物信息技术有限公司 A kind of digital pathological section scanning system
CN109712123A (en) * 2018-12-14 2019-05-03 成都安锐格智能科技有限公司 A kind of spot detection method
CN112927178A (en) * 2019-11-21 2021-06-08 中移物联网有限公司 Occlusion detection method, occlusion detection device, electronic device, and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101354624A (en) * 2008-05-15 2009-01-28 中国人民解放军国防科学技术大学 Surface computing platform of four-way CCD camera collaborative work and multi-contact detection method
CN102176244A (en) * 2011-02-17 2011-09-07 东方网力科技股份有限公司 Method and device for determining shielding condition of camera head
US9807372B2 (en) * 2014-02-12 2017-10-31 Htc Corporation Focused image generation single depth information from multiple images from multiple sensors
CN104601965A (en) * 2015-02-06 2015-05-06 巫立斌 Camera shielding detection method
CN107493403A (en) * 2017-08-11 2017-12-19 宁波江丰生物信息技术有限公司 A kind of digital pathological section scanning system
CN109712123A (en) * 2018-12-14 2019-05-03 成都安锐格智能科技有限公司 A kind of spot detection method
CN112927178A (en) * 2019-11-21 2021-06-08 中移物联网有限公司 Occlusion detection method, occlusion detection device, electronic device, and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Dual-lens laser surface-profile measurement method for the light-stripe occlusion problem; Wang Weisong et al.; Chinese Journal of Lasers; 2020-11-30; pages 1-10 *

Also Published As

Publication number Publication date
CN113177944A (en) 2021-07-27

Similar Documents

Publication Publication Date Title
CN113177944B (en) Underwater lens stain detection method and underwater robot
CA2236082C (en) Method and apparatus for detecting eye location in an image
Iqbal et al. Underwater Image Enhancement Using an Integrated Colour Model.
KR101023207B1 (en) Video object abstraction apparatus and its method
Moghimi et al. Real-time underwater image enhancement: a systematic review
EP3798975B1 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
EP0932114A2 (en) A method of and apparatus for detecting a face-like region and observer tracking display
EP0844582A3 (en) System and method for detecting a human face
Atienza-Vanacloig et al. Vision-based discrimination of tuna individuals in grow-out cages through a fish bending model
KR20120090491A (en) Image segmentation device and method based on sequential frame imagery of a static scene
US20050147304A1 (en) Head-top detecting method, head-top detecting system and a head-top detecting program for a human face
CN112561813B (en) Face image enhancement method and device, electronic equipment and storage medium
US9338354B2 (en) Motion blur estimation and restoration using light trails
JP2009123081A (en) Face detection method and photographing apparatus
US20040022448A1 (en) Image processor
Madshaven et al. Hole detection in aquaculture net cages from video footage
Radzi et al. Extraction of moving objects using frame differencing, ghost and shadow removal
Xue Blind image deblurring: a review
JP4780564B2 (en) Image processing apparatus, image processing method, and image processing program
CN114757994B (en) Automatic focusing method and system based on deep learning multitask
KR100649384B1 (en) Method for extracting image characteristic in low intensity of illumination environment
Yong et al. Human motion analysis in dark surrounding using line skeleton scalable model and vector angle technique
JPH11306348A (en) Method and device for object detection
JP2009009206A (en) Extraction method of outline inside image and image processor therefor
JP2005165983A (en) Method for detecting jaw of human face, jaw detection system, and jaw detection program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant