CN117953189A - Viewpoint determining method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117953189A
CN117953189A (application CN202410139849.0A)
Authority
CN
China
Prior art keywords
image
viewpoint
ambiguity
viewpoint position
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410139849.0A
Other languages
Chinese (zh)
Other versions
CN117953189B (en)
Inventor
赵同彪
杨广京
汪振
Current Assignee
Beijing Zhongke Huiling Robot Technology Co ltd
Original Assignee
Beijing Zhongke Huiling Robot Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zhongke Huiling Robot Technology Co ltd filed Critical Beijing Zhongke Huiling Robot Technology Co ltd
Priority to CN202410139849.0A priority Critical patent/CN117953189B/en
Publication of CN117953189A publication Critical patent/CN117953189A/en
Application granted granted Critical
Publication of CN117953189B publication Critical patent/CN117953189B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06V10/10 Image acquisition
    • G06T7/0004 Industrial image inspection
    • G06V10/993 Evaluation of the quality of the acquired pattern
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • G06T2207/10141 Special mode during image acquisition
    • G06T2207/30164 Workpiece; Machine component
    • G06T2207/30168 Image quality inspection
    • G06V2201/06 Recognition of objects for industrial automation


Abstract

The disclosure relates to a viewpoint determining method and apparatus, electronic equipment, and a storage medium. The viewpoint determining method comprises the following steps: acquiring a first image collected at a first viewpoint position by a camera fixed at the tail end of a mechanical arm, and a first image ambiguity corresponding to the first image; determining a first position deviation corresponding to the first viewpoint position based on the first image ambiguity and a functional mapping relation between the viewpoint position deviation corresponding to the tail end of the mechanical arm and the image ambiguity; and adjusting the first viewpoint position based on the first position deviation to obtain a target viewpoint position. In this way, the first position deviation corresponding to the first viewpoint position can be determined from the first image ambiguity through the functional mapping relation between viewpoint position deviation and image ambiguity, and the first viewpoint position can be adjusted based on that deviation, so that a more accurate target viewpoint position is obtained, a sharper image is collected, and the accuracy of surface defect detection is improved.

Description

Viewpoint determining method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of surface defect detection, and in particular relates to a viewpoint determining method, a viewpoint determining device, electronic equipment and a storage medium.
Background
In industrial production, the detection of surface defects on a workpiece is an important step. At present, the main method is to control the mechanical arm of a robot so as to collect images of the object to be detected at different viewpoint positions, and then to detect surface defects of the object based on the collected images.
The sharpness of the collected images is critical to surface defect detection; however, it is affected by the accuracy of the viewpoint position. If the viewpoint position is inaccurate, the collected image of the object to be detected is blurred, and the surface defect detection becomes inaccurate. How to obtain a viewpoint position with higher accuracy, and thereby an image with higher sharpness, is therefore a technical problem to be solved.
Disclosure of Invention
In order to solve the technical problems, the present disclosure provides a viewpoint determining method, a viewpoint determining device, an electronic device and a storage medium.
A first aspect of an embodiment of the present disclosure provides a viewpoint determining method, including:
acquiring a first image acquired by a camera fixed at the tail end of a mechanical arm at a first viewpoint position and a first image ambiguity corresponding to the first image;
Determining a first position deviation corresponding to the first viewpoint position based on the first image ambiguity and a functional mapping relation between the viewpoint position deviation corresponding to the tail end of the mechanical arm and the image ambiguity;
And adjusting the first viewpoint position based on the first position deviation to obtain the target viewpoint position.
A second aspect of an embodiment of the present disclosure provides a viewpoint determining apparatus, including:
the image acquisition module is used for acquiring a first image acquired by a camera fixed at the tail end of the mechanical arm at a first viewpoint position and a first image ambiguity corresponding to the first image;
The position deviation determining module is used for determining a first position deviation corresponding to the first viewpoint position based on the first image ambiguity and a functional mapping relation between the viewpoint position deviation corresponding to the tail end of the mechanical arm and the image ambiguity;
the viewpoint position adjustment module is used for adjusting the first viewpoint position based on the first position deviation to obtain the target viewpoint position.
A third aspect of the disclosed embodiments provides an electronic device, comprising:
A processor;
A memory for storing executable instructions;
the processor is configured to read the executable instructions from the memory, and execute the executable instructions to implement the viewpoint determining method provided in the first aspect.
A fourth aspect of embodiments of the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the viewpoint determining method provided in the first aspect described above.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
With the viewpoint determining method and apparatus, the electronic equipment, and the storage medium provided by the embodiments of the present disclosure, a first image collected at a first viewpoint position by a camera fixed at the tail end of the mechanical arm, and the first image ambiguity corresponding to the first image, can be acquired; a first position deviation corresponding to the first viewpoint position is determined based on the first image ambiguity and the functional mapping relation between the viewpoint position deviation corresponding to the tail end of the mechanical arm and the image ambiguity; and the first viewpoint position is adjusted based on the first position deviation to obtain the target viewpoint position.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart of a viewpoint determining method provided by an embodiment of the present disclosure;
Fig. 2 is a flowchart of a first image blur obtaining method provided by an embodiment of the present disclosure;
Fig. 3 is a flowchart of a method for obtaining a function mapping relationship provided by an embodiment of the present disclosure;
Fig. 4 is a flowchart of a first viewpoint position adjustment method provided by an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of a viewpoint determining apparatus provided by an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
It should be noted that in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises that element.
It should be noted that the modifiers "a," "an," and "a plurality of" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
As noted above, in industrial production the detection of workpiece surface defects is an important step. In current practice, the main method is to control the mechanical arm of a robot to collect images of the object to be detected at different viewpoint positions, and to detect surface defects based on those images. The sharpness of the collected images is critical to the detection, and it is affected by the accuracy of the viewpoint position: an inaccurate viewpoint position yields a blurred image and therefore inaccurate defect detection. How to obtain a viewpoint position with higher accuracy, and thereby an image with higher sharpness, is thus a technical problem to be solved. In view of this problem, embodiments of the present disclosure provide a viewpoint determining method, which is described below in connection with specific embodiments.
Fig. 1 is a flowchart of a viewpoint determining method provided by an embodiment of the present disclosure, where the method may be performed by a viewpoint determining apparatus, and the viewpoint determining apparatus may be implemented in software and/or hardware, and the viewpoint determining apparatus may be configured in an electronic device, for example, a server or a terminal, where the terminal specifically includes a computer or a tablet computer, and so on.
As shown in fig. 1, the viewpoint determining method provided by the present embodiment includes the following steps.
S110, acquiring a first image acquired by a camera fixed at the tail end of the mechanical arm at a first viewpoint position and a first image ambiguity corresponding to the first image.
In the embodiment of the disclosure, the tail end of the mechanical arm may be the end of the mechanical arm of the robot used to perform workpiece surface quality detection.
The first viewpoint position comprises the three-dimensional coordinates of the first viewpoint in the mechanical arm base coordinate system and the normal vector orientation corresponding to those coordinates, where the normal vector orientation can be understood as the direction at the first viewpoint perpendicular to the surface of the object to be detected.
The first image may be an image containing the object to be detected taken at the first viewpoint position.
The first image blur level is used to characterize the sharpness of the first image acquired at the first viewpoint position.
In some embodiments of the present disclosure, the electronic device receives an image acquisition instruction sent by a user side, where the instruction includes a preset viewpoint sequence corresponding to the object to be detected. The electronic device parses the instruction and performs image acquisition based on the preset viewpoint sequence, thereby obtaining the first image collected at the first viewpoint position, and then calculates the ambiguity of the first image by a preset image ambiguity calculation method to obtain the first image ambiguity.
In other embodiments of the present disclosure, an electronic device receives a surface defect detection instruction sent by a user, where the surface defect detection instruction includes identification information of an object to be detected, and based on the identification information of the object to be detected, a first image acquired at a first viewpoint position of the object to be detected and a first image ambiguity corresponding to the first image are acquired from a preset database.
The first view position may be a position corresponding to any one view in the preset view sequence.
In the embodiment of the disclosure, the preset viewpoint sequence is the viewpoint sequence used by the tail end of the mechanical arm to collect the object to be detected, and includes at least one piece of viewpoint information for collecting the object. The preset viewpoint sequence can be obtained as follows: an existing adaptive viewpoint generation method is applied to the CAD model of the object to be detected, or to point cloud information of the object obtained by scanning with a 3D camera, to generate target viewpoints in the coordinate system of the object to be detected; the generated target viewpoints are then converted into the mechanical arm base coordinate system by a hand-eye calibration method.
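As an illustration of the coordinate conversion mentioned above, the sketch below applies a 4×4 homogeneous transform to move a viewpoint (position plus surface normal) from the object coordinate system into the mechanical arm base coordinate system. The transform values are hypothetical; the disclosure does not give a concrete hand-eye calibration result.

```python
import numpy as np

def viewpoint_to_base_frame(T_base_obj, p_obj, n_obj):
    """Convert a viewpoint (3-D position + surface normal) from the object
    coordinate system to the mechanical arm base coordinate system."""
    p_base = (T_base_obj @ np.append(p_obj, 1.0))[:3]  # position: full homogeneous transform
    n_base = T_base_obj[:3, :3] @ n_obj                # normal: rotation part only
    return p_base, n_base

# Hypothetical hand-eye calibration result: no rotation, object origin
# offset 0.4 m along x and 0.2 m along z from the base.
T = np.eye(4)
T[:3, 3] = [0.4, 0.0, 0.2]
p, n = viewpoint_to_base_frame(T, np.array([0.1, 0.0, 0.05]),
                               np.array([0.0, 0.0, 1.0]))
```

Note that the normal vector is transformed by the rotation block only, since orientations are unaffected by translation.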
S120, determining a first position deviation corresponding to the first viewpoint position based on the first image ambiguity and a function mapping relation between the viewpoint position deviation corresponding to the tail end of the mechanical arm and the image ambiguity.
In the embodiment of the disclosure, after obtaining the first image ambiguity, the electronic device determines a first position deviation corresponding to the first viewpoint position based on the first image ambiguity and a functional mapping relationship between the viewpoint position deviation corresponding to the tail end of the mechanical arm and the image ambiguity.
In the embodiment of the disclosure, the functional mapping relationship between the viewpoint position deviation and the image blur degree is a functional mapping relationship between the viewpoint position deviation of the corresponding viewpoint and the image blur degree corresponding to the acquired image when the image is acquired.
Specifically, after obtaining the first image ambiguity, the electronic device obtains a function mapping relation between a viewpoint position deviation corresponding to the tail end of the mechanical arm and the image ambiguity, inputs the first image ambiguity into a function corresponding to the function mapping relation, obtains a viewpoint position deviation having a mapping relation with the first image ambiguity, and determines the viewpoint position deviation as a first position deviation corresponding to a first viewpoint position when the first image is acquired.
In some embodiments of the present disclosure, the functional mapping relationship between the viewpoint position deviation corresponding to the end of the mechanical arm and the image ambiguity may be obtained in advance and stored in a preset database of the electronic device, and when in use, the functional mapping relationship may be obtained by directly obtaining from the preset database.
In other embodiments of the present disclosure, the functional mapping relationship between the viewpoint position deviation corresponding to the end of the mechanical arm and the image ambiguity may be obtained by performing calculation according to the target dataset by using a preset algorithm before or after the first image is obtained or simultaneously with the first image is obtained.
The preset algorithm may be a conventional algorithm for obtaining a function mapping relationship between two variables, which is not described herein.
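The disclosure leaves the "preset algorithm" unspecified beyond being a conventional two-variable fitting method. One possible realization, with invented calibration numbers, is a least-squares polynomial fit of deviation against measured ambiguity on a monotonic branch of the data (the mapping is only invertible away from the sharpness peak):

```python
import numpy as np

# Hypothetical target dataset: viewpoint position deviations (metres) and
# the image ambiguity measured at each deviation.
deviations = np.array([-0.015, -0.010, -0.005, 0.0, 0.005, 0.010, 0.015])
blurs      = np.array([ 110.0,  160.0,  205.0, 240.0, 210.0, 155.0, 105.0])

# Fit deviation as a function of ambiguity on the non-negative branch,
# where the relation is monotonic and hence invertible.
branch = deviations >= 0
coeffs = np.polyfit(blurs[branch], deviations[branch], deg=2)

def deviation_from_blur(b):
    """Look up the position deviation mapped to a measured ambiguity value."""
    return np.polyval(coeffs, b)
```

A quadratic is only one choice; any conventional regression of the two variables would serve the same role in the functional mapping relation.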
And S130, adjusting the first viewpoint position based on the first position deviation to obtain the target viewpoint position.
In the embodiment of the disclosure, after acquiring the first position deviation, the electronic device adjusts the first viewpoint position based on the first position deviation to obtain the target viewpoint position.
Specifically, after the electronic device obtains the first position deviation, the deviation of the first viewpoint position is eliminated by means of the first position deviation to obtain the target viewpoint position; for example, the first position deviation is added to the position coordinate of the first viewpoint position to obtain the target viewpoint position.
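The elimination step can be sketched minimally as follows; the coordinate values are hypothetical:

```python
import numpy as np

def adjust_viewpoint(p_first, deviation):
    """Eliminate the estimated deviation by adding it to the first
    viewpoint position, giving the target viewpoint position."""
    return np.asarray(p_first) + np.asarray(deviation)

# Hypothetical first viewpoint position and estimated deviation (metres).
target = adjust_viewpoint([0.50, 0.20, 0.30], [0.0, 0.0, -0.008])
```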
In the embodiment of the disclosure, a first image acquired at a first viewpoint position by a camera fixed at the tail end of the mechanical arm and a first image ambiguity corresponding to the first image can be acquired, the first position deviation corresponding to the first viewpoint position is determined based on the first image ambiguity and a function mapping relation between viewpoint position deviation corresponding to the tail end of the mechanical arm and the image ambiguity, the first viewpoint position is adjusted based on the first position deviation, and a target viewpoint position is obtained.
Fig. 2 is a flowchart of a first image blur obtaining method according to an embodiment of the present disclosure.
As shown in fig. 2, the obtaining the first image ambiguity corresponding to the first image may specifically include the following steps:
S210, gray scale processing is carried out on the first image, and a gray scale image corresponding to the first image is obtained.
In the embodiment of the disclosure, the gray scale processing of the first image may be performed by using a binarization processing method, or other image gray scale processing methods may be used, which is not limited herein.
S220, calculating gradient values of pixel points contained in the gray level map based on a preset Laplace operator.
In an embodiment of the present disclosure, the preset laplace operator may be a laplace operator manually preset and stored in the electronic device for calculating gradient values of pixels in the image.
Specifically, after obtaining the gray scale map corresponding to the first image, the electronic device determines the gray value corresponding to each pixel point in the gray scale map. For each pixel point, the gray values in its neighborhood are multiplied element-wise by the matrix corresponding to the preset Laplace operator and the resulting products are added together; the value so obtained is determined as the gradient value of that pixel point.
The gradient value can be understood as the change rate between adjacent pixel points, and can be used for representing the information of the edge position of the first image, wherein the larger the gradient value is, the more obvious the edge of the first image is.
In some embodiments of the present disclosure, before the gradient values of the pixel points contained in the gray scale map are calculated based on the preset Laplace operator, the method may further include: smoothing the gray scale map to obtain a smoothed gray scale map; and calculating the gradient values of the pixel points contained in the smoothed gray scale map based on the preset Laplace operator.
In the embodiment of the disclosure, after the electronic device obtains the gray level image corresponding to the first image, the gray level image may be smoothed by using a gaussian filter or the like, so as to remove noise in the gray level image, reduce the influence of the noise on the gradient value calculation, improve the accuracy of the obtained gradient value, and further improve the accuracy of the first image ambiguity.
The specific embodiment of calculating the gradient value of the pixel point included in the smoothed gray-scale map based on the preset laplacian is similar to the specific embodiment of calculating the gradient value of the pixel point included in the gray-scale map based on the preset laplacian, and will not be described here.
S230, calculating first image ambiguity corresponding to the first image based on gradient values of pixel points contained in the gray level map.
In the embodiment of the disclosure, after obtaining the gradient value of the pixel point included in the gray scale map, the electronic device performs calculation of the first image ambiguity based on the gradient value, thereby obtaining the first image ambiguity.
The calculating the first image ambiguity corresponding to the first image based on the gradient value of the pixel point included in the gray scale image may specifically include: averaging the gradient values of the pixel points contained in the gray level map to obtain a gradient average value; and calculating a variance corresponding to the first image based on the gradient average value and the gradient value of the pixel point contained in the gray scale image, and determining the variance as the first image ambiguity.
Specifically, after obtaining the gradient values of the pixel points contained in the gray scale map, the electronic device adds the gradient values together and divides the sum by the number of pixel points to obtain the gradient average value; after obtaining the gradient average value, the variance corresponding to the first image is calculated based on the gradient average value and the gradient values of the pixel points contained in the gray scale map, and the variance is determined as the first image ambiguity.
Further, calculating the variance corresponding to the first image based on the gradient average value and the gradient value of the pixel point included in the gray scale map may specifically include: respectively calculating the square value of the difference between the gradient value of each pixel point and the gradient average value; and adding the square values and dividing the sum by the total number of pixels in the first image to obtain the corresponding variance of the first image.
For example, if the gradient average value is A, the gradient value of pixel point i is Bi, and the total number of pixel points is C, the electronic device determines the sum of (Bi − A)² over all pixel points, divided by C, as the variance corresponding to the first image.
In the embodiment of the disclosure, the gray level processing can be performed on the first image, the gradient value corresponding to each pixel point is determined based on the gray level value of each pixel point in the gray level graph after the gray level processing, and then the variance of the first image is calculated according to the gradient value, and the variance is determined as the first image ambiguity, so that the obtained first image ambiguity can accurately reflect the definition of the image.
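The pipeline of S210 to S230 (gray scale map, Laplace gradient values, variance as the ambiguity score) can be sketched in pure NumPy. The 3×3 kernel used here is one common Laplace operator; the operator actually preset in the disclosure is not specified, and this sketch normalizes the variance by the number of pixels with a valid kernel response:

```python
import numpy as np

LAPLACE_K = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)  # a common 3x3 Laplace operator

def image_ambiguity(gray):
    """Variance of the Laplace response of a gray scale map,
    used as the image ambiguity score."""
    h, w = gray.shape
    grad = np.zeros((h - 2, w - 2))
    # 'valid' convolution via shifted slices (the kernel is symmetric,
    # so correlation and convolution coincide)
    for i in range(3):
        for j in range(3):
            grad += LAPLACE_K[i, j] * gray[i:i + h - 2, j:j + w - 2]
    mean = grad.mean()                         # gradient average value
    return float(((grad - mean) ** 2).mean())  # variance of the gradient values

flat = np.full((16, 16), 128.0)            # uniform image: no edges
edged = flat.copy(); edged[:, 8:] = 255.0  # image with one sharp vertical edge
```

On a uniform image every gradient value is zero and the score is zero; an image with a sharp edge produces a strictly larger score.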
In some embodiments of the present disclosure, before determining the first position deviation corresponding to the first viewpoint position based on the first image blur degree and the functional mapping relationship between the viewpoint position deviation corresponding to the end of the mechanical arm and the image blur degree, the viewpoint determining method may further include: and acquiring a function mapping relation between the viewpoint position deviation corresponding to the tail end of the mechanical arm and the image ambiguity.
Further, as shown in fig. 3, the obtaining a functional mapping relationship between the viewpoint position deviation corresponding to the end of the mechanical arm and the image ambiguity may specifically include the following steps:
s310, determining at least one viewpoint active position corresponding to the tail end of the mechanical arm when the second viewpoint position is taken as the center.
In the embodiment of the present disclosure, the second view position may be a position corresponding to any one view in the preset view sequence.
Specifically, the electronic device randomly selects one view point from a preset view point sequence, determines a position corresponding to the view point as a second view point position, and determines at least one view point active position corresponding to the tail end of the mechanical arm when the tail end of the mechanical arm is centered on the second view point position based on a preset margin range.
For example, if the preset margin range is ±0.015 and the position coordinate of the second viewpoint P0 is (x, y, z), the movable range of the tail end of the mechanical arm is centered on P0 and corresponds to x ± 0.015, y ± 0.015 and z ± 0.015; the position corresponding to any point within this movable range is determined as a viewpoint active position.
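Sampling viewpoint active positions within the preset margin range can be sketched as a regular grid around P0; the grid resolution (5 steps per axis) is an assumption, since the disclosure only bounds the range:

```python
import numpy as np

def viewpoint_active_positions(p0, margin=0.015, steps=5):
    """Sample candidate viewpoint active positions on a regular grid
    centred on the second viewpoint position, within +/- margin per axis."""
    offsets = np.linspace(-margin, margin, steps)
    grid = np.stack(np.meshgrid(offsets, offsets, offsets, indexing="ij"),
                    axis=-1)
    return np.asarray(p0) + grid.reshape(-1, 3)

# Hypothetical second viewpoint position P0 (metres).
positions = viewpoint_active_positions([0.5, 0.2, 0.3])
```

With an odd number of steps the grid includes P0 itself, so the original viewpoint remains among the candidates.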
S320, obtaining a second image corresponding to each viewpoint active position and a second image ambiguity corresponding to the second image, and forming a target data set based on at least one viewpoint active position and the second image ambiguity corresponding to each viewpoint active position.
In the embodiment of the disclosure, after determining at least one viewpoint active position, the electronic device controls a camera at the tail end of the mechanical arm to acquire a second image at each viewpoint active position, further acquires a second image corresponding to each viewpoint active position, and performs image ambiguity calculation on the second image to obtain a second image ambiguity corresponding to the second image.
The manner of obtaining the second image ambiguity corresponding to the second image is similar to the specific implementation manner of obtaining the first image ambiguity in the above embodiment, and will not be described herein.
Further, after obtaining the second image ambiguity corresponding to the second image, the electronic device establishes an association relationship between the second image ambiguity and the viewpoint active position corresponding to the acquired second image, and determines the association relationship as a target data set.
S330, determining a third viewpoint position corresponding to the tail end of the mechanical arm based on the target data set, wherein the third viewpoint position is a viewpoint position corresponding to an image which has highest image definition and contains the target position of the object to be detected and can be shot by a camera at the tail end of the mechanical arm.
In some embodiments of the present disclosure, a viewpoint active position corresponding to a second image with a lowest second image blur degree is selected from the target data set, and is determined as a third viewpoint position.
In other embodiments of the present disclosure, determining, based on the target data set, a third viewpoint position corresponding to the end of the mechanical arm may specifically include: inputting the target data set into a preset state estimator; and screening at least one viewpoint active position in the target data set and the second image ambiguity corresponding to each viewpoint active position based on a preset reward function in the state estimator, and determining a third viewpoint position.
The preset state estimator may be any machine learning model that can be used for optimal viewpoint position estimation, and is not limited herein.
The preset reward functions include a position reward function and a state reward function.
In the disclosed embodiment, the position reward function is: f_p = ‖p_0 - p_r‖, wherein p_0 represents the original viewpoint position, i.e., the second viewpoint position, and p_r is any viewpoint active position.
The state reward function is: f_h = h_r × sign(h_r - h_0), wherein h_r is the second image ambiguity corresponding to any viewpoint active position and h_0 is the preset image ambiguity.
In the embodiment of the present disclosure, the preset image ambiguity is a preset threshold for evaluating whether an image is blurred, and it may be obtained in advance by analyzing image data.
Further, the screening process is performed on at least one viewpoint active position in the target data set and the second image ambiguity corresponding to each viewpoint active position based on the preset reward function in the state estimator, and determining the third viewpoint position may specifically include:
Inputting, for each viewpoint active position, the second viewpoint position and the viewpoint active position into the position reward function to obtain a position reward value corresponding to the viewpoint active position; inputting the second image ambiguity corresponding to the viewpoint active position and the preset image ambiguity into the state reward function to obtain a state reward value corresponding to the viewpoint active position; determining a first weight corresponding to the position reward value and a second weight corresponding to the state reward value, calculating a first product of the position reward value and the first weight and a second product of the state reward value and the second weight, and determining the sum of the first product and the second product as a cumulative reward value corresponding to the viewpoint active position; and determining the viewpoint active position with the highest cumulative reward value as the third viewpoint position.
In the disclosed embodiment, the cumulative reward value is calculated as: f = λ_p f_p + λ_h f_h, where λ_p and λ_h are the weights of the position reward value and the state reward value, respectively.
In the embodiment of the present disclosure, the first weight and the second weight may be weights preset to the position reward value and the state reward value, respectively.
Alternatively, the first weight and the second weight may be obtained by adaptively adjusting the preset weight of the position reward value and the preset weight of the state reward value, respectively, during the process of determining the third viewpoint position by the state estimator.
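The reward functions and cumulative reward described above can be sketched in a few lines of Python. This is a minimal illustration of the stated formulas, not the state estimator itself; the weight values λ_p = 0.3 and λ_h = 0.7 and the dataset entries are assumptions for demonstration:

```python
import math

def position_reward(p0, pr):
    # f_p = ||p0 - pr||: Euclidean distance from the second viewpoint position.
    return math.dist(p0, pr)

def state_reward(hr, h0):
    # f_h = hr * sign(hr - h0), with sign(0) = 0.
    sign = (hr > h0) - (hr < h0)
    return hr * sign

def cumulative_reward(p0, pr, hr, h0, lam_p=0.3, lam_h=0.7):
    # f = lam_p * f_p + lam_h * f_h  (weights are illustrative assumptions).
    return lam_p * position_reward(p0, pr) + lam_h * state_reward(hr, h0)

def third_viewpoint(p0, dataset, h0, lam_p=0.3, lam_h=0.7):
    """dataset: list of (viewpoint_active_position, second_image_ambiguity).
    Returns the viewpoint active position with the highest cumulative reward."""
    best = max(dataset, key=lambda e: cumulative_reward(p0, e[0], e[1], h0, lam_p, lam_h))
    return best[0]
```

In a real system the weights could instead be adapted by the state estimator during screening, as noted above.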
And S340, obtaining a function mapping relation between the viewpoint position deviation and the image ambiguity based on the third viewpoint position, each viewpoint active position and the second image ambiguity.
In the embodiment of the disclosure, after obtaining the third viewpoint position, the electronic device calculates a position deviation between each viewpoint active position and the third viewpoint position; and obtaining a function mapping relation between the viewpoint position deviation and the image ambiguity based on the position deviation and the second image ambiguity.
In some embodiments of the present disclosure, after obtaining a position deviation between each viewpoint active position and a third viewpoint position, the electronic device inputs the position deviation corresponding to each viewpoint active position and the second image ambiguity to a preset machine learning model, and performs fitting processing on the position deviation corresponding to each viewpoint active position and the second image ambiguity based on the preset machine learning model, so as to obtain a functional mapping relationship between the viewpoint position deviation and the image ambiguity.
The preset machine learning model may be any machine learning model capable of performing data fitting processing, which is not limited herein.
In other embodiments of the present disclosure, after obtaining the position deviation between each viewpoint active position and the third viewpoint position, the electronic device performs a fitting process on the position deviation corresponding to each viewpoint active position and the second image ambiguity based on a preset fitting algorithm, so as to obtain a functional mapping relationship between the viewpoint position deviation and the image ambiguity.
The preset fitting algorithm may be an algorithm for data fitting, such as a least square method, and the like, which is not limited herein.
In the embodiment of the disclosure, one viewpoint, such as the second viewpoint, can be selected arbitrarily from the preset viewpoint sequence, and each viewpoint active position within the preset margin range around the second viewpoint position can be explored by "trial and error". The functional mapping relationship between viewpoint position deviation and image ambiguity is then determined from the second images, second image ambiguities, and viewpoint active positions obtained during this "trial-and-error" process, so that the obtained functional mapping relationship fits reality better and is more accurate.
In an embodiment of the present disclosure, determining the first position deviation corresponding to the first viewpoint position based on the first image ambiguity and the functional mapping relationship between the viewpoint position deviation corresponding to the tail end of the mechanical arm and the image ambiguity may specifically include: inputting the first image ambiguity into the functional mapping relationship for calculation, obtaining the position deviation corresponding to the first image ambiguity, and determining that position deviation as the first position deviation.
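A hedged sketch of both steps — least-squares fitting of ambiguity against position deviation, then inverting the fitted mapping to recover a deviation from a measured ambiguity — could look as follows. The sample data and the quadratic model are assumptions chosen for demonstration; the disclosure leaves the fitting algorithm and model form open:

```python
import numpy as np

# Hypothetical positional deviations ||p_i - p_3|| for each viewpoint active
# position, and the second image ambiguity measured at each of them.
deviations = np.array([0.000, 0.004, 0.008, 0.012, 0.015])
ambiguities = np.array([12.0, 14.5, 18.1, 23.0, 27.9])

# Least-squares fit of an assumed quadratic model: ambiguity ~ a*d^2 + b*d + c.
coeffs = np.polyfit(deviations, ambiguities, deg=2)

def deviation_for_ambiguity(h, coeffs):
    """Invert the fitted mapping: solve a*d^2 + b*d + (c - h) = 0 and keep
    the smallest non-negative real root (0.0 if none exists)."""
    a, b, c = coeffs
    roots = np.roots([a, b, c - h])
    real = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0.0]
    return min(real) if real else 0.0
```

The inversion step mirrors the paragraph above: a measured first image ambiguity is fed through the fitted mapping to recover the corresponding first position deviation.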
Further, fig. 4 is a flowchart of a first viewpoint position adjustment method according to an embodiment of the present disclosure, and as shown in fig. 4, the adjustment of the first viewpoint position based on the first position deviation may specifically include the following steps:
S410, determining whether the first viewpoint position is a position corresponding to the first viewpoint in a preset viewpoint sequence corresponding to the tail end of the mechanical arm.
Specifically, before adjusting the first viewpoint position, the electronic device matches the three-dimensional coordinates corresponding to the first viewpoint position, and the normal vector corresponding to those coordinates, against the viewpoint information corresponding to the first viewpoint in the preset viewpoint sequence; if they match, it determines that the first viewpoint position is the position corresponding to the first viewpoint in the preset viewpoint sequence corresponding to the tail end of the mechanical arm and performs step S440; otherwise, it performs steps S420-S430.
And S420, when the first viewpoint position is determined to be the position corresponding to the non-first viewpoint in the preset viewpoint sequence corresponding to the tail end of the mechanical arm, determining second position deviation corresponding to a fourth viewpoint position which is positioned before the first viewpoint position and adjacent to the first viewpoint position.
In the embodiment of the present disclosure, the specific implementation of determining the second position deviation corresponding to the fourth viewpoint position located before and adjacent to the first viewpoint position is similar to the implementation of determining the first position deviation corresponding to the first viewpoint position described above, and will not be described herein.
And S430, adjusting the first viewpoint position based on the first position deviation and the second position deviation to obtain the target viewpoint position.
Specifically, after the second position deviation is obtained, the deviation of the first viewpoint position is eliminated based on the first position deviation and the second position deviation; for example, the first position deviation and the second position deviation are added to the position coordinate of the first viewpoint position to obtain the target viewpoint position.
And S440, when the first viewpoint position is determined to be the position corresponding to the first viewpoint in the preset viewpoint sequence corresponding to the tail end of the mechanical arm, adjusting the first viewpoint position based on the first position deviation to obtain the target viewpoint position.
In the embodiment of the present disclosure, the step S440 is similar to the specific implementation of the step S130, which is not described herein.
In the embodiment of the disclosure, the adjustment can be performed according to the position of the first viewpoint position within the preset viewpoint sequence. When the viewpoint corresponding to the first viewpoint position is not the first viewpoint, the influence of the viewpoint position deviations of the viewpoints located before it is taken into account, which improves the accuracy of the obtained target viewpoint position.
Fig. 5 is a schematic structural diagram of a viewpoint determining apparatus provided in an embodiment of the present disclosure.
In the embodiment of the disclosure, the viewpoint determining device may be disposed in an electronic device, and is understood to be a part of functional modules in the electronic device. Specifically, the electronic device may be a server or a terminal, where the terminal specifically includes a computer or a tablet computer, and the like, which is not limited herein.
As shown in fig. 5, the viewpoint determining apparatus 500 may include an image acquisition module 510, a position deviation determination module 520, and a viewpoint position adjustment module 530.
The image acquisition module 510 may be configured to acquire a first image acquired by a camera fixed at an end of the mechanical arm at a first viewpoint position and a first image ambiguity corresponding to the first image.
The position deviation determining module 520 may be configured to determine a first position deviation corresponding to the first viewpoint position based on the first image blur degree and a functional mapping relationship between the viewpoint position deviation corresponding to the end of the mechanical arm and the image blur degree.
The viewpoint position adjustment module 530 may be configured to adjust the first viewpoint position based on the first position deviation, to obtain the target viewpoint position.
In the embodiment of the disclosure, a first image acquired at the first viewpoint position by a camera fixed at the tail end of the mechanical arm, and the first image ambiguity corresponding to the first image, can be acquired; the first position deviation corresponding to the first viewpoint position is determined based on the first image ambiguity and the functional mapping relationship between the viewpoint position deviation corresponding to the tail end of the mechanical arm and the image ambiguity; and the first viewpoint position is adjusted based on the first position deviation to obtain the target viewpoint position.
In some embodiments of the present disclosure, the image acquisition module 510 includes a gray level processing unit, a gradient value calculation unit, and an image blur degree calculation unit.
The gray processing unit may be configured to perform gray processing on the first image, so as to obtain a gray map corresponding to the first image.
The gradient value calculating unit may be configured to calculate the gradient value of the pixel point included in the gray scale map based on a preset laplacian.
The image blur degree calculating unit may be configured to calculate the first image blur degree corresponding to the first image based on the gradient values of the pixel points included in the gray scale map.
In some embodiments of the present disclosure, the image acquisition module 510 may further include a smoothing processing unit.
The smoothing unit may be configured to perform smoothing on the gray scale map before the gradient values of the pixel points contained in the gray scale map are calculated based on the preset Laplacian operator, so as to obtain the smoothed gray scale map.
In some embodiments of the present disclosure, the image ambiguity calculation unit may be specifically configured to average gradient values of pixels included in the gray map to obtain a gradient average value; and calculating a variance corresponding to the first image based on the gradient average value and the gradient value of the pixel point contained in the gray scale image, and determining the variance as the first image ambiguity.
In some embodiments of the present disclosure, the image ambiguity calculation unit may be further specifically configured to calculate a square value of a difference between the gradient value and the gradient average value of each pixel point; and adding the square values and dividing the sum by the total number of pixels in the first image to obtain the corresponding variance of the first image.
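The gray-scale, gradient, and variance steps handled by these units can be sketched in a few lines of Python. The 3×3 four-neighbour Laplacian kernel and the valid-region convolution below are assumptions (the disclosure does not fix a kernel, and divides the variance by the total pixel count of the first image, which differs from the valid region only by the border pixels):

```python
import numpy as np

# An assumed concrete choice of the "preset Laplacian operator".
LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def image_ambiguity(gray):
    """Variance-of-Laplacian metric on a 2-D gray-scale array:
    per-pixel gradient values -> gradient average value -> variance."""
    h, w = gray.shape
    resp = np.zeros((h - 2, w - 2))
    for dy in range(3):                        # manual 'valid' convolution
        for dx in range(3):
            resp += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    mean = resp.mean()                         # gradient average value
    return float(((resp - mean) ** 2).mean())  # variance of the gradients
```

A perfectly uniform image yields a variance of 0, while textured detail raises the value, so the metric discriminates between flat and detailed image content.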
In some embodiments of the present disclosure, the viewpoint determining apparatus 500 may further include a mapping relation acquisition module.
The mapping relationship obtaining module may be configured to obtain a functional mapping relationship between a viewpoint position deviation corresponding to the end of the mechanical arm and the image ambiguity before determining the first position deviation corresponding to the first viewpoint position based on the first image ambiguity and the functional mapping relationship between the viewpoint position deviation corresponding to the end of the mechanical arm and the image ambiguity.
In some embodiments of the present disclosure, the mapping relation acquisition module includes a viewpoint activity position determination unit, a data set generation unit, a viewpoint position determination unit, and a mapping relation acquisition unit.
The viewpoint active position determining unit may be configured to determine at least one viewpoint active position corresponding to the robot arm tip when centered on the second viewpoint position.
The data set generating unit may be configured to acquire a second image corresponding to each viewpoint active position and a second image blur degree corresponding to the second image, and form the target data set based on at least one viewpoint active position and the second image blur degree corresponding to each viewpoint active position.
The viewpoint position determining unit may be configured to determine, based on the target data set, a third viewpoint position corresponding to the end of the mechanical arm, where the third viewpoint position is a viewpoint position corresponding to an image that can be captured by a camera at the end of the mechanical arm and that has the highest image definition and includes the target position of the object to be detected.
The mapping relation obtaining unit may be configured to obtain a functional mapping relation between the viewpoint position deviation and the image ambiguity based on the third viewpoint position, each viewpoint active position, and the second image ambiguity.
In some embodiments of the present disclosure, the viewpoint position determination unit may be specifically configured to input the target data set to a preset state estimator; and screening at least one viewpoint active position in the target data set and the second image ambiguity corresponding to each viewpoint active position based on a preset reward function in the state estimator, and determining a third viewpoint position.
In some embodiments of the present disclosure, the preset reward functions include a position reward function and a state reward function.
The viewpoint position determining unit may be further specifically configured to input, for each viewpoint active position, the second viewpoint position and the viewpoint active position into the position reward function to obtain a position reward value corresponding to the viewpoint active position; input the second image ambiguity corresponding to the viewpoint active position and the preset image ambiguity into the state reward function to obtain a state reward value corresponding to the viewpoint active position; determine a first weight corresponding to the position reward value and a second weight corresponding to the state reward value, calculate a first product of the position reward value and the first weight and a second product of the state reward value and the second weight, and determine the sum of the first product and the second product as a cumulative reward value corresponding to the viewpoint active position; and determine the viewpoint active position with the highest cumulative reward value as the third viewpoint position.
In some embodiments of the present disclosure, the mapping relationship obtaining unit may be specifically configured to calculate a positional deviation between each viewpoint active position and the third viewpoint position; and obtaining a function mapping relation between the viewpoint position deviation and the image ambiguity based on the position deviation and the second image ambiguity.
In some embodiments of the present disclosure, the mapping relationship obtaining unit may be further specifically configured to input the position deviation and the second image ambiguity into a preset machine learning model, and perform fitting processing on the position deviation and the second image ambiguity based on the preset machine learning model to obtain a functional mapping relationship.
In some embodiments of the present disclosure, the position deviation determining module 520 may be specifically configured to input the first image blur degree into the function mapping relationship to perform calculation, obtain a position deviation corresponding to the first image blur degree, and determine the position deviation as the first position deviation.
In some embodiments of the present disclosure, the viewpoint determining apparatus 500 may further include an information determining module.
The information determining module may be configured to determine, before adjusting the first viewpoint position based on the first position deviation, whether the first viewpoint position is a position corresponding to a first viewpoint in a preset viewpoint sequence corresponding to the end of the mechanical arm.
In some embodiments of the present disclosure, the viewpoint position adjustment module 530 may be specifically configured to, when determining that the first viewpoint position is a position corresponding to a non-first viewpoint in a preset viewpoint sequence corresponding to the end of the mechanical arm, determine a second position deviation corresponding to a fourth viewpoint position that is located before and adjacent to the first viewpoint position; and adjusting the first viewpoint position based on the first position deviation and the second position deviation to obtain the target viewpoint position.
It should be noted that, the viewpoint determining apparatus 500 shown in fig. 5 may perform the steps in the foregoing method embodiments, and implement the respective processes and effects in the foregoing method embodiments, which are not described herein.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
In the embodiment of the present disclosure, the electronic device shown in fig. 6 may be a server or a terminal, where the terminal specifically includes a computer or a tablet computer, and the disclosure is not limited thereto.
As shown in fig. 6, the electronic device may include a processor 610 and a memory 620 storing computer program instructions.
In particular, the processor 610 may include a Central Processing Unit (CPU) or an Application-Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present disclosure.
Memory 620 may include mass storage for information or instructions. By way of example, and not limitation, memory 620 may include a Hard Disk Drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 620 may include removable or non-removable (or fixed) media, where appropriate. The memory 620 may be internal or external to the integrated gateway device, where appropriate. In a particular embodiment, the memory 620 is a non-volatile solid-state memory. In a particular embodiment, the memory 620 includes Read-Only Memory (ROM). The ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Electrically Alterable ROM (EAROM), or flash memory, or a combination of two or more of these, where appropriate.
The processor 610 reads and executes the computer program instructions stored in the memory 620 to perform the steps of the viewpoint determining method provided by the embodiments of the present disclosure.
In one example, the electronic device may also include a transceiver 630 and a bus 640. In which, as shown in fig. 6, processor 610, memory 620, and transceiver 630 are connected and communicate with each other via bus 640.
Bus 640 includes hardware, software, or both. By way of example, and not limitation, the buses may include an Accelerated Graphics Port (AGP) or other graphics bus, an Extended Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or other suitable bus, or a combination of two or more of these. Bus 640 may include one or more buses, where appropriate.
The present disclosure also provides a computer-readable storage medium, which may store a computer program that, when executed by a processor, causes the processor to implement the viewpoint determining method provided by the embodiments of the present disclosure.
The storage medium described above may, for example, be the memory 620 containing computer program instructions executable by the processor 610 of the electronic device to perform the viewpoint determining method provided by the embodiments of the present disclosure. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, for example, a ROM, a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A viewpoint determining method, comprising:
acquiring a first image acquired by a camera fixed at the tail end of a mechanical arm at a first viewpoint position and a first image ambiguity corresponding to the first image;
determining a first position deviation corresponding to the first viewpoint position based on the first image ambiguity and a function mapping relation between the viewpoint position deviation corresponding to the tail end of the mechanical arm and the image ambiguity;
and adjusting the first viewpoint position based on the first position deviation to obtain a target viewpoint position.
2. The method of claim 1, wherein obtaining a first image blur level corresponding to the first image comprises:
carrying out gray scale processing on the first image to obtain a gray scale image corresponding to the first image;
calculating gradient values of pixel points contained in the gray level map based on a preset Laplacian operator;
and calculating a first image ambiguity corresponding to the first image based on the gradient values of the pixel points contained in the gray scale map.
3. The method according to claim 2, wherein before the calculating the gradient values of the pixel points contained in the gray level map based on the preset Laplacian operator, the method further comprises:
and carrying out smoothing treatment on the gray level image to obtain a smoothed gray level image.
4. The method according to claim 2, wherein the calculating the first image blur degree corresponding to the first image based on the gradient values of the pixel points included in the gray scale map includes:
averaging gradient values of pixel points contained in the gray level map to obtain a gradient average value;
And calculating a variance corresponding to the first image based on the gradient average value and the gradient value of the pixel point contained in the gray scale image, and determining the variance as the first image ambiguity.
5. A viewpoint determining apparatus, characterized by comprising:
The image acquisition module is used for acquiring a first image acquired by a camera fixed at the tail end of the mechanical arm at a first viewpoint position and a first image ambiguity corresponding to the first image;
The position deviation determining module is used for determining a first position deviation corresponding to the first viewpoint position based on the first image ambiguity and a functional mapping relation between the viewpoint position deviation corresponding to the tail end of the mechanical arm and the image ambiguity;
and the viewpoint position adjustment module is used for adjusting the first viewpoint position based on the first position deviation to obtain a target viewpoint position.
6. The apparatus of claim 5, wherein the image acquisition module comprises a gray scale processing unit, a gradient value calculation unit, and an image blur degree calculation unit;
The gray processing unit is used for carrying out gray processing on the first image to obtain a gray image corresponding to the first image;
The gradient value calculation unit is used for calculating the gradient value of the pixel point contained in the gray level map based on a preset Laplacian operator;
The image ambiguity calculating unit is used for calculating a first image ambiguity corresponding to the first image based on gradient values of pixel points contained in the gray level map.
7. The apparatus of claim 6, wherein the image acquisition module further comprises a smoothing processing unit;
the smoothing unit is configured to perform smoothing on the gray level map before the gradient values of the pixel points contained in the gray level map are calculated based on the preset Laplacian operator, so as to obtain a smoothed gray level map.
8. The apparatus according to claim 6, wherein the image ambiguity calculation unit is specifically configured to average gradient values of pixels included in the gray map to obtain a gradient average value;
And calculating a variance corresponding to the first image based on the gradient average value and the gradient value of the pixel point contained in the gray scale image, and determining the variance as the first image ambiguity.
9. An electronic device, comprising:
A processor;
A memory for storing executable instructions;
Wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the viewpoint determining method of any of the preceding claims 1-4.
10. A computer readable storage medium, characterized in that the storage medium stores a computer program, which when executed by a processor causes the processor to implement the viewpoint determining method of any of the preceding claims 1-4.
CN202410139849.0A 2024-01-31 2024-01-31 Viewpoint determining method and device, electronic equipment and storage medium Active CN117953189B (en)

Publications (2)

Publication Number Publication Date
CN117953189A true CN117953189A (en) 2024-04-30
CN117953189B CN117953189B (en) 2024-06-18


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003329428A (en) * 2002-05-16 2003-11-19 Sumitomo Chem Co Ltd Device and method for surface inspection
CN112288734A (en) * 2020-11-06 2021-01-29 西安工程大学 Printed fabric surface defect detection method based on image processing
CN115063405A (en) * 2022-07-27 2022-09-16 武汉工程大学 Method, system, electronic device and storage medium for detecting defects on surface of steel
CN115345822A (en) * 2022-06-08 2022-11-15 南京航空航天大学 Automatic three-dimensional detection method for surface structure light of aviation complex part
CN116309510A (en) * 2023-03-29 2023-06-23 清华大学 Numerical control machining surface defect positioning method and device
CN116580028A (en) * 2023-07-12 2023-08-11 深圳思谋信息科技有限公司 Object surface defect detection method, device, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周神特; 王宇宇; 张潇; 左泽青; 赵文宏: "Machine-vision-based optical detection technology for surface defects of sheet metal", 无损检测 (Nondestructive Testing), no. 09, 10 September 2020 (2020-09-10), pages 45-50 *

Also Published As

Publication number Publication date
CN117953189B (en) 2024-06-18

Similar Documents

Publication Publication Date Title
CN108229475B (en) Vehicle tracking method, system, computer device and readable storage medium
US9146185B2 (en) Hardness tester and hardness test method
CN106920245B (en) Boundary detection method and device
CN111768450A (en) Automatic detection method and device for line deviation of structured light camera based on speckle pattern
EP3330921A1 (en) Information processing device, measuring apparatus, system, calculating method, storage medium, and article manufacturing method
CN105718931B (en) System and method for determining clutter in acquired images
CN113176270B (en) Dimming method, device and equipment
KR20180090756A (en) System and method for scoring color candidate poses against a color image in a vision system
CN108802051B (en) System and method for detecting bubble and crease defects of linear circuit of flexible IC substrate
JP6405124B2 (en) Inspection device, inspection method, and program
EP3712842A1 (en) System and method for evaluating symbols
CN105354816B (en) Electronic component positioning method and device
CN114419045A (en) Method, device and equipment for detecting defects of photoetching mask plate and readable storage medium
CN113066088A (en) Detection method, detection device and storage medium in industrial detection
CN114612410A (en) Novel clothing detects device
CN112950598B (en) Flaw detection method, device, equipment and storage medium for workpiece
JP6585793B2 (en) Inspection device, inspection method, and program
CN112085752B (en) Image processing method, device, equipment and medium
US20170061649A1 (en) Image measuring apparatus and non-temporary recording medium on which control program of same apparatus is recorded
JP5772675B2 (en) Gray image edge extraction method, edge extraction device, and gray image edge extraction program
CN117073988B (en) System and method for measuring distance of head-up display virtual image and electronic equipment
CN117953189B (en) Viewpoint determining method and device, electronic equipment and storage medium
CN117392178A (en) Method and device for extracting motion characteristics of molten pool in powder spreading and material adding manufacturing process
CN113160223A (en) Contour determination method, contour determination device, detection device and storage medium
CN112508925B (en) Electronic lock panel quality detection method, system, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant