CN110866949A - Center point positioning method and device, electronic equipment and storage medium - Google Patents

Center point positioning method and device, electronic equipment and storage medium

Info

Publication number
CN110866949A
CN110866949A (application CN201911125424.XA)
Authority
CN
China
Prior art keywords
target product
sub
image
corner
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911125424.XA
Other languages
Chinese (zh)
Inventor
周俊杰
陈招东
杜兵
林雪婷
冯英俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Lyric Robot Automation Co Ltd
Original Assignee
Guangdong Lyric Robot Intelligent Automation Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Lyric Robot Intelligent Automation Co Ltd filed Critical Guangdong Lyric Robot Intelligent Automation Co Ltd
Priority to CN201911125424.XA priority Critical patent/CN110866949A/en
Publication of CN110866949A publication Critical patent/CN110866949A/en

Classifications

    All classifications fall under G (Physics), G06 (Computing; Calculating or Counting), in the G06T (Image data processing or generation) and G06V (Image or video recognition or understanding) classes:

    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T 7/0004 Industrial image inspection
    • G06T 7/11 Region-based segmentation
    • G06T 7/13 Edge detection
    • G06T 7/136 Segmentation; edge detection involving thresholding
    • G06T 7/194 Segmentation; edge detection involving foreground-background segmentation
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06T 2207/10004 Still image; photographic image
    • G06T 2207/30108 Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide a center point positioning method and device, an electronic device, and a storage medium, wherein the method comprises the following steps: acquiring a target product image; determining a first center point of the target product according to the gray data of the target product image; segmenting the target product image according to the position of the first center point to obtain a plurality of sub-images, wherein each sub-image in the plurality of sub-images comprises a corner position area of the target product; determining the corner position of the target product in each sub-image according to the corner position area it contains; and determining a second center point position of the target product according to the corner positions in the plurality of sub-images. The problem in the prior art that the center point of a product is difficult to determine can thereby be solved.

Description

Center point positioning method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of product processing, in particular to a center point positioning method and device, electronic equipment and a storage medium.
Background
In the production of products, a product often needs to be moved to a designated position to meet various practical requirements. For example, when changing over between product models, the product needs to be grasped and moved to a designated position, and the same applies when performing alignment calibration on workpiece products.
However, in existing product processing flows, it is often difficult for a machine to correctly grasp a product and place it in the proper position, because the center point of the product is difficult to determine.
Disclosure of Invention
An object of the embodiments of the present application is to provide a center point positioning method, an apparatus, an electronic device, and a storage medium, so as to solve the problem in the prior art that it is difficult to determine a center point of a product.
In a first aspect, an embodiment provides a center point positioning method, where the method includes:
acquiring a target product image;
determining a first central point of the target product according to the gray data of the target product image;
segmenting the target product image according to the position of the first center point to obtain a plurality of sub-images, wherein each sub-image in the plurality of sub-images comprises a corner position area of the target product;
determining the corner position of the target product in each sub-image according to the corner position area contained in each sub-image;
and determining the position of a second central point of the target product according to the positions of the corner points in the plurality of sub-images.
In the method, the whole processing procedure is based on the acquired image, so the size of the real product need not be considered: whatever its size, a second center point position suited to the current target product can be determined from the actually acquired image of the target product, so that the real product corresponding to the current target product image can be grasped effectively and moved to the proper position. In addition, the method determines the more accurate second center point position step by step in a progressive, layer-by-layer manner; compared with repeated manual debugging, this center point positioning method has a high degree of automation and high processing efficiency.
In an optional embodiment, the determining a first central point of the target product according to the gray data of the target product image includes:
extracting the outline of the target product according to the set gray threshold interval and the gray data of the target product image;
and determining a first central point of the target product according to the contour of the target product.
Through this implementation, the first center point serving as a temporary center point can be determined quickly, which facilitates segmenting the target product image into sub-images based on the determined first center point. This image processing approach is simple and easy to implement.
In an optional embodiment, the segmenting the target product image according to the position of the first central point to obtain a plurality of sub-images includes:
and segmenting the target product image according to the coordinate of the first central point in a reference coordinate system to obtain a plurality of sub-images.
Through this implementation, a larger target product image can be quickly divided into a plurality of sub-images, and each individual sub-image can be processed subsequently, which reduces the amount of manual participation and saves the time required for manual proofreading and debugging.
In an optional embodiment, before the determining the corner position of the target product in each sub-image according to the corner position region included in each sub-image, the method further includes:
and matching each sub-image through a preset template to determine a corner position area in each sub-image.
Through this implementation, the approximate positions of all corner points of the target product can be obtained quickly.
In an optional embodiment, the determining the corner position of the target product in each sub-image according to the corner position region included in each sub-image includes:
for the corner position area contained in each sub-image, acquiring a region of interest corresponding to the corner position area, and searching for a mark point according to the region of interest;
and determining the corner position of the target product in the sub-image according to the plurality of mark points found in the sub-image.
Through this implementation, even if the product edge in the real product image does not exactly form a right angle, a corner position better matching the current product can be determined, so the method has universality and compatibility.
In an optional embodiment, the determining the corner position of the target product in the sub-image according to the plurality of found mark points in the sub-image includes:
performing a fitting calculation on the plurality of mark points found in the sub-image to obtain two edges of the target product;
and taking the intersection point position corresponding to the two edges of the target product as the corner point position of the target product.
Through this implementation, a more accurate corner position can be determined by fitting, so that a center point position better matching the real product can be determined.
In an optional embodiment, the determining a second center point position of the target product according to the corner point positions in the plurality of sub-images includes:
determining two diagonal lines of the target product according to the corner positions in the plurality of sub-images;
and taking the intersection point position of the two diagonal lines as the second central point position of the target product.
Through this implementation, the determined second center point position is closer to that of the real product, which helps the executing equipment grasp the target product effectively and can improve product processing efficiency.
In a second aspect, an embodiment provides a center point locating device, the device comprising:
the acquisition module is used for acquiring a target product image;
the calculation module is used for determining a first central point of the target product according to the gray data of the target product image;
the segmentation module is used for segmenting the target product image according to the position of the first center point to obtain a plurality of sub-images, wherein each sub-image in the plurality of sub-images comprises a corner position area of the target product;
the computing module is further used for determining the corner position of the target product in each sub-image according to the corner position area contained in each sub-image;
the calculation module is further configured to determine a second center point position of the target product according to the corner point positions in the plurality of sub-images.
Through the device, the method provided in the first aspect can be executed, so that a relatively accurate center point position can be determined quickly and the product can be grasped effectively.
In a third aspect, an embodiment provides an electronic device, including:
a memory;
a processor;
wherein the memory stores a computer program executable by the processor, and the computer program, when executed by the processor, performs the method of the first aspect.
In a fourth aspect, an embodiment provides a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method of the first aspect is performed.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting of the scope; those skilled in the art can obtain other related drawings from them without inventive effort.
Fig. 1 is a flowchart of a center point positioning method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of a cell product in an example provided by the embodiment of the present application.
Fig. 3 is a schematic diagram of a target product image in an example provided by an embodiment of the present application.
Fig. 4 is a schematic diagram of a sub-image of a cell in an example provided by the embodiment of the present application.
Fig. 5 is a schematic diagram of another sub-image about a cell in an example provided by the embodiment of the present application.
Fig. 6 is a schematic diagram of corner positions in an example provided by the embodiment of the present application.
Fig. 7 is a schematic diagram of a second center point in an example provided by an embodiment of the present application.
Fig. 8 is a functional block diagram of a center point positioning device according to an embodiment of the present disclosure.
Fig. 9 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a flowchart illustrating a center point positioning method according to an embodiment of the present disclosure. The center point positioning method can be applied to an electronic device with computing capability, such as a computer or an industrial personal computer. The method can be used to determine the center point position of a product; once that position is determined, the product can be grasped effectively and accurately moved to a proper position, which facilitates subsequent model changeover and, during processing, alignment calibration of the workpiece based on the determined center point. Here, model changeover refers to switching production to a different model of the product.
As shown in FIG. 1, the center point positioning method includes steps S11-S15.
Step S11: and acquiring a target product image.
The target product image comprises a background area and a product area. The target product image may be obtained by shooting a real product with an image acquisition device (e.g., a camera). During shooting, the product is lit from behind by a backlight source; this lighting scheme increases the gray difference between the product and the background, so a larger gray difference exists between the background area and the product area in the target product image. The greater the gray difference, the easier it is to extract product features.
The electronic device can receive the target product image sent by the image acquisition device and also can receive the target product image transmitted or imported by other devices.
As one implementation, the target product corresponding to the product area may have a substantially rectangular shape, such as a rectangle, a square, or a rounded rectangle.
In one example, the target product is a cell as shown in fig. 2. A cell is an electrochemical unit having a positive electrode and a negative electrode; it is generally not used directly, but serves as the power storage part of a rechargeable battery, which typically comprises a protection circuit board, a cell, and a shell. After the center point of the cell is determined, the cell can be grasped and moved, based on that center point, to a designated position (for example, onto an aluminum film, which may carry marks used for positioning) for subsequent processing. In other examples, the target product may be a substantially cylindrical product.
After the target product image including the target product is obtained, S12 may be performed.
Step S12: and determining a first central point of the target product according to the gray data of the target product image.
The product area containing the target product can be determined from the gray data of all areas in the target product image, and the center point of the product area is taken as the first center point of the target product, which can serve as a temporary center point. The target product image can then be rapidly segmented into regions at this first center point.
After the first center point is determined, S13 may be performed.
Step S13: And segmenting the target product image according to the position of the first center point to obtain a plurality of sub-images, wherein each sub-image in the plurality of sub-images comprises a corner position area of the target product.
The number of sub-images can be determined according to the number of corner points required for the actual product, so that each sub-image contains a corner point of the target product. For example, for the cell shown in fig. 2, the target product image containing the cell may be divided into four sub-images, so that each sub-image has at least one corner position area of the cell.
After obtaining the plurality of sub-images, S14 may be performed.
Step S14: and determining the corner position of the target product in each sub-image according to the corner position area contained in each sub-image.
Since each sub-image has a corner position area, each corner position area contains one corner. As an implementation, a preliminary corner position may be determined based on the corner position area, and a more accurate corner position may then be determined from the preliminary one, so that corner positions of increasing accuracy are determined step by step in a progressive, layer-by-layer manner.
After the processing of S14 has been performed for each sub-image obtained by segmenting the target product image, all corner positions of the plurality of sub-images are obtained, and then S15 is performed.
Step S15: and determining the position of a second central point of the target product according to the positions of the corner points in the plurality of sub-images.
By the method, a temporary first center point position can be determined from the acquired target product image, the image is segmented into a plurality of sub-images based on that position, a corner position is determined for each of the plurality of sub-images, and finally a more accurate second center point position is determined from the corner positions of the plurality of sub-images. Because the whole process operates on the image, the size of the real product need not be considered: whatever its size, a second center point position suited to the current target product can be determined from the actually acquired image, which solves the problem of product size compatibility and allows the real product corresponding to the current target product image to be grasped effectively. In addition, the method determines the more accurate second center point position step by step in a progressive, layer-by-layer manner; compared with repeated manual debugging, this center point positioning method has a high degree of automation and high processing efficiency.
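The geometric relationship behind step S15 (taking the second center point as the intersection of the two diagonals formed by the four corner positions, per the optional embodiment in the disclosure) can be sketched in plain Python. The function and the sample corner coordinates below are illustrative assumptions, not code from the application:

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through p3, p4."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        raise ValueError("diagonals are parallel")
    a = x1 * y2 - y1 * x2  # cross term for the first diagonal
    b = x3 * y4 - y3 * x4  # cross term for the second diagonal
    x = (a * (x3 - x4) - (x1 - x2) * b) / denom
    y = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return x, y

# Corner positions ordered: top-left, top-right, bottom-right, bottom-left.
corners = [(0.0, 0.0), (10.0, 0.0), (10.0, 6.0), (0.0, 6.0)]
second_center = line_intersection(corners[0], corners[2], corners[1], corners[3])
```

For a perfect rectangle this reduces to the bounding-box midpoint, but the diagonal intersection remains meaningful when the fitted corners form a slightly skewed quadrilateral.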
It should be noted that any position determined from the acquired target product image may be represented as a coordinate in a reference coordinate system, and the executing mechanism that grasps the real product can perform the grasping operation based on those coordinates. The origin and coordinate axes of the reference coordinate system are not limited to any absolute position and can be set according to actual needs.
Optionally, the above S12 may include sub-steps S121-S122.
S121: and extracting the outline of the target product according to the set gray threshold interval and the gray data of the target product image.
S122: and determining a first central point of the target product according to the contour of the target product.
As an implementation, a blob analysis method may be adopted to extract the outline of the target product. Blob analysis is an image processing method for analyzing closed shapes. In the blob analysis process, product pixels can be extracted from the target product image according to the set gray threshold interval, thereby extracting the contour features of the target product and hence its contour.
In one example, the pixel gray scale of an image may be divided into 256 gray levels (there may be more or fewer), represented by gray values 0-255. Pixels meeting the gray condition are screened from the target product image through the set gray threshold interval: for example, by counting the gray values of all pixels of the target product image, the pixels whose gray values fall within the set gray threshold interval can be identified as the pixels meeting the gray condition. The background area and the product area in the target product image can thus be distinguished based on their gray difference, and the contour feature of the target product extracted.
After graying and binarization of the target product image, pixels with gray values greater than a critical gray value (which can be set according to actual needs) are set to a maximum value, and pixels with gray values below it are set to a minimum value, so that the gray image obtained after graying is converted into a binary image. Each pixel in the target product image is thereby designated either a target pixel or a background pixel (e.g., target pixels are assigned the value 1 and background pixels the value 0). After the image is segmented into target pixels and background pixels, the contour of the target product can be extracted.
After the contour of the target product is extracted, the maximum and minimum coordinates of the contour along each coordinate axis of the reference coordinate system can be determined, and the first center point position of the target product determined from the midpoint between the maximum and minimum coordinates on each axis.
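The two operations just described (screening pixels by a gray threshold interval, then taking the midpoint of the extreme coordinates along each axis) can be illustrated with a minimal pure-Python sketch; the function name and the toy 6x6 image are assumptions for illustration only:

```python
def first_center_point(gray, lo, hi):
    """Pixels whose gray value falls in [lo, hi] are treated as product
    pixels; the first (temporary) center point is the midpoint of their
    bounding box along each coordinate axis."""
    xs, ys = [], []
    for y, row in enumerate(gray):
        for x, v in enumerate(row):
            if lo <= v <= hi:
                xs.append(x)
                ys.append(y)
    if not xs:
        raise ValueError("no pixel falls inside the gray threshold interval")
    return (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2

# A 6x6 image: bright background (200), darker product block (value 50).
image = [[200] * 6 for _ in range(6)]
for y in range(1, 5):
    for x in range(2, 6):
        image[y][x] = 50
cx, cy = first_center_point(image, 0, 100)
```

In production this screening would typically be done with blob-analysis tooling such as HALCON's threshold and shape-selection operators or OpenCV's `cv2.threshold` and `cv2.findContours` rather than a pixel loop.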
Taking the cell shown in fig. 2 and the target product image shown in fig. 3 as examples, the background area A and the product area B of the cell may first be determined in this way, and the outline of the cell determined after blob analysis. After the cell outline is determined, a center point M can be determined from the leftmost, rightmost, uppermost, and lowermost coordinate positions of the cell and used as the first center point position of the target product.
Through this implementation, image segmentation is performed based on the gray data of the target product image to distinguish the background area from the product area, features are extracted according to the gray difference between them to obtain the contour feature of the target product, and finally the center point of the product contour is determined based on the contour of the target product. This image processing approach is simple and easy to implement; the first center point serving as a temporary center point can be determined quickly, which facilitates segmenting the target product image into a plurality of sub-images based on the determined first center point.
Optionally, in implementation, a person skilled in the art may adjust the set gray threshold interval or the critical gray value so that the extracted contour includes the complete target product. This step may be omitted in some embodiments because, in some cases, the center point can be determined even without a complete product image, as long as the important feature areas of the target product are available. The important feature area of the target product refers to its main body, which contains all the corner points used to determine the second center point position. Still taking the cell shown in fig. 2 as an example, even if the two uppermost small rectangular regions in fig. 2 are absent from the acquired target product image, an effective center point can still be determined.
Alternatively, the S13 may include: and segmenting the target product image according to the coordinate of the first central point in the reference coordinate system to obtain a plurality of sub-images.
Taking fig. 3 as an example, the arrowed X and Y directions in fig. 3 are the coordinate axis directions of the reference coordinate system. If the coordinates of the first center point M are (x1, y1), the image can be divided into two parts at x1 and again into two parts at y1, so that four sub-images are obtained after the two divisions. Each of the four sub-images contains a corner position area of the target product.
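The two divisions described above can be sketched as follows; this is a hedged pure-Python illustration that assumes the image is a list of pixel rows and that (cx, cy) are integer pixel coordinates of the first center point M:

```python
def split_at_center(img, cx, cy):
    """Split an image (list of pixel rows) into four quadrant sub-images
    at the integer coordinate (cx, cy) of the first center point."""
    top, bottom = img[:cy], img[cy:]
    return [
        [row[:cx] for row in top],     # upper-left sub-image
        [row[cx:] for row in top],     # upper-right sub-image
        [row[:cx] for row in bottom],  # lower-left sub-image
        [row[cx:] for row in bottom],  # lower-right sub-image
    ]

# A 4x4 toy image whose pixel value encodes its (row, column) position.
img = [[10 * r + c for c in range(4)] for r in range(4)]
quads = split_at_center(img, 2, 2)
```

Each quadrant then carries one corner position area of the product and can be processed independently in the later steps.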
Through this implementation, a larger target product image can be quickly divided into a plurality of sub-images, and when the center point is further determined subsequently, the corner point of each sub-image can be determined quickly by taking each sub-image as the processing object. The implementations provided in the embodiments of the present application can reduce the amount of manual participation and save the time required for manual proofreading and debugging.
Optionally, before performing the above S14, the center point positioning method may further include the sub-steps of: and matching each sub-image through a preset template to determine the corner position area in each sub-image.
For a cell product having a substantially rectangular shape, the templates for the four sub-images may each be shaped like a right-angle corner (an "Γ"-like shape in one of four orientations), wherein each sub-image corresponds to one template and one template matches one corner.
Taking fig. 4 as an example, for the first sub-image C at the upper-left corner, a template having a "Γ" shape is matched against the first sub-image C to identify the approximately right-angled region of the product and obtain the corner position area D in the first sub-image C; at this point a preliminary corner position is obtained.
When a template matching algorithm that only supports matching at a fixed orientation is used, one template can only match one sub-image (i.e., the "Γ"-shaped template matches only the first sub-image); when a template matching algorithm that allows angular differences (i.e., allows rotation) is used, one template can match more sub-images.
Through this implementation, the corner position area in each sub-image obtained by segmentation can be matched and determined separately, so the approximate positions of all corner points of the target product can be obtained quickly.
Optionally, to avoid multiple corner position areas appearing in the same sub-image, the shape of the template may be set so that only one corner position area is obtained per sub-image. Taking the cell in the form of fig. 4 as an example, to avoid the situation shown in fig. 5, the lengths of both sides of the "Γ"-shaped template may be set in advance to be greater than a preset value, where the preset value is determined according to the size of minor features of the target product (for example, "D" in fig. 5). Unsatisfactory positions can then be filtered out through the template size: for example, "D'" in fig. 5 can be filtered out, so that the area corresponding to "D" is taken as the corner position area of sub-image C'.
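The size-based filtering described above can be sketched as a simple predicate over the candidate matches. The dictionary keys and numeric values below are hypothetical, chosen only to illustrate discarding a match on a minor feature of the product:

```python
def filter_corner_candidates(candidates, preset_len):
    """Keep only candidate corner areas whose two measured edge lengths
    both exceed the preset value, discarding matches on minor features."""
    return [
        c for c in candidates
        if c["edge_a"] > preset_len and c["edge_b"] > preset_len
    ]

# Hypothetical matches: "D" is the true corner area, "D'" a small feature.
matches = [
    {"name": "D", "edge_a": 120.0, "edge_b": 80.0},
    {"name": "D'", "edge_a": 15.0, "edge_b": 12.0},
]
kept = filter_corner_candidates(matches, preset_len=30.0)
```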
The preliminary corner position obtained in the above matching process is the one matched by the template. In practice, however, the product edge in the real product image may not conform exactly to a right angle; for example, the edge angle of the real product may be eighty-odd or ninety-odd degrees, while the template angle used in matching is exactly ninety degrees. Therefore, to determine corner positions that better correspond to the actual product, the above-mentioned S14 of the center point positioning method can be implemented by the following sub-steps S141 to S142.
S141: and aiming at the corner position area contained in each sub-image, acquiring an interested area corresponding to the corner position area, and searching for a mark point according to the interested area.
The Region of Interest (ROI) refers to a region to be processed that is delineated from the image under processing by a box, circle, ellipse, irregular polygon, or the like in machine vision and image processing. Machine vision software such as Halcon, OpenCV, and Matlab commonly provides operators and functions to obtain a region of interest for subsequent image processing. By grabbing N points in the region of interest (N can be set according to actual requirements, e.g., 3, 4, or 10), the N grabbed points can be fitted into a line segment by the least squares method.
As an implementation of S141, a plurality of regions of interest may be acquired along each of the two sides corresponding to the corner position region of each sub-image. Each region of interest captures one marker point, which may be the point with the largest contrast difference within that region of interest. In this way, a plurality of marker points exhibiting contrast differences can be found from the two directions respectively.
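As a sketch of how "the point with the largest contrast difference" might be grabbed inside one region of interest, the marker can be taken at the strongest gray-level transition along the ROI's scan direction. The 1-D gray profile below is invented, and the patent does not prescribe a specific operator for this step.

```python
import numpy as np

def grab_marker_point(profile):
    """Return the index of the strongest gray-level transition, i.e. the
    largest absolute difference between neighboring pixels."""
    grad = np.abs(np.diff(profile))
    return int(np.argmax(grad))  # index of the pixel just before the jump

# Toy ROI scan line crossing a dark-to-bright product edge between
# indices 4 and 5.
profile = np.array([10, 12, 11, 13, 14, 200, 205, 203], dtype=float)
print(grab_marker_point(profile))  # 4
```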
S142: and calculating the corner position of the target product in the sub-image according to the plurality of found mark points in the sub-image.
Based on the distribution condition of the plurality of mark points, the corner point position of the target product in the sub-image can be calculated.
In this implementation, even if the product edge in the real product image does not conform exactly to a right angle, a corner position that better matches the current product can be determined through the above S141-S142, and the preliminary corner position matched by the template can be updated.
Optionally, S142 may include sub-steps S1421-S1422.
S1421: and performing fitting calculation on the plurality of mark points found in the subimages to obtain two edges of the target product.
S1422: and taking the intersection point position corresponding to the two edges of the target product as the corner point position of the target product.
The found marker points exhibiting contrast differences can be fitted by the least squares method, so that the two edges of the target product are obtained; the two fitted edges are closer to the actual state of the real product. The intersection point of the two edges is obtained by intersecting them, and the intersection position corresponding to the two edges of the target product is taken as the corner position of the target product, thereby obtaining an accurate corner position.
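Sub-steps S1421-S1422 can be sketched as follows. The marker-point coordinates are invented, and the near-horizontal/near-vertical parameterization is one possible choice, not one mandated by the patent.

```python
import numpy as np

def fit_corner(top_points, left_points):
    """Least-squares fit the two product edges and return their
    intersection as the corner point."""
    # near-horizontal edge: y = a1*x + b1
    a1, b1 = np.polyfit(top_points[:, 0], top_points[:, 1], 1)
    # near-vertical edge: x = a2*y + b2 (axes swapped to avoid an
    # infinite slope)
    a2, b2 = np.polyfit(left_points[:, 1], left_points[:, 0], 1)
    # solve y = a1*(a2*y + b2) + b1 for the intersection
    y = (a1 * b2 + b1) / (1.0 - a1 * a2)
    x = a2 * y + b2
    return float(x), float(y)

# Hypothetical marker points: top edge near y = 2, left edge near x = 3.
top = np.array([[4.0, 2.0], [6.0, 2.0], [8.0, 2.0]])
left = np.array([[3.0, 4.0], [3.0, 6.0], [3.0, 8.0]])
print(fit_corner(top, left))  # approximately (3.0, 2.0)
```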
In one example, for the corner position region D shown in fig. 6, a plurality of regions of interest are acquired from two directions respectively, and one marker point is captured in each region of interest, the captured point being the point with the largest contrast difference in that region of interest. If several points in one region of interest share the maximum contrast, the one at the central position can be selected as the marker point. As shown in fig. 6, the regions of interest acquired from the first direction include g1, g2, g3, g4, ..., gn, and the regions of interest acquired from the second direction include h1, h2, h3, h4, h5, ..., hn. For each region of interest acquired in the first and second directions, one marker point is captured. Based on the marker points of all regions of interest in the first direction, one edge of the product can be determined by fitting; based on the marker points of all regions of interest in the second direction, another edge of the product can likewise be determined by fitting. The two fitted edges are then intersected to determine the intersection position G corresponding to the two edges, and the intersection coordinates corresponding to position G are taken as the corner coordinates of the target product.
Through this implementation, more accurate corner positions can be determined by fitting, so that a center point position that better matches the real product can be determined.
Optionally, after the marker points are grabbed, each marker point may be checked, and marker points whose angle deviation is too large may be discarded (for example, the marker point corresponding to gn-1 in fig. 6 may be discarded). However, for products whose original shape is a rounded rectangle, a certain angle deviation among some marker points can be tolerated; only the conditions for filtering the marker points need to be changed (for example, parameters such as the angle threshold and the distance threshold used to filter the marker points).
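The optional filtering can be sketched as an iterative trim against a distance threshold (an angle threshold would be handled analogously); the points and the threshold below are invented for illustration.

```python
import numpy as np

def filter_marker_points(points, dist_thresh=1.0):
    """Repeatedly fit a line through the remaining points and discard the
    single worst point until every residual distance is within
    dist_thresh."""
    pts = points.copy()
    while len(pts) > 2:
        a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)   # y = a*x + b
        resid = np.abs(pts[:, 1] - (a * pts[:, 0] + b)) / np.sqrt(1 + a * a)
        worst = int(np.argmax(resid))
        if resid[worst] <= dist_thresh:
            break
        pts = np.delete(pts, worst, axis=0)
    return pts

# Four well-behaved edge points plus one outlier (like "gn-1" in fig. 6).
pts = np.array([[0.0, 2.0], [1.0, 2.1], [2.0, 1.9], [3.0, 2.0], [4.0, 8.0]])
kept = filter_marker_points(pts, dist_thresh=1.0)
print(len(kept))  # 4 -> the outlier at (4, 8) is discarded
```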
In other embodiments, multiple marker points may be captured in each region of interest; for example, the three points with the largest contrast differences in each region of interest may be captured as its marker points.
Optionally, after determining the positions of the corner points in the respective sub-images, S15 may include sub-steps S151-S152.
S151: and determining two diagonal lines of the target product according to the corner positions in the plurality of sub-images.
S152: and taking the intersection point position of the two diagonal lines as the second central point position of the target product.
The four corner positions determine two diagonals and a single intersection point.
In one example, after obtaining the four corner positions corresponding to the four sub-images of the battery cell product, a second center point position P shown in fig. 7 may be determined.
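Sub-steps S151-S152 reduce to intersecting the two diagonals. A minimal sketch, with hypothetical corner coordinates of a slightly tilted rectangular product:

```python
import numpy as np

def center_from_corners(tl, tr, br, bl):
    """Return the intersection of the two diagonals (tl-br and tr-bl)
    as the second center point."""
    tl, tr, br, bl = (np.asarray(p, dtype=float) for p in (tl, tr, br, bl))
    # diagonal 1: tl + s*(br - tl); diagonal 2: tr + t*(bl - tr)
    A = np.column_stack([br - tl, -(bl - tr)])
    s, t = np.linalg.solve(A, tr - tl)
    x, y = tl + s * (br - tl)
    return float(x), float(y)

print(center_from_corners((0, 0), (4, 1), (5, 5), (1, 4)))  # (2.5, 2.5)
```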
Through this implementation, the center point position of the real target product corresponding to the current target product image can be determined, so that the execution device can grab the target product effectively. Taking the battery cell, a product with relatively high process requirements, as an example: after the center point of the battery cell is determined, the battery cell can be grabbed to a designated position based on the determined center point; the designated position may hold a film for wrapping the battery cell, which facilitates subsequent processing of the battery cell.
It should be noted that, in the above method, no matter how the specification and size of the product to be grabbed change, as long as the product is within the shooting field of view, the target product image can be acquired, and the second center point position corresponding to the current product can be found quickly and accurately based on the center point positioning method, which facilitates effective grabbing of the product by the execution device.
Based on the same inventive concept, please refer to fig. 8, the embodiment of the present application further provides a center point positioning apparatus 200. The center point locating device 200 may be stored in a storage medium, for example, in a memory of an electronic device that performs the aforementioned center point locating method.
As shown in fig. 8, the center point locating device 200 includes: the device comprises an acquisition module 201, a calculation module 202 and a segmentation module 203.
The acquiring module 201 is configured to acquire an image of a target product.
The calculating module 202 is configured to determine a first center point of the target product according to the gray data of the target product image.
The dividing module 203 is configured to divide the target product image according to the position of the first center point to obtain a plurality of sub-images, where each sub-image in the plurality of sub-images includes a corner position area of the target product.
The calculating module 202 is further configured to determine corner positions of the target product in each sub-image according to the corner position regions included in each sub-image.
The calculating module 202 is further configured to determine a second center point position of the target product according to the corner point positions in the plurality of sub-images.
The center point positioning method can be executed by the center point positioning device 200, so that a relatively accurate center point position can be quickly determined, and the product can be effectively grabbed.
Optionally, the calculation module 202 may further be configured to: extract the contour of the target product according to the set gray threshold interval and the gray data of the target product image; and determine the first center point of the target product according to the contour of the target product.
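A toy sketch of this module's behavior; the gray threshold interval and image are invented, and the patent does not fix how the centroid is computed (the mean of the thresholded region's pixel coordinates, i.e. its image moments, is one common choice):

```python
import numpy as np

def first_center_point(gray, lo, hi):
    """Keep pixels whose gray value lies in [lo, hi] and return the
    centroid (x, y) of that region as the first center point."""
    ys, xs = np.nonzero((gray >= lo) & (gray <= hi))
    return float(xs.mean()), float(ys.mean())

# Toy image: a bright product block (gray value 200) on a dark background.
gray = np.zeros((10, 10), dtype=np.uint8)
gray[2:8, 3:9] = 200
print(first_center_point(gray, 150, 255))  # (5.5, 4.5)
```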
Optionally, the segmentation module 203 may further be configured to: segment the target product image according to the coordinates of the first center point in a reference coordinate system to obtain the plurality of sub-images.
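A minimal sketch of quadrant segmentation at the first center point (array sizes invented); each quadrant sub-image is expected to contain one corner of the product:

```python
import numpy as np

def split_at_center(image, cx, cy):
    """Split the image at (cx, cy) into four quadrant sub-images:
    upper-left, upper-right, lower-left, lower-right."""
    return (image[:cy, :cx], image[:cy, cx:],
            image[cy:, :cx], image[cy:, cx:])

img = np.arange(36).reshape(6, 6)        # toy 6x6 "image"
subs = split_at_center(img, cx=4, cy=3)  # off-center first center point
print([s.shape for s in subs])  # [(3, 4), (3, 2), (3, 4), (3, 2)]
```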
Optionally, the apparatus may further include a matching module, where the matching module is configured to match each sub-image with a preset template to determine a corner position area in each sub-image.
Optionally, the calculation module 202 may further be configured to: for the corner position area contained in each sub-image, acquire regions of interest corresponding to the corner position area, and search for marker points according to the regions of interest; and determine the corner position of the target product in the sub-image according to the plurality of marker points found in the sub-image.
Optionally, the calculation module 202 may further be configured to: perform fitting calculation on the plurality of marker points found in the sub-image to obtain two edges of the target product; and take the intersection position corresponding to the two edges of the target product as the corner position of the target product.
Optionally, the calculation module 202 may further be configured to: determine two diagonal lines of the target product according to the corner positions in the plurality of sub-images; and take the intersection position of the two diagonal lines as the second center point position of the target product.
For other details of the center point positioning apparatus 200, please refer to the related description of the center point positioning method, which is not repeated herein.
Based on the same inventive concept, please refer to fig. 9, an embodiment of the present application further provides an electronic device 300, where the electronic device 300 is configured to perform the foregoing center point positioning method.
As shown in fig. 9, the electronic device 300 may include: memory 310, processor 320, communications component 330, display component 340. The memory 310, the processor 320, the communication component 330, and the display component 340 are directly or indirectly coupled. The electronic device 300 may be a computer, an industrial personal computer, or the like having processing capabilities.
The Memory 310 is a storage medium, and may be a Random Access Memory (RAM), a Read Only Memory (ROM), or the like. The processor 320 has an operation Processing capability, and may be a Central Processing Unit (CPU), a digital signal processor, an application specific integrated circuit, a field programmable gate array, or the like. The memory 310 stores a computer program executable by the processor 320, and the computer program is executed by the processor 320 to perform the center point positioning method.
The electronic device 300 can establish a wired or wireless connection with other devices through the communication component 330. Such a device may be an image acquisition device, from which the electronic device 300 acquires the target product image through the communication component 330; it may also be an execution device for grabbing the product, to which the electronic device 300 sends the center point position data of the current product through the communication component 330.
The display component 340 may be a liquid crystal display or a touch display, and may be configured to provide an operation interface, and may also be configured to display an intermediate result in the center point positioning method, for example, to display an image of the target product, a position of the second center point, and the like.
It is understood that the electronic device 300 shown in fig. 9 is merely illustrative and that the electronic device 300 may have more or fewer components when implemented.
Based on the same inventive concept, the embodiment of the present application further provides a storage medium, where a computer program is stored on the storage medium, and the computer program is executed by the processor 320 to perform the foregoing center point positioning method. The storage medium may be any available medium that can be accessed by a computer, such as a floppy disk, tape, hard disk, DVD, Solid State Disk (SSD), etc.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit is merely a division of one logic function, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the connections discussed above may be indirect couplings or communication connections between devices or units through some communication interfaces, and may be electrical, mechanical or other forms.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed to a plurality of places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above embodiments are merely examples of the present application and are not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A center point positioning method, the method comprising:
acquiring a target product image;
determining a first central point of the target product according to the gray data of the target product image;
according to the position of the first center point, segmenting the target product image to obtain a plurality of sub-images, wherein each sub-image in the plurality of sub-images comprises an angular point position area of a target product;
determining the corner position of the target product in each sub-image according to the corner position area contained in each sub-image;
and determining the position of a second central point of the target product according to the positions of the corner points in the plurality of sub-images.
2. The method of claim 1, wherein determining the first center point of the target product from the gray data of the target product image comprises:
extracting the outline of the target product according to the set gray threshold interval and the gray data of the target product image;
and determining a first central point of the target product according to the contour of the target product.
3. The method of claim 1, wherein the segmenting the target product image according to the position of the first center point into a plurality of sub-images comprises:
and segmenting the target product image according to the coordinate of the first central point in a reference coordinate system to obtain a plurality of sub-images.
4. The method according to claim 1, wherein before said determining the corner positions of the target product in each sub-image from the corner position areas contained in each sub-image, the method further comprises:
and matching each sub-image through a preset template to determine a corner position area in each sub-image.
5. The method according to claim 1, wherein determining the corner position of the target product in each sub-image according to the corner position area included in each sub-image comprises:
aiming at the corner position area contained in each sub-image, acquiring an interested area corresponding to the corner position area, and searching for a mark point according to the interested area;
and determining the corner position of the target product in the sub-image according to the plurality of the mark points searched in the sub-image.
6. The method according to claim 5, wherein the determining the corner position of the target product in the sub-image according to the plurality of found marker points in the sub-image comprises:
performing fitting calculation on the plurality of found mark points in the subimage to obtain two edges of the target product;
and taking the intersection point position corresponding to the two edges of the target product as the corner point position of the target product.
7. The method of claim 1, wherein determining a second center point position of the target product from the corner point positions in the plurality of sub-images comprises:
determining two diagonal lines of the target product according to the corner positions in the plurality of sub-images;
and taking the intersection point position of the two diagonal lines as the second central point position of the target product.
8. A center point locating device, the device comprising:
the acquisition module is used for acquiring a target product image;
the calculation module is used for determining a first central point of the target product according to the gray data of the target product image;
the segmentation module is used for segmenting the target product image according to the position of the first central point to obtain a plurality of sub-images, and each sub-image in the plurality of sub-images comprises an angular point position area of a target product;
the computing module is further used for determining the corner position of the target product in each sub-image according to the corner position area contained in each sub-image;
the calculation module is further configured to determine a second center point position of the target product according to the corner point positions in the plurality of sub-images.
9. An electronic device, characterized in that the electronic device comprises:
a memory;
a processor;
the memory stores a computer program executable by the processor, the computer program, when executed by the processor, performing the method of any of claims 1-7.
10. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, performs the method of any one of claims 1-7.
CN201911125424.XA 2019-11-15 2019-11-15 Center point positioning method and device, electronic equipment and storage medium Pending CN110866949A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911125424.XA CN110866949A (en) 2019-11-15 2019-11-15 Center point positioning method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911125424.XA CN110866949A (en) 2019-11-15 2019-11-15 Center point positioning method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110866949A true CN110866949A (en) 2020-03-06

Family

ID=69654889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911125424.XA Pending CN110866949A (en) 2019-11-15 2019-11-15 Center point positioning method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110866949A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0533782A1 (en) * 1990-06-12 1993-03-31 University Of Florida Automated method for digital image quantitation
CN101839690A (en) * 2010-04-13 2010-09-22 河海大学常州校区 Visual inspection method for chip electronic component position error based on edge fitting
CN102999939A (en) * 2012-09-21 2013-03-27 魏益群 Coordinate acquisition device, real-time three-dimensional reconstruction system, real-time three-dimensional reconstruction method and three-dimensional interactive equipment
CN104981105A (en) * 2015-07-09 2015-10-14 广东工业大学 Detecting and error-correcting method capable of rapidly and accurately obtaining element center and deflection angle
CN105069799A (en) * 2015-08-13 2015-11-18 深圳市华汉伟业科技有限公司 Angular point positioning method and apparatus
CN106600653A (en) * 2016-12-30 2017-04-26 亿嘉和科技股份有限公司 Calibration method for optical center of zooming camera
CN108182383A (en) * 2017-12-07 2018-06-19 浙江大华技术股份有限公司 A kind of method and apparatus of vehicle window detection
CN108876842A (en) * 2018-04-20 2018-11-23 苏州大学 A kind of measurement method, system, equipment and the storage medium of sub-pixel edge angle
CN109002823A (en) * 2018-08-09 2018-12-14 歌尔科技有限公司 A kind of induction zone area determination method, device, equipment and readable storage medium storing program for executing

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111724444A (en) * 2020-06-16 2020-09-29 中国联合网络通信集团有限公司 Method and device for determining grabbing point of target object and grabbing system
CN111724444B (en) * 2020-06-16 2023-08-22 中国联合网络通信集团有限公司 Method, device and system for determining grabbing point of target object
CN111754576A (en) * 2020-06-30 2020-10-09 广东博智林机器人有限公司 Rack measuring system, image positioning method, electronic device and storage medium
CN111754576B (en) * 2020-06-30 2023-08-08 广东博智林机器人有限公司 Frame body measurement system, image positioning method, electronic equipment and storage medium
CN113989232A (en) * 2021-10-28 2022-01-28 广东利元亨智能装备股份有限公司 Battery cell defect detection method and device, electronic equipment and storage medium
CN113989232B (en) * 2021-10-28 2022-12-16 广东利元亨智能装备股份有限公司 Battery cell defect detection method and device, electronic equipment and storage medium
CN114579808A (en) * 2022-01-17 2022-06-03 深圳市慧视通科技股份有限公司 Method and device for indexing position of target and electronic equipment
CN114648542A (en) * 2022-03-11 2022-06-21 联宝(合肥)电子科技有限公司 Target object extraction method, device, equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN110866949A (en) Center point positioning method and device, electronic equipment and storage medium
CN109886928B (en) Target cell marking method, device, storage medium and terminal equipment
CN109961040B (en) Identity card area positioning method and device, computer equipment and storage medium
CN106780623A (en) A kind of robotic vision system quick calibrating method
CN112348765A (en) Data enhancement method and device, computer readable storage medium and terminal equipment
CN111307039A (en) Object length identification method and device, terminal equipment and storage medium
CN112017232A (en) Method, device and equipment for positioning circular pattern in image
CN101739556A (en) Method for automatically identifying number of steel billet
CN113780201B (en) Hand image processing method and device, equipment and medium
CN115880296B (en) Machine vision-based prefabricated part quality detection method and device
CN112017231A (en) Human body weight identification method and device based on monocular camera and storage medium
WO2019001164A1 (en) Optical filter concentricity measurement method and terminal device
CN110807807A (en) Monocular vision target positioning pattern, method, device and equipment
CN112613107A (en) Method and device for determining construction progress of tower project, storage medium and equipment
CN112991374A (en) Canny algorithm-based edge enhancement method, device, equipment and storage medium
CN115937203A (en) Visual detection method, device, equipment and medium based on template matching
CN110673607B (en) Feature point extraction method and device under dynamic scene and terminal equipment
CN110288040B (en) Image similarity judging method and device based on topology verification
CN113705564B (en) Pointer type instrument identification reading method
CN113172636B (en) Automatic hand-eye calibration method and device and storage medium
CN112800806B (en) Object pose detection tracking method and device, electronic equipment and storage medium
CN112418226A (en) Method and device for identifying opening and closing states of fisheyes
CN108564571B (en) Image area selection method and terminal equipment
CN111079752A (en) Method and device for identifying circuit breaker in infrared image and readable storage medium
CN115512381A (en) Text recognition method, text recognition device, text recognition equipment, storage medium and working machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200306