CN106530315B - Target extraction system and method for medium and small objects under full angle - Google Patents


Info

Publication number
CN106530315B
Authority
CN
China
Prior art keywords
image
detected
image sensor
directions
images
Prior art date
Legal status
Active
Application number
CN201611225981.5A
Other languages
Chinese (zh)
Other versions
CN106530315A (en)
Inventor
李云鹏 (Li Yunpeng)
石俊锋 (Shi Junfeng)
张福根 (Zhang Fugen)
Current Assignee
Changzhou Industrial Technology Research Institute of Zhejiang University
Original Assignee
Changzhou Industrial Technology Research Institute of Zhejiang University
Priority date
Filing date
Publication date
Application filed by Changzhou Industrial Technology Research Institute of Zhejiang University
Priority to CN201611225981.5A
Publication of CN106530315A
Application granted
Publication of CN106530315B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a target extraction system and method for small and medium-sized objects at full angle. The system comprises a plurality of stereoscopic vision image sensor modules arranged around the object to be measured to photograph it at full angle and send the images to an image processor module; the image processor module is adapted to perform target extraction on the object to be measured from the images so as to obtain a target image at full angle. By adopting a reasonable spatial layout of stereoscopic vision image sensor modules and light sources, the invention effectively realizes full-angle measurement of small and medium-sized objects; the number of modules and light sources can be increased or decreased according to the geometric characteristics of the object to be measured, so that target extraction can be carried out from whichever directions the object requires.

Description

Target extraction system and method for medium and small objects under full angle
Technical Field
The invention relates to the technical field of target extraction and three-dimensional measurement, and in particular to a system and method for extracting target texture and contour information of small and medium-sized objects at full angle.
Background
Optical non-contact three-dimensional measurement generally needs to cover the full angle range around the object to be measured (the full angle refers to the 4π solid angle centred on the object) in order to obtain its complete three-dimensional contour. Such measurement typically relies on image sensors, and for purposes such as noise reduction and faster computation the target to be measured must first be extracted from the image.
Methods for extracting the target from an image fall into three main classes by technical route: global, edge-based and region-based. Global extraction algorithms (e.g. grey-level thresholding) segment the image by a threshold derived from a global characteristic such as the grey-level histogram. Edge-based methods first extract boundaries with an edge extraction operator and then extract the target on the basis of those boundaries. Region-based algorithms (e.g. the watershed algorithm) perform target extraction from local features such as grey-level moments.
In practice these algorithms are easily affected by uncontrollable environmental factors such as illumination changes, shadows and complex backgrounds, which limits the robustness of target extraction. To extract the target quickly and accurately, active illumination is generally used as an aid to reduce the influence of such factors. According to the relative position of the light source and the image sensor, illumination is classified as bright field, dark field, and so on. Bright field illumination effectively highlights the surface texture of the object to be measured but is unfavourable for extracting its edge information; dark field illumination highlights the edge contour of the object but loses the texture information of its surface.
At present some three-dimensional measurement devices adopt dark field illumination to improve the robustness of target extraction in the image, such as foot contour measuring instruments, but such a device only extracts the contour of the target in a single direction. Other devices, in order to extract surface texture and edge contour information at full angle, place the object on a rotating stage and rotate the object or the measuring probe continuously; however, the rotating mechanism greatly increases the complexity of the device, and factors such as the rotation speed prevent real-time target extraction or three-dimensional measurement of the object.
Disclosure of Invention
The invention aims to provide a target extraction system and method that solve the technical problems of complex structure and low measurement efficiency in current full-angle target extraction.
In order to solve the above technical problem, the present invention provides a target extraction system comprising: a plurality of stereoscopic vision image sensor modules arranged around the object to be measured to photograph it at full angle and send the images to an image processor module; the image processor module is adapted to perform target extraction on the object to be measured from the images so as to obtain a target image at full angle.
Further, there are six stereoscopic vision image sensor modules, located in the six directions up, down, left, right, front and rear of the object to be measured. Each stereoscopic vision image sensor module comprises two image sensors forming a binocular pair, with a light source arranged between them. The image processor module is adapted to switch the light sources of the modules on or off in combination so as to capture illumination images of the object to be measured in the corresponding directions.
Further, the light sources of the stereoscopic vision image sensor modules are switched on or off in combination as follows:
each light source is lit in turn, and bright field illumination images of the object to be measured in all directions are captured in sequence;
the light sources in the left, right, front and rear directions are lit and those in the up and down directions are turned off, and dark field illumination images of the object in the up and down directions are captured by the upper and lower stereoscopic vision image sensor modules;
the light sources in the up, down, front and rear directions are lit and those in the left and right directions are turned off, and dark field illumination images in the left and right directions are captured by the left and right modules;
the light sources in the up, down, left and right directions are lit and those in the front and rear directions are turned off, and dark field illumination images in the front and rear directions are captured by the front and rear modules;
wherein each illumination image is sent to the image processor module.
Further, the image processor module is adapted to obtain a black-and-white mask image from the dark field illumination image and to fuse the bright field illumination image captured by the same image sensor with the mask image, obtaining an extracted image of the object to be measured that contains both texture and contour information; the extracted images acquired at all angles together give the target image at full angle.
Further, the image processor module is adapted to obtain the black-and-white mask image from the dark field illumination image as follows:
the dark field illumination image is preprocessed with an edge extraction operator to obtain a noisy edge image;
all edge contours in the edge image are found by seed filling, contour lines that are too short are removed, and the contours are judged for closure;
boundary points of the edge image are selected as initial seed points, and all connected regions are marked in turn with a region growing algorithm;
the initial connected region is coloured black as the first-level connected region, the next-level connected region is coloured white, and this alternation continues until all connected regions have been visited, forming the black-and-white mask image of the object to be measured.
In yet another aspect, the present invention further provides a target extraction method, including the steps of:
step S1, obtaining images of the object to be measured photographed at all angles; and
step S2, extracting a target image at full angle from the images of the object at each angle.
Further, the method for photographing the object to be measured at full angle in step S1 comprises:
arranging a plurality of stereoscopic vision image sensor modules around the object to be measured to photograph it at full angle; and
in step S2, the method for extracting the target image at full angle from the images of the object at each angle comprises:
performing target extraction on the object to be measured from the plurality of images by an image processor module so as to obtain a target image at full angle; wherein
there are six stereoscopic vision image sensor modules, located in the six directions up, down, left, right, front and rear of the object to be measured;
each stereoscopic vision image sensor module comprises two image sensors forming a binocular pair, with a light source arranged between them;
the image processor module is adapted to switch the light sources of the modules on or off in combination so as to capture illumination images of the object in the corresponding directions.
Further, the ways in which the light sources of the stereoscopic vision image sensor modules are switched on or off in combination include:
each light source is lit in turn, and bright field illumination images of the object to be measured in all directions are captured in sequence;
the light sources in the left, right, front and rear directions are lit and those in the up and down directions are turned off, and dark field illumination images in the up and down directions are captured by the upper and lower modules;
the light sources in the up, down, front and rear directions are lit and those in the left and right directions are turned off, and dark field illumination images in the left and right directions are captured by the left and right modules;
the light sources in the up, down, left and right directions are lit and those in the front and rear directions are turned off, and dark field illumination images in the front and rear directions are captured by the front and rear modules;
wherein each illumination image is sent to the image processor module.
Further, target extraction is performed on the object to be measured from the plurality of images by the image processor module to obtain the target image at full angle as follows:
the image processor module obtains a black-and-white mask image from the dark field illumination image and fuses the bright field illumination image captured by the same image sensor with the mask image, obtaining an extracted image of the object that contains both texture and contour information;
the extracted images acquired at all angles together give the target image at full angle.
Further, the method for obtaining the black-and-white mask image from the dark field illumination image comprises:
step S21, preprocessing the dark field illumination image with an edge extraction operator to obtain a noisy edge image;
step S22, finding all edge contours in the edge image by seed filling, removing contour lines that are too short, and judging the contours for closure;
step S23, since the object to be measured lies in the central area of the edge image, selecting boundary points of the edge image as initial seed points and marking all connected regions in turn with a region growing algorithm;
step S24, colouring the initial connected region black as the first-level connected region, colouring the next-level connected region white, and alternating in this way until all connected regions have been visited, forming the black-and-white mask image of the object to be measured.
The beneficial effects of the invention are as follows:
(1) The invention adopts a reasonable spatial layout of stereoscopic vision image sensor modules and light sources, effectively realizing full-angle measurement of small and medium-sized objects; the number of modules and light sources can be increased or decreased according to the geometric characteristics of the object to be measured, so that target extraction can be carried out from whichever directions the object requires.
(2) The brightness of the light sources is controllable; through rapid sequential flashing combined with synchronous capture by the stereoscopic vision image sensors, bright field and dark field illumination are combined organically without significant loss of measurement speed, so that the surface texture and edge contour of the target are extracted simultaneously at full angle.
(3) The target extraction system has no moving mechanism and therefore introduces no additional mechanical error.
(4) Active illumination improves the contrast between target and background, and combined with the proposed processing algorithms it greatly improves the stability of target extraction.
(5) Because fixed-power light sources are used, the algorithm parameters need not be manually adjusted and corrected for each measurement, which makes the system convenient to use.
Drawings
The invention will be further described with reference to the drawings and examples.
FIG. 1 is a schematic diagram of the object extraction system of the present invention;
FIG. 2 is a workflow diagram of the present invention;
FIG. 3 is an example of a bright field illumination image;
FIG. 4 is an example of a dark field illumination image;
FIG. 5 is an initial edge image;
FIG. 6 is a processed edge image;
FIG. 7 is a black and white mask image;
fig. 8 is the final extraction result.
Detailed Description
The invention will now be described in further detail with reference to the accompanying drawings. The drawings are simplified schematic representations which merely illustrate the basic structure of the invention and therefore show only the structures which are relevant to the invention.
The invention provides a full-angle target extraction system and method for small and medium-sized objects: the layout of image sensors and light sources is designed reasonably, bright field and dark field illumination are combined by rapid sequential flash photography, and a target extraction algorithm is proposed to match the image characteristics. The system and method are stable and efficient, extract the surface texture and edge contour information of the measured object simultaneously, and have very wide application in the field of three-dimensional measurement.
Example 1
This embodiment 1 provides a target extraction system comprising: a plurality of stereoscopic vision image sensor modules arranged around the object to be measured to photograph it at full angle and send the images to an image processor module; the image processor module is adapted to perform target extraction on the object to be measured from the images so as to obtain a target image at full angle.
The number and positions of the stereoscopic vision image sensor modules can be changed according to the geometric characteristics of the object to be measured. The rule for adding or removing modules is: if the surface area of the object facing one of the six directions up, down, left, right, front and rear is small, the module in that direction can be removed; if the surface area facing a direction between any two of the six directions is large, a module can be added facing that direction.
As an alternative embodiment of the target extraction system:
In fig. 1, 101, 102, 103, 104, 105 and 106 are the stereoscopic vision image sensor modules in the up, down, left, right, front and rear positions respectively; 401 is the object to be measured; 201, 211, 202, 212, 203, 213, 204, 214, 205, 215, 206 and 216 are the image sensors; 301, 302, 303, 304, 305 and 306 are the light sources.
There are six stereoscopic vision image sensor modules, located in the six directions up, down, left, right, front and rear of the object to be measured. Each module comprises two image sensors forming a binocular pair, with a light source arranged between them. The image processor module is adapted to switch the light sources of the modules on or off in combination so as to capture illumination images of the object in the corresponding directions.
The light source is, for example but not limited to, an LED light source, and its emission angle is equal or close to the field angle of the image sensor, which improves the illumination efficiency.
The LED light sources can be switched on and off programmatically by the image processor module: the light sources are flashed in a controlled sequence of combinations while the image sensor modules synchronously capture bright field and dark field illumination images of the object to be measured.
The light sources are switched on or off in combination as follows: each light source is lit in turn, and bright field illumination images of the object to be measured in the six directions are captured in sequence; the light sources in the left, right, front and rear directions are lit and those in the up and down directions are turned off, and dark field illumination images in the up and down directions are captured by the upper and lower modules; the light sources in the up, down, front and rear directions are lit and those in the left and right directions are turned off, and dark field illumination images in the left and right directions are captured by the left and right modules; the light sources in the up, down, left and right directions are lit and those in the front and rear directions are turned off, and dark field illumination images in the front and rear directions are captured by the front and rear modules. Each illumination image is sent to the image processor module.
Because each stereoscopic vision image sensor module contains two image sensors, every shot of a module yields two images; after the light source combinations above have been run, 12 bright field illumination images and 12 dark field illumination images are obtained.
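By way of illustration, a minimal sketch of how an image processor module might drive this acquisition sequence follows. The helpers set_lights and capture_pair, the direction names, and the reading of "each light source is lit in turn" as lighting only the shooting module's own source for bright field are assumptions of the example, not details fixed by the patent.

```python
# Hypothetical sketch of the flash-and-capture sequence described above.
# set_lights(on=..., off=...) and capture_pair(direction) are assumed
# hardware helpers; capture_pair returns the two images of the binocular
# module facing the given direction.

DIRECTIONS = ["up", "down", "left", "right", "front", "back"]

def acquire_all_images(set_lights, capture_pair):
    """Return 12 bright field and 12 dark field images (2 per module)."""
    bright, dark = {}, {}

    # Bright field: light each module's own source while that module shoots.
    for d in DIRECTIONS:
        set_lights(on={d}, off=set(DIRECTIONS) - {d})
        bright[d] = capture_pair(d)

    # Dark field: darken one opposing pair of directions, light the other
    # four sources, and shoot with the darkened pair's modules.
    for pair in (("up", "down"), ("left", "right"), ("front", "back")):
        set_lights(on=set(DIRECTIONS) - set(pair), off=set(pair))
        for d in pair:
            dark[d] = capture_pair(d)

    return bright, dark
```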
The image processor module is adapted to obtain a black-and-white mask image from the dark field illumination image and to fuse the bright field illumination image captured by the same image sensor with the mask image, obtaining an extracted image of the object to be measured that contains both texture and contour information; the extracted images acquired at all angles together give the target image at full angle.
Specifically, the image processor module obtains the black-and-white mask image from the dark field illumination image as follows: the dark field illumination image is preprocessed with an edge extraction operator to obtain a noisy edge image; all edge contours in the edge image are found by seed filling, contour lines that are too short are removed, and the contours are judged for closure; boundary points of the edge image are selected as initial seed points, and all connected regions are marked in turn with a region growing algorithm; the initial connected region is coloured black as the first-level connected region, the next-level connected region is coloured white, and this alternation continues until all connected regions have been visited, forming the black-and-white mask image of the object to be measured.
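Putting the two stages together, the per-sensor pipeline might be sketched as follows; edge_contours, alternating_mask and extract_target are hypothetical helpers, sketched step by step in embodiment 2 below, and the dictionaries mapping each of the 12 image sensors to their images are likewise assumptions of the example.

```python
# Hypothetical end-to-end pipeline: one bright field and one dark field
# image per image sensor (12 of each) in, one extracted image per sensor out.
def full_angle_extraction(bright_images, dark_images):
    extracted = {}
    for sensor, dark_img in dark_images.items():
        edges = edge_contours(dark_img)              # steps S21-S22
        mask = alternating_mask(edges)               # steps S23-S24
        extracted[sensor] = extract_target(bright_images[sensor], mask)
    return extracted
```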
Example 2
As shown in fig. 2, on the basis of embodiment 1, this embodiment 2 provides a target extraction method, which includes the following steps:
step S1, obtaining images of the object to be measured photographed at all angles; and
step S2, extracting a target image at full angle from the images of the object at each angle.
Specifically, the method for photographing the object to be measured at full angle in step S1 comprises:
arranging a plurality of stereoscopic vision image sensor modules around the object to be measured to photograph it at full angle.
Specifically, in step S2, the method for extracting the target image at full angle from the images of the object at each angle comprises: performing target extraction on the object to be measured from the plurality of images by the image processor module so as to obtain a target image at full angle.
The stereoscopic vision image sensor modules are arranged, for example but not limited to six, in the up, down, left, right, front and rear directions of the object to be measured; their number can be increased or decreased according to the geometric characteristics of the object. The rule is: if the surface area of the object facing one of the six directions is small, the module in that direction can be removed; if the surface area facing a direction between any two of the six directions is large, a module can be added facing that direction.
Each stereoscopic vision image sensor module comprises two image sensors forming a binocular pair, with a light source arranged between them; the image processor module is adapted to switch the light sources of the modules on or off in combination so as to capture illumination images of the object in the corresponding directions.
The light sources of the stereoscopic vision image sensor modules are switched on or off in combination in the following four ways:
mode one: each light source is lit in turn, and bright field illumination images of the object to be measured in the six directions are captured in sequence;
mode two: the light sources in the left, right, front and rear directions are lit and those in the up and down directions are turned off, and dark field illumination images in the up and down directions are captured by the upper and lower modules;
mode three: the light sources in the up, down, front and rear directions are lit and those in the left and right directions are turned off, and dark field illumination images in the left and right directions are captured by the left and right modules;
mode four: the light sources in the up, down, left and right directions are lit and those in the front and rear directions are turned off, and dark field illumination images in the front and rear directions are captured by the front and rear modules; wherein each illumination image is sent to the image processor module.
Target extraction is performed on the object to be measured from the plurality of images by the image processor module to obtain the target image at full angle as follows:
the image processor module obtains a black-and-white mask image from the dark field illumination image and fuses the bright field illumination image captured by the same image sensor with the mask image, obtaining an extracted image of the object that contains both texture and contour information;
the extracted images acquired at all angles together give the target image at full angle.
The method for obtaining the black-and-white mask image from the dark field illumination image comprises the following steps:
step S21, preprocessing the dark field illumination image with an edge extraction operator to obtain a noisy edge image;
step S22, finding all edge contours in the edge image by seed filling, removing contour lines that are too short, and judging the contours for closure;
step S23, since the object to be measured lies in the central area of the edge image, selecting boundary points of the edge image as initial seed points and marking all connected regions in turn with a region growing algorithm;
step S24, colouring the initial connected region black as the first-level connected region, colouring the next-level connected region white, and alternating in this way until all connected regions have been visited, forming the black-and-white mask image of the object to be measured.
Specifically, the Canny operator is used as the edge extraction operator to preprocess the dark field illumination image (fig. 4), obtaining a noisy edge image (fig. 5; black is background, white is edge);
the edge image is then further processed by seed filling: all edge contours in the image are found and their pixel lengths counted, and contour lines that are too short (e.g. shorter than 50 pixels) are removed;
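A minimal sketch of these two steps using OpenCV follows; cv2.findContours stands in here for the seed-filling traversal named in the text, and the Canny thresholds are illustrative assumptions (only the 50-pixel cutoff comes from the example above).

```python
import cv2
import numpy as np

def edge_contours(dark_field_gray, min_len=50):
    """Steps S21-S22: noisy edge image, then drop short contour lines."""
    edges = cv2.Canny(dark_field_gray, 50, 150)   # thresholds are assumptions
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    kept = [c for c in contours if len(c) >= min_len]  # pixel-length cutoff
    cleaned = np.zeros_like(edges)
    cv2.drawContours(cleaned, kept, -1, 255, 1)
    return cleaned
```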
further, it is determined whether the contours are closed contours. Specifically, the statistics of the number of the connected points in the 8 connected areas is sequentially performed on all the pixel points on the outline. If no point with the number of the communication points being 1 exists, judging that the contour is a closed contour; otherwise, a non-closed profile.
Thanks to the dark field illumination, all effective image contours are closed (fig. 6).
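The closedness test transcribes directly into code; a sketch, where the pixel-list representation of a contour is an assumption:

```python
def is_closed(contour_pixels):
    """A contour is closed iff no pixel on it has exactly one 8-connected
    neighbour on the contour (such a pixel would be a loose endpoint)."""
    pts = set(map(tuple, contour_pixels))   # {(row, col), ...}
    for r, c in pts:
        neighbours = sum((r + dr, c + dc) in pts
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0))
        if neighbours == 1:
            return False
    return True
```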
Because the object to be measured lies in the central area of the image, the boundary points of the image are selected as initial seed points and all connected regions are marked in turn with a region growing algorithm;
the initial connected region is coloured black as the first-level connected region; the next-level connected region (the connected region adjacent to the previous level across a contour; when the object has a complex shape there may be several connected regions at the same level) is coloured white; this alternation continues until all connected regions have been visited, forming the black-and-white mask image of the object to be measured (fig. 7);
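A sketch of steps S23-S24 under the stated assumptions (closed contours, object away from the image border); scipy.ndimage.label replaces the explicit region growing, and adjacency across a one-pixel-wide contour line is detected through the 3x3 neighbourhood of each edge pixel:

```python
from collections import deque
import numpy as np
from scipy import ndimage

def alternating_mask(edges):
    """Colour connected regions black/white by level parity (steps S23-S24)."""
    labels, n = ndimage.label(edges == 0)      # connected non-edge regions
    h, w = edges.shape

    # Two regions are consecutive levels if they meet across a contour line,
    # i.e. both appear in the 3x3 neighbourhood of some edge pixel.
    adj = {i: set() for i in range(1, n + 1)}
    for r, c in zip(*np.nonzero(edges)):
        near = {labels[rr, cc]
                for rr in range(max(r - 1, 0), min(r + 2, h))
                for cc in range(max(c - 1, 0), min(c + 2, w))
                if labels[rr, cc]}
        for a in near:
            adj[a] |= near - {a}

    # Breadth-first colouring from the border-connected background region
    # (the corner is assumed background: the object sits in the centre).
    start = labels[0, 0]
    colour = {start: 0}                        # 0 = black, 1 = white
    queue = deque([start])
    while queue:
        a = queue.popleft()
        for b in adj[a]:
            if b not in colour:
                colour[b] = 1 - colour[a]
                queue.append(b)

    mask = np.zeros_like(edges)
    for region, col in colour.items():
        mask[labels == region] = 255 * col
    return mask
```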
the bright field illumination image (as illustrated in fig. 3) photographed by the same image sensor is fused (as used and operation) with the black-and-white mask image obtained after processing the dark field illumination image, so as to obtain an extracted image (as illustrated in fig. 8) of the object to be detected, which contains both texture information and contour information.
And (3) carrying out the fusion operation on the 12 bright field illumination images of all the image sensors, so as to finish the extraction of the object to be detected containing texture and edge information under the full angle and realize the target image under the full angle.
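The AND fusion itself is brief; a sketch with OpenCV, where the masking call keeps the bright field pixels under the white part of the mask and zeroes the background:

```python
import cv2

def extract_target(bright_field, mask):
    """Fuse a bright field image with its black-and-white mask (AND)."""
    return cv2.bitwise_and(bright_field, bright_field, mask=mask)
```

Applied to all 12 bright field images, this completes the full_angle_extraction sketch given in embodiment 1.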
The above preferred embodiments are illustrative; persons skilled in the relevant art can make various changes and modifications on their basis without departing from the technical idea of the invention. The technical scope of the invention is therefore not limited to the above description but must be determined by the scope of the claims.

Claims (3)

1. A target extraction system, comprising:
a plurality of stereoscopic vision image sensor modules arranged around the object to be measured to photograph it at full angle and send the images to an image processor module;
the image processor module being adapted to perform target extraction on the object to be measured from the images so as to obtain a target image at full angle;
the three-dimensional vision image sensor modules are six and are respectively positioned at the upper, lower, left, right, front and back six directions of the object to be detected;
the stereoscopic image sensor module includes: two image sensors forming a binocular are arranged, and a light source is arranged between the two eyes;
the image processor module is suitable for controlling the light source combination in each stereoscopic vision image sensor module to be turned on or turned off so as to shoot illumination images in corresponding directions of the to-be-detected object;
the light source combination in each stereoscopic image sensor module is on or off, i.e
All the light sources are respectively lightened, and bright field illumination images in all directions of the object to be detected are sequentially shot;
the light sources in the left, right, front and back directions are all lighted, the light sources in the upper and lower directions are turned off, and dark field illumination images of the object to be detected in the upper and lower directions are shot through the upper and lower stereoscopic vision image sensor modules;
the light sources in the upper, lower, front and rear directions are all lighted, the light sources in the left and right directions are turned off, and dark field illumination images of the object to be detected in the left and right directions are shot through the left and right stereoscopic vision image sensor modules;
the light sources in the upper, lower, left and right directions are all lighted, the light sources in the front and rear directions are turned off, and dark field illumination images of the object to be detected in the front and rear directions are shot through the front and rear stereoscopic vision image sensor modules; wherein the method comprises the steps of
Transmitting each illumination image to an image processor module;
the image processor module is adapted to obtain a black-and-white mask image from the dark field illumination image and to fuse the bright field illumination image captured by the same image sensor with the mask image, obtaining an extracted image of the object to be measured that contains both texture and contour information;
the extracted images acquired at all angles together give the target image at full angle;
the image processor module obtains the black-and-white mask image from the dark field illumination image as follows:
the dark field illumination image is preprocessed with an edge extraction operator to obtain a noisy edge image;
all edge contours in the edge image are found by seed filling, contour lines that are too short are removed, and the contours are judged for closure;
boundary points of the edge image are selected as initial seed points, and all connected regions are marked in turn with a region growing algorithm;
the initial connected region is coloured black as the first-level connected region, the next-level connected region is coloured white, and this alternation continues until all connected regions have been visited, forming the black-and-white mask image of the object to be measured; and
the closure judgment counts, for every pixel on a contour, the number of its 8-connected neighbours on the contour; if no pixel has exactly one such neighbour, the contour is judged closed, otherwise it is not closed.
2. A target extraction method applied to the target extraction system of claim 1, comprising the steps of:
step S1, obtaining images of the object to be measured photographed at all angles; and
step S2, extracting a target image at full angle from the images of the object at each angle.
3. The target extraction method according to claim 2, wherein
the method for photographing the object to be measured at full angle in step S1 comprises:
arranging a plurality of stereoscopic vision image sensor modules around the object to be measured to photograph it at full angle; and
in step S2, the method for extracting the target image at full angle from the images of the object at each angle comprises:
performing target extraction on the object to be measured from the plurality of images by an image processor module so as to obtain a target image at full angle; wherein
there are six stereoscopic vision image sensor modules, located in the six directions up, down, left, right, front and rear of the object to be measured;
each stereoscopic vision image sensor module comprises two image sensors forming a binocular pair, with a light source arranged between them;
the image processor module is adapted to switch the light sources of the modules on or off in combination so as to capture illumination images of the object in the corresponding directions.
CN201611225981.5A 2016-12-27 2016-12-27 Target extraction system and method for medium and small objects under full angle Active CN106530315B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611225981.5A CN106530315B (en) 2016-12-27 2016-12-27 Target extraction system and method for medium and small objects under full angle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611225981.5A CN106530315B (en) 2016-12-27 2016-12-27 Target extraction system and method for medium and small objects under full angle

Publications (2)

Publication Number Publication Date
CN106530315A CN106530315A (en) 2017-03-22
CN106530315B 2024-02-27

Family

ID=58338770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611225981.5A Active CN106530315B (en) 2016-12-27 2016-12-27 Target extraction system and method for medium and small objects under full angle

Country Status (1)

Country Link
CN (1) CN106530315B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10841561B2 (en) * 2017-03-24 2020-11-17 Test Research, Inc. Apparatus and method for three-dimensional inspection
WO2020051747A1 (en) * 2018-09-10 2020-03-19 深圳配天智能技术研究院有限公司 Method of acquiring contour of object, image processing method and computer storage medium
CN109541186B (en) * 2018-11-29 2021-11-09 烟台大学 Coarse aggregate compactness calculation method based on shape parameters
CN111272774A (en) * 2020-01-21 2020-06-12 宁波舜宇仪器有限公司 Detection module and detection system for optical filter defect detection

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1375741A (en) * 2002-04-30 2002-10-23 北京理工大学 Complete object surface-restoring 3D device based on multiple stereo digital visual heads
CN101655347A (en) * 2009-08-20 2010-02-24 浙江工业大学 Driving three-dimensional omni-directional vision sensor based on laser diode light source
CN102221331A (en) * 2011-04-11 2011-10-19 浙江大学 Measuring method based on asymmetric binocular stereovision technology
CN206431690U (en) * 2016-12-27 2017-08-22 浙江大学常州工业技术研究院 Objective extraction system under middle-size and small-size object full angle

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006011803A2 (en) * 2004-07-30 2006-02-02 Eagle Vision Systems B.V. Apparatus and method for checking of containers
US20150304629A1 (en) * 2014-04-21 2015-10-22 Xiuchuan Zhang System and method for stereophotogrammetry
US10145678B2 (en) * 2015-04-20 2018-12-04 Samsung Electronics Co., Ltd. CMOS image sensor for depth measurement using triangulation with point scan


Also Published As

Publication number Publication date
CN106530315A (en) 2017-03-22

Similar Documents

Publication Publication Date Title
US9392262B2 (en) System and method for 3D reconstruction using multiple multi-channel cameras
US11525906B2 (en) Systems and methods for augmentation of sensor systems and imaging systems with polarization
US10914576B2 (en) Handheld large-scale three-dimensional measurement scanner system simultaneously having photogrammetric and three-dimensional scanning functions
CN107945268B (en) A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light
CN106530315B (en) Target extraction system and method for medium and small objects under full angle
CN111009007B (en) Finger multi-feature comprehensive three-dimensional reconstruction method
Biskup et al. A stereo imaging system for measuring structural parameters of plant canopies
CN105279372B (en) A kind of method and apparatus of determining depth of building
CN105184765B (en) Check equipment, inspection method and program
US10916025B2 (en) Systems and methods for forming models of three-dimensional objects
CN103993548A (en) Multi-camera stereoscopic shooting based pavement damage crack detection system and method
EP3382645A2 (en) Method for generation of a 3d model based on structure from motion and photometric stereo of 2d sparse images
CN102855626A (en) Methods and devices for light source direction calibration and human information three-dimensional collection
CN106643555B (en) Connector recognition methods based on structured light three-dimensional measurement system
CN106996748A (en) Wheel diameter measuring method based on binocular vision
CN108230290A (en) Live pig body ruler detection method based on stereoscopic vision
CN112070889B (en) Three-dimensional reconstruction method, device and system, electronic equipment and storage medium
CN106524909A (en) Three-dimensional image acquisition method and apparatus
Kozak et al. Ranger: A ground-facing camera-based localization system for ground vehicles
US20130287293A1 (en) Active Lighting For Stereo Reconstruction Of Edges
CN114577805A (en) MiniLED backlight panel defect detection method and device
US9204130B2 (en) Method and system for creating a three dimensional representation of an object
CN115100104A (en) Defect detection method, device and equipment for glass ink area and readable storage medium
CN107044830B (en) Distributed multi-view stereoscopic vision system and target extraction method
CN106770322A (en) Calibration point depth detection method and temperature controller appearance detecting method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant