WO2021258273A1 - Processing method, device, equipment and storage medium based on three-dimensional imaging - Google Patents

Processing method, device, equipment and storage medium based on three-dimensional imaging

Info

Publication number
WO2021258273A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
dimensional
area
pixel
pixel value
Prior art date
Application number
PCT/CN2020/097573
Other languages
English (en)
French (fr)
Inventor
吕键
陈光磊
张飞豹
李明明
Original Assignee
广东省航空航天装备技术研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东省航空航天装备技术研究所
Priority to PCT/CN2020/097573
Publication of WO2021258273A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23QDETAILS, COMPONENTS, OR ACCESSORIES FOR MACHINE TOOLS, e.g. ARRANGEMENTS FOR COPYING OR CONTROLLING; MACHINE TOOLS IN GENERAL CHARACTERISED BY THE CONSTRUCTION OF PARTICULAR DETAILS OR COMPONENTS; COMBINATIONS OR ASSOCIATIONS OF METAL-WORKING MACHINES, NOT DIRECTED TO A PARTICULAR RESULT
    • B23Q17/00Arrangements for observing, indicating or measuring on machine tools
    • B23Q17/24Arrangements for observing, indicating or measuring on machine tools using optics or electromagnetic waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • The invention relates to the field of numerical control machining, and in particular to a processing method, device, equipment, and storage medium based on three-dimensional imaging.
  • Non-contact 3D measurement technology, represented by structured-light 3D imaging and confocal 3D imaging, offers fast scanning, non-contact operation, and dense point clouds, and has been applied in many fields.
  • Workpiece inspection is often performed by projecting structured light onto the workpiece and photographing it.
  • The collected images are then analyzed and processed to reconstruct a three-dimensional model of the workpiece surface.
  • As 3D scanning and inspection of target workpieces via 3D imaging becomes increasingly popular, how to obtain a clearer and more complete 3D model of the workpiece, so as to machine it accurately, has become one of the industry's main concerns.
  • A processing method, device, equipment, and storage medium based on three-dimensional imaging are provided.
  • A processing method based on three-dimensional imaging includes the following steps:
  • acquiring a first image and a second image collected within a preset time, where the first image and the second image are images collected after projection light carrying two-dimensional pattern information is projected onto the collection area;
  • the first image and the second image both include the two-dimensional pattern information; the area where the maxima of the pixel values of the two-dimensional pattern information in the first image are located is the first area, the area where the maxima of the pixel values of the two-dimensional pattern information in the second image are located is the second area, and the first area and the second area do not overlap with each other;
  • determining the first characteristic image according to the difference between the pixel value of each pixel of the first image and the pixel value of the corresponding pixel of the second image, and determining the second characteristic image according to the difference between the pixel value of each pixel of the second image and the pixel value of the corresponding pixel of the first image;
  • determining the processing path of the processing tool head according to the coordinate information of the collection area, the coordinate information of the processing tool head, and the allowance to be processed.
  • A processing device based on three-dimensional imaging includes:
  • an acquisition module for acquiring a first image and a second image collected within a preset time, where the first image and the second image are images collected after projection light carrying two-dimensional pattern information is projected onto the collection area;
  • the first image and the second image both include the two-dimensional pattern information; the area where the maxima of the pixel values of the two-dimensional pattern information in the first image are located is the first area, the area where the maxima of the pixel values of the two-dimensional pattern information in the second image are located is the second area, and the first area and the second area do not overlap each other; the acquisition module also acquires the coordinate information of the collection area and the coordinate information of the processing tool head;
  • a determining module configured to determine the first characteristic image according to the difference between the pixel value of each pixel of the first image and the pixel value of the corresponding pixel of the second image, and to determine the second characteristic image according to the difference between the pixel value of each pixel of the second image and the pixel value of the corresponding pixel of the first image;
  • the three-dimensional image of the collection area is determined according to the first characteristic image and the second characteristic image; and
  • a processing module used to obtain the allowance to be processed in the collection area according to the three-dimensional image and the preset three-dimensional information of the collection area, and to determine the processing path of the processing tool head according to the coordinate information of the collection area, the coordinate information of the processing tool head, and the allowance to be processed.
  • Processing equipment based on three-dimensional imaging includes:
  • a projector projecting projection light carrying two-dimensional pattern information onto the workpiece;
  • both the first image and the second image include the two-dimensional pattern information; the area where the maxima of the pixel values of the two-dimensional pattern information in the first image are located is the first area, the area where the maxima of the pixel values of the two-dimensional pattern information in the second image are located is the second area, and the first area and the second area do not overlap with each other;
  • a processor configured to execute a computer program on the memory to: determine the first characteristic image according to the difference between the pixel value of each pixel of the first image and the pixel value of the corresponding pixel of the second image, and determine the second characteristic image according to the difference between the pixel value of each pixel of the second image and the pixel value of the corresponding pixel of the first image; determine the three-dimensional image of the collection area according to the first characteristic image and the second characteristic image; obtain the allowance to be processed in the collection area according to the three-dimensional image and the preset three-dimensional information of the collection area; and acquire the coordinate information of the collection area and the coordinate information of the machining tool head, and determine the processing path of the machining tool head according to the coordinate information of the collection area, the coordinate information of the machining tool head, and the allowance to be processed; and
  • a machining mechanism that includes a processing tool head and can drive the processing tool head to move along the processing path so as to remove the machining allowance from the collection area.
  • FIG. 1 is a method flowchart of a processing method based on three-dimensional imaging provided by an embodiment of the application;
  • FIG. 2 is a method flowchart of a processing method based on three-dimensional imaging provided by another embodiment of the application;
  • FIG. 3 is a schematic diagram of the projection light provided by an embodiment of the application when projecting onto a workpiece
  • FIG. 4 is a schematic diagram of another projection light corresponding to the same collection area in the embodiment of FIG. 3 when projecting onto the workpiece;
  • FIG. 5 is a schematic diagram of the amplitude intensity of a local area in the first image corresponding to FIG. 3;
  • FIG. 6 is a schematic diagram of the amplitude intensity of a local area in the second image corresponding to FIG. 4;
  • FIG. 7 is a schematic diagram of a first characteristic image formed by subtracting FIG. 6 from FIG. 5 and retaining parts greater than 0;
  • Fig. 8 is a schematic diagram of a second characteristic image formed by subtracting Fig. 5 from Fig. 6 and retaining a part greater than 0;
  • Fig. 9 is a three-dimensional image of the workpiece formed by adding Fig. 7 and Fig. 8 of the same frame;
  • FIG. 10 is a method flowchart of a processing method based on three-dimensional imaging under continuous scanning according to an embodiment of the application
  • FIG. 11 is a schematic diagram of a processing device based on three-dimensional imaging provided by an embodiment of the application.
  • FIG. 12 is an internal structure diagram of a processing device based on three-dimensional imaging provided by an embodiment of the application.
  • FIG. 13 is a schematic structural diagram of a processing device based on three-dimensional imaging provided by an embodiment of the application.
  • Some embodiments of the present application provide a processing method based on three-dimensional imaging, which can achieve accurate machining of a workpiece by obtaining a clearer and more complete three-dimensional model.
  • Some embodiments of the present application provide a processing method based on three-dimensional imaging, which can obtain at least two contrast images of the same acquisition area within a very short preset time, where each contrast image includes two-dimensional pattern information.
  • The characteristic images obtained after the highly reflective signal is eliminated are processed to reconstruct a three-dimensional image of the corresponding acquisition area.
  • Strictly speaking, the first image and the second image corresponding to the same acquisition area will have a slightly non-overlapping portion of the acquisition area, but by controlling the preset time, the acquisition of the first image and the second image can be completed in a very short time, thereby reducing the non-overlapping portion, so that the acquisition areas corresponding to the first image and the second image are almost the same and the deviation this difference causes in the reconstruction of the three-dimensional features can be ignored.
  • Since the acquisition areas corresponding to the first image and the second image are almost the same, the acquisition areas corresponding to the two can also be said to be the same.
  • the processing method based on three-dimensional imaging includes:
  • Step S110: Acquire a first image and a second image collected within a preset time, where the first image and the second image are images collected after projection light carrying two-dimensional pattern information is projected onto the collection area; both the first image and the second image include the two-dimensional pattern information.
  • The area where the maxima of the pixel values of the two-dimensional pattern information in the first image are located is the first area, and the area where the maxima of the pixel values of the two-dimensional pattern information in the second image are located is the second area; the first area and the second area do not overlap each other.
  • Step S120: Determine the first characteristic image according to the difference between the pixel value of each pixel of the first image and the pixel value of the corresponding pixel of the second image, and determine the second characteristic image according to the difference between the pixel value of each pixel of the second image and the pixel value of the corresponding pixel of the first image.
  • Step S130: Determine a three-dimensional image of the collection area according to the first characteristic image and the second characteristic image.
  • Step S140: Obtain the allowance to be processed in the collection area according to the three-dimensional image of the collection area and the preset three-dimensional information.
  • Step S150: Obtain the coordinate information of the collection area and the coordinate information of the machining tool head.
  • Step S160: Determine the processing path of the processing tool head according to the coordinate information of the collection area, the coordinate information of the processing tool head, and the allowance to be processed.
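Steps S110 to S160 can be sketched in code. The following Python outline is an illustrative assumption, not the patent's implementation: the function names, the use of NumPy arrays as images, and the simple clipped-difference reconstruction and allowance heuristic are all hypothetical.

```python
import numpy as np

def reconstruct_frame(img1, img2, threshold=0.0):
    """Steps S120/S130 sketch: difference the two contrast images in both
    directions, keep only positive parts at or above the threshold, and
    merge the two characteristic images into one image of the frame."""
    feat1 = np.clip(img1 - img2, 0, None)   # first characteristic image
    feat2 = np.clip(img2 - img1, 0, None)   # second characteristic image
    feat1[feat1 < threshold] = 0.0
    feat2[feat2 < threshold] = 0.0
    return feat1 + feat2                    # combined image of the collection area

def machining_allowance(measured, preset):
    """Step S140 sketch: allowance = measured surface minus preset target."""
    return np.clip(measured - preset, 0, None)

# Toy 4x4 frame with complementary stripe illumination.
img1 = np.array([[10.0, 0.0, 10.0, 0.0]] * 4)
img2 = np.array([[0.0, 10.0, 0.0, 10.0]] * 4)
surface = reconstruct_frame(img1, img2)
allowance = machining_allowance(surface, np.full((4, 4), 10.0))
```

In this toy frame the two stripe images merge into a uniform surface, and the allowance against a matching preset surface is zero everywhere.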
  • The images obtained of the workpiece within a preset period of time can be regarded as one frame; each frame includes the first image and the second image, and the first image and the second image of each frame correspond to the same collection area on the surface of the workpiece.
  • the first image and the second image may also be collectively referred to as a contrast image.
  • the first image and the second image may be obtained separately in a short interval time, so as to avoid mutual interference during imaging.
  • The interval between acquiring the first image and the second image of the same frame should not be too long; otherwise it cannot be ensured that the first image and the second image of the same frame are imaging of the same area of the target workpiece.
  • the size of each contrast image should be the same or tend to be the same.
  • the coordinates of the corresponding pixels in each contrast image of the frame are the same.
  • The two-dimensional pattern information in an image includes the distribution information and pixel value information of the pixels that make up the two-dimensional pattern. That is, in the first image and the second image obtained after ignoring the highly reflective signal and the three-dimensional characteristic signal of the workpiece surface, the pixel values of the pixels carrying the two-dimensional pattern information can be called the pixel values of the two-dimensional pattern information.
  • The area where a maximum of the pixel values of the two-dimensional pattern information is located can be understood as follows: along at least one direction of the image plane, if the pixel value of a pixel of the two-dimensional pattern is greater than the pixel values of its two adjacent pixels in that direction, the area where that pixel is located can be referred to as the area where a maximum of the pixel values of the two-dimensional pattern information is located.
  • For example, if the two-dimensional pattern in the first image is composed of a plurality of circular light spots with a Gaussian distribution, the pixel value at the center of each circular light spot is larger than in the other areas of the spot in every direction. The area where that center pixel is located is therefore the area where the maximum of the spot's pixel values is located, and the areas where the maxima of the pixel values of all the light spots are located together constitute the first area in the first image.
  • the second area in the second image is similar, and will not be repeated here.
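As a sketch of the "maximum value area" definition above, the snippet below marks pixels that are strict maxima along one image direction. It is an illustrative assumption: a real spot detector would check several directions and group pixels into connected regions, and the function name is hypothetical.

```python
import numpy as np

def maxima_region_mask(img):
    """Mark pixels brighter than both of their horizontal neighbours,
    a 1-D reading of the 'maximum value area' definition."""
    mask = np.zeros_like(img, dtype=bool)
    # Compare each interior pixel with its left and right neighbours.
    mask[:, 1:-1] = (img[:, 1:-1] > img[:, :-2]) & (img[:, 1:-1] > img[:, 2:])
    return mask

# A single row with one Gaussian-like spot peaking at column 2:
row = np.array([[1.0, 4.0, 9.0, 4.0, 1.0]])
print(maxima_region_mask(row))  # only the centre column is marked
```

Applying this to every spot of a projected pattern would yield the union of maximum value areas that constitutes the first (or second) area.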
  • the projection light with specific two-dimensional pattern information can be projected on the workpiece, and the projection light reflected from the surface of the workpiece can be imaged to obtain the first image and the second image.
  • the first image and the second image of the same frame are controlled to be acquired within a preset time, so that the first image and the second image that can be regarded as the same frame are both imaging of the same area of the workpiece.
  • The projection light projected onto the workpiece may include at least one of a stripe spot, a circular spot, and an irregular spot.
  • Correspondingly, the projection pattern of the projection light includes at least one of the corresponding stripe-spot information, circular-spot information, and irregular-spot information.
  • The first image and the second image both include the reflected two-dimensional pattern information, and because the first area and the second area in the first image and the second image of the same frame are similar but do not overlap, the areas of the workpiece surface corresponding to the maxima of the pixel values of the two-dimensional pattern information differ between the two images, so that the two-dimensional pattern information obtained from the two contrast images of the same frame contains a large amount of feature information about the workpiece surface.
  • Features such as recesses and protrusions of the workpiece change the amplitude and distribution of the projection light irradiated onto and reflected from them, producing corresponding changes in the amplitude and distribution of the two-dimensional pattern information in the contrast images.
  • In step S120, the pixel values of corresponding pixels in the first image and the second image of the same frame are differenced to obtain the first characteristic image and the second characteristic image of the same frame. Since the first image and the second image of the same frame are acquired within the preset time, the acquisition areas of the workpiece surface corresponding to the two contrast images almost overlap. Therefore, when there is a highly reflective part in the acquisition area, not only do the first image and the second image each contain a corresponding highlight area (highly reflective signal), but the positions of the highly reflective signals produced in the two contrast images almost coincide.
  • The first characteristic image retains the two-dimensional pattern carrying the three-dimensional feature information of the workpiece in the first image, and eliminates the high-amplitude signal caused by the high-reflection phenomenon on the surface of the workpiece in the first image.
  • The second characteristic image retains the two-dimensional pattern carrying the three-dimensional feature information of the workpiece in the second image, and eliminates the high-amplitude signal caused by the high-reflection phenomenon on the surface of the workpiece in the second image. That is, the first and second characteristic images of the same frame respectively carry the three-dimensional features of different sub-regions of the workpiece within the same acquisition area.
  • step S130 the first feature image and the second feature image of the same frame are used to obtain a three-dimensional image of the collection area corresponding to the workpiece in the frame, and the three-dimensional image can reflect the three-dimensional model of the collection area.
  • In this way, the highly reflective signal on the surface of the workpiece can be eliminated, preventing it from interfering with the reconstruction of the 3D point cloud.
  • At the same time, the three-dimensional feature information within the highly reflective signal area can be retained in the final 3D image, so that a clear and complete three-dimensional image of the acquisition area can be reconstructed through the above steps, providing accurate comparison information for the subsequent machining process.
  • The projection light can be projected twice onto the target workpiece, with each projection performed separately; that is, the second projection light is projected after the first, and each projection corresponds to the capture of one contrast image.
  • The shape, size, and intensity of the two-dimensional patterns presented by each projection light irradiated onto the workpiece surface are the same or approximately the same, and the maximum-light-intensity distribution areas of the two-dimensional patterns formed on the workpiece by the projection lights corresponding to the same frame do not coincide with each other.
  • Each contrast image in the same frame can be regarded as an imaging of the same area of the workpiece.
  • Before step S110, the method further includes the step of projecting projection light with a two-dimensional pattern onto the workpiece twice within a preset time, where the maximum-light-intensity distribution areas of the two-dimensional patterns formed on the workpiece by the two projection lights do not overlap each other.
  • the two-dimensional pattern information of the projection light includes, but is not limited to, rectangular light spots, circular light spots, elliptical light spots, irregular light spots, etc., and the intensity of each light spot may be the same or different.
  • The projection light may be monochromatic light in the visible or infrared range. The first image and the second image of the same frame are obtained through the two projection lights projected within the preset time.
  • The first image and the second image of the same frame are obtained within the preset time.
  • Both the first image and the second image include two-dimensional pattern information, and in the first image and the second image of the same frame, the first area in the first image and the second area in the second image do not overlap with each other.
  • the first image and the second image of the same frame should be acquired within a preset time to ensure that the first image and the second image of the same frame are imaging of the same area of the target workpiece.
  • the aforementioned preset time can be understood as the interval time from the exposure of the first contrast image to the end of the exposure of the last contrast image.
  • the preset time satisfies 0 ⁇ t ⁇ 200ms.
  • the interval time of each projection light projected corresponding to the same frame of imaging is less than or equal to 50ms, and specifically may be 1ms, 5ms, 15ms, 25ms, 30ms, 40ms, or 50ms.
  • the projection interval of the projection light corresponding to the same frame of imaging can be controlled in a very short time, so as to ensure that each contrast image in the same frame is an imaging of the same area of the workpiece.
  • the exposure time of each contrast image in the same frame can also be controlled to ensure that the acquisition time of the contrast image will not be too long.
  • the exposure time of the contrast image is controlled within 100ms, and specifically may be 30ms, 50ms, 65ms, 80ms, or 100ms. In this way, the exposure time when obtaining the contrast image can be controlled within an extremely short range, avoiding the exposure time from being too long, thereby further ensuring that each contrast image in the same frame is an imaging of the same region of the workpiece.
  • Before step S110, the method further includes the step of successively projecting projection light with a two-dimensional pattern onto the workpiece twice within a preset time, where the maximum-light-intensity distribution areas of the two-dimensional patterns formed on the workpiece by the two projection lights do not overlap each other, the duration of each projection is within 100ms, and the interval time is within 50ms.
  • The first image and the second image of the same frame are obtained respectively through the two projection lights projected within the preset time, and both include two-dimensional pattern information. In the first image and the second image of the same frame, the first area in the first image and the second area in the second image do not overlap. The first image and the second image of the same frame are acquired within the preset time: the interval between acquiring them satisfies 0 < t1 ≤ 50ms, the exposure time of each satisfies 0 < t2 ≤ 100ms, and the preset time satisfies 0 < t ≤ 200ms. In this way, the total time for acquiring the first image and the second image of the same frame is kept very short, ensuring that each contrast image in the same frame is an imaging of the same region of the workpiece. Further, in some embodiments, t1 ≤ 1ms.
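The timing constraints above (per-exposure time t2, inter-exposure interval t1, and total preset time t) can be sketched as a simple validity check. The function and parameter names below are assumptions for illustration, not part of the patent:

```python
def frame_timing_ok(exposures_ms, intervals_ms,
                    max_exposure=100.0, max_interval=50.0, max_total=200.0):
    """Check that each exposure satisfies 0 < t2 <= 100 ms, each interval
    satisfies 0 < t1 <= 50 ms, and the whole frame fits in 0 < t <= 200 ms."""
    total = sum(exposures_ms) + sum(intervals_ms)
    return (all(0 < e <= max_exposure for e in exposures_ms)
            and all(0 < i <= max_interval for i in intervals_ms)
            and 0 < total <= max_total)

print(frame_timing_ok([80, 80], [30]))    # 80 + 30 + 80 = 190 ms, within bounds
print(frame_timing_ok([100, 100], [50]))  # total 250 ms exceeds the 200 ms limit
```

Such a check would reject capture schedules in which the two contrast images could no longer be treated as imaging the same region of the workpiece.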
  • the shape, size, and intensity of the two-dimensional patterns projected to the workpiece are the same or substantially the same.
  • The difference is that, for the projection lights corresponding to the same frame but to different contrast images, the areas illuminated on the workpiece surface by each projection do not overlap.
  • For example, they may be complementary (refer to Figures 3 and 4). Therefore, in the contrast images formed by receiving these projection lights, the areas where the maxima of the pixel values of the two-dimensional pattern information are located also do not overlap with each other.
  • a corresponding complementary relationship may be present.
  • For example, the contrast images of the same frame are the same size, the two-dimensional pattern of the projection light is composed of a plurality of parallel, identically sized elongated light spots, and the areas illuminated on the workpiece surface by the projection lights corresponding to the same frame are complementary (refer to Figures 3 and 4); that is, the illuminated area on the workpiece of the projection light corresponding to the first image coincides with the non-illuminated area on the workpiece of the projection light corresponding to the second image, and the illuminated area on the workpiece of the projection light corresponding to the second image coincides with the non-illuminated area on the workpiece of the projection light corresponding to the first image.
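A minimal model of such complementary stripe patterns, loosely following Figures 3 and 4, can be generated as follows; the function name, the binary-mask representation, and the stripe period are assumptions for illustration:

```python
import numpy as np

def complementary_stripes(height, width, period=8):
    """Generate two binary stripe masks whose lit areas are exact
    complements, so together they cover the whole collection area
    (stripe width = period / 2)."""
    cols = np.arange(width)
    first = (cols % period) < (period // 2)  # lit columns of pattern 1
    mask1 = np.tile(first, (height, 1))
    mask2 = ~mask1                           # pattern 2 lights the remainder
    return mask1, mask2

m1, m2 = complementary_stripes(4, 16)
assert not (m1 & m2).any()  # the lit areas never overlap
assert (m1 | m2).all()      # together they cover the full collection area
```

The two assertions mirror the two properties the passage requires: non-overlapping illuminated areas, and complete coverage of the collection area across the frame.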
  • the patterns of each projection light corresponding to the same frame can completely cover the collection area of the workpiece surface corresponding to the frame (refer to FIG. 3 and FIG. 4). And because the three-dimensional features (protrusions, depressions, etc.) in the collection area will cause the reflected projection light to change after being irradiated by the projection light (such as causing the intensity and distribution of the reflected projection light pattern to change non-uniformly), so The three-dimensional information in the illuminated area can be reflected in the reflected projection light, and then finally reflected in the corresponding contrast image.
  • After receiving each projection light reflected by the workpiece surface, the contrast images also show the two-dimensional pattern of the reflected projection light, and the three-dimensional feature information of the collection area is contained in that reflected light. Among the contrast images of the same frame, the pixel positions of the two-dimensional patterns in different contrast images likewise show a complementary relationship. Therefore, when the areas illuminated on the workpiece surface by the projection lights of the same frame are complementary, all of the three-dimensional information in the collection area is reflected across the contrast images: part of it in the first image, and the remaining part in the second image.
  • The maximum value area of the two-dimensional pattern information in the first image (that is, the area where the maxima of the pixel values of the two-dimensional pattern information are located) and the maximum value area of the two-dimensional pattern information in the second image do not overlap each other, which helps the three-dimensional information of the workpiece surface be reflected more completely in each contrast image, thereby improving the completeness of the three-dimensional imaging.
  • Step S120 may specifically be step S121: subtract the pixel value of the corresponding pixel of the second image from the pixel value of each pixel of the first image, and form the first characteristic image from the parts where the pixel value difference is greater than or equal to the first threshold; subtract the pixel value of the corresponding pixel of the first image from the pixel value of each pixel of the second image, and form the second characteristic image from the parts where the pixel value difference is greater than or equal to the second threshold.
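Step S121 can be sketched as a thresholded difference. This Python version is an illustrative assumption (array-valued images, hypothetical names), not the patent's implementation:

```python
import numpy as np

def characteristic_images(img1, img2, thr1=0.0, thr2=0.0):
    """Subtract corresponding pixels in each direction and keep only the
    parts where the difference is at or above the respective threshold."""
    d1 = img1.astype(float) - img2.astype(float)
    feat1 = np.where(d1 >= thr1, d1, 0.0)    # first characteristic image
    feat2 = np.where(-d1 >= thr2, -d1, 0.0)  # second characteristic image
    return feat1, feat2

# A highlight of equal intensity (100) at the same pixel of both images
# cancels out, while the complementary stripe signal (20) survives.
img1 = np.array([[100.0, 20.0, 0.0]])
img2 = np.array([[100.0, 0.0, 20.0]])
f1, f2 = characteristic_images(img1, img2, thr1=5.0, thr2=5.0)
assert f1.tolist() == [[0.0, 20.0, 0.0]]
assert f2.tolist() == [[0.0, 0.0, 20.0]]
```

The toy example shows the key effect claimed in the passage: the nearly coincident highly reflective signal cancels in the difference, while each characteristic image retains its own pattern stripes.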
  • the pixels with the same coordinates in the first image and the second image can be referred to as corresponding pixels of the two images.
  • The sizes of the first threshold and the second threshold can be controlled to completely eliminate the signal related to the second area from the first characteristic image, and to eliminate the signal related to the first area from the second characteristic image.
  • Meanwhile, the features of the maximum value area of the second image can still be retained in the second characteristic image, while any highly reflective signal that may originally have been present is eliminated.
  • The maximum value areas in the first image and the second image corresponding to the three-dimensional features of the workpiece are deformed, so when the maximum value areas are retained, the three-dimensional feature information is also retained in the first and second characteristic images.
  • The first and second characteristic images obtained in step S120 effectively eliminate the highly reflective signals originally present in the first and second images due to the high-reflection phenomenon on the workpiece surface, thereby suppressing environmental noise and preventing the highly reflective signal from overwhelming the effective signal (the deformed two-dimensional pattern information), which effectively improves the completeness of the three-dimensional imaging.
  • Eliminating the highly reflective signal is also beneficial for improving the accuracy and completeness of the three-dimensional image obtained in step S130.
  • the first threshold and the second threshold may be collectively referred to as a preset value.
  • The preset value is greater than or equal to 0 and less than the maximum of the pixel values of the pixels of the two-dimensional pattern information in the first image or the second image.
  • the first threshold is greater than or equal to 0 and less than the maximum value of the pixel values of each pixel of the two-dimensional pattern information in the first image.
  • the second threshold is greater than or equal to 0 and less than the maximum value of the pixel values of each pixel of the two-dimensional pattern information in the second image.
  • by adjusting the preset value, the amount of three-dimensional feature information retained in the first feature image and the second feature image after the difference processing can be selected.
  • in an ideal situation the preset value can be set to 0: after the two contrast images are subtracted, the high-reflection signal is completely removed, and the projected light signal carrying three-dimensional characteristic information reflected by the workpiece is well preserved.
  • in practice, however, subtracting the two contrast images may not eliminate the high-reflection signal well, so that the first feature image and the second feature image still retain a low-intensity high-reflection residue.
  • in that case the preset value can be increased, so that the first feature image and the second feature image only retain the parts larger than the preset value, ensuring that the high-reflection signal is completely eliminated.
  • the pattern of the reflected light will be deformed, and the maximum signal in the pattern will shift toward the minimum value area of the original pattern; that is, the three-dimensional feature information is contained in both the original maximum value area and the original minimum value area of the pattern.
  • since the high-reflection signals in the first image and the second image may leave low-intensity residues after subtraction, these residues strongly interfere with the three-dimensional feature information distributed in the minimum value area.
  • the preset value in step S121 is therefore sized so as to retain this part of the three-dimensional feature information.
  • the preset value is greater than or equal to one-tenth of the maximum pixel value of the two-dimensional pattern information in the first image or the second image, which helps eliminate the residual high-reflection signal; and the preset value is less than or equal to one-fifth of that maximum pixel value, so as to avoid eliminating the low-intensity three-dimensional feature information located in the minimum value area.
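The difference-and-threshold step described above can be sketched with NumPy as follows; the function name, the array contents, and the pattern maximum of 200 are illustrative assumptions, not values from the patent.

```python
import numpy as np

def feature_images(first_image, second_image, threshold):
    """Form the two characteristic images by mutual subtraction, keeping
    only the parts where the pixel value difference reaches the preset
    value (threshold); everything else, including residual high-reflection
    signal common to both contrast images, is suppressed to zero."""
    a = first_image.astype(np.int32)
    b = second_image.astype(np.int32)
    first_feature = np.where(a - b >= threshold, a - b, 0)
    second_feature = np.where(b - a >= threshold, b - a, 0)
    return first_feature, second_feature

# Heuristic from the text: preset value between one-tenth and one-fifth
# of the maximum pixel value of the two-dimensional pattern information.
pattern_max = 200
threshold = pattern_max // 10  # 20, the lower end of the recommended range

first = np.array([[120, 10], [15, 12]])   # bright stripe plus faint glare
second = np.array([[14, 110], [13, 11]])  # the complementary stripe
f1, f2 = feature_images(first, second, threshold)
print(f1)  # only the first projection's stripe (value 106) survives
print(f2)  # only the second projection's stripe (value 100) survives
```

Setting `threshold = 0` corresponds to the ideal case discussed above; raising it toward `pattern_max // 5` trades a little low-intensity feature signal for a more complete removal of the high-reflection residue.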
  • the step of projecting projection light with two-dimensional pattern information to the collection area of the workpiece is specifically: projecting the first projection light 1110 and the second projection light 1120 to the collection area of the workpiece within a preset time of 60 ms; the duration of each of the first projection light 1110 and the second projection light 1120 is within 30 ms, and the interval between them is within 1 ms.
  • the first projection light 1110 and the second projection light 1120 have the same two-dimensional projection pattern, which is composed of a plurality of mutually parallel rectangular light spots; the shape, size, spacing and intensity of each rectangular light spot in the first projection light 1110 are the same as those of the rectangular light spots in the second projection light 1120, and the intensity within each rectangular light spot is constant.
  • the areas illuminated by the two projection lights on the surface of the workpiece are complementary. Since the preset time is extremely short, the two projection lights can be regarded as illuminating the same collection area 10 of the workpiece; they differ only in that their illuminated areas within the collection area 10 (their spot areas on the workpiece) are complementary. The illuminated areas of the first projection light 1110 and the second projection light 1120 therefore completely cover the collection area 10: half of the collection area 10 is illuminated by the first projection light 1110 and the other half by the second projection light 1120, so that the three-dimensional feature information in the collection area 10 can be fully recorded in the reflected projection light.
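The complementary illumination can be pictured with two stripe masks; this 1-D slice is a hypothetical illustration of the property that the two spot areas cover the collection area exactly once between them.

```python
import numpy as np

width = 8                                        # pixels across four stripe slots
first_mask = (np.arange(width) // 2) % 2 == 0    # first light: stripes of width 2
second_mask = ~first_mask                        # second light: the gaps in between

print(np.all(first_mask | second_mask))   # True: together they cover everything
print(np.any(first_mask & second_mask))   # False: their spot areas never overlap
```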
  • the first three-dimensional feature 102 is illuminated by the first projection light 1110, that is, covered by the two-dimensional pattern spot of the first projection light 1110, and the second three-dimensional feature 103 is illuminated by the second projection light 1120, that is, covered by the two-dimensional pattern spot of the second projection light 1120.
  • the light spot covering the first three-dimensional feature 102 will change in shape, light intensity distribution, etc., so that the information of the first three-dimensional feature 102 is recorded in the reflected light.
  • the information of the second three-dimensional feature 103 is recorded in the corresponding light spot of the second projection light 1120 after reflection.
  • the target workpiece reflects not only the two-dimensional pattern information 1100 of the projection light but also the high-reflection information 104.
  • the final first image 1210 and second image 1220 therefore both include the superposition of the two-dimensional pattern information 1100 and the high-reflection information 104.
  • the shape and size of the first image 1210 and the second image 1220 in this embodiment are the same.
  • the first three-dimensional feature 102 is located in one of the high-reflection areas, and the second three-dimensional feature 103 is located in another high-reflection area.
  • in the first image 1210, the two-dimensional pattern information 1100 located in the area of the high-reflection information 104 carries the first three-dimensional feature information 1021; in the second image 1220, the two-dimensional pattern information located in the area of the other high-reflection information 104 carries the second three-dimensional feature information 1031.
  • if left untreated, the effective two-dimensional pattern information 1100 that can be used for algorithm analysis is overwhelmed by the superimposed high-reflection information 104, which causes the system to fail to analyze areas exhibiting high reflection, losing the three-dimensional feature information of those areas and leaving the three-dimensional imaging incomplete.
  • FIG. 5 shows the pixel value of each pixel in the pixel row 105 in the X direction of the acquisition area 10 in FIG. 3
  • FIG. 6 shows the pixel value of each pixel in the pixel row 105 in the X direction in the acquisition area 10 of FIG. 4
  • FIG. 3 and FIG. 4 both depict the same acquisition area 10, so FIG. 5 and FIG. 6 show the pixel value information of the same imaging pixel row of the acquisition area 10.
  • FIG. 7 shows a first characteristic image 1310 formed by subtracting the pixel value of the corresponding pixel of the second image 1220 from the pixel value of each pixel of the first image 1210 in the frame and retaining the part greater than 0; FIG. 8 shows a second characteristic image 1320 formed by subtracting the pixel value of the corresponding pixel of the first image 1210 from the pixel value of each pixel of the second image 1220 and retaining the part greater than 0.
  • the corresponding pixel point can be understood as the smallest computable area of the same position in the first image 1210 and the second image 1220.
  • the obtained first characteristic image 1310 will only retain the two-dimensional pattern information 1100 and the first three-dimensional characteristic information 1021 of the first projection light 1110 reflected by the target workpiece; the obtained second characteristic image 1320 will only retain the two-dimensional pattern information 1100 and the second three-dimensional characteristic information 1031 of the second projection light 1120 reflected by the target workpiece.
  • since the first characteristic image 1310 and the second characteristic image 1320 carry information about complementary regions of the same acquisition area 10 (referring to FIG. 9), in some embodiments, for example in this embodiment, the pixel values of the pixels of the first characteristic image 1310 and the second characteristic image 1320 are added to obtain the three-dimensional image 1400 of the workpiece in the acquisition area 10 for the frame; the pixel value distribution of the frame's three-dimensional image 1400 is used to calculate the three-dimensional feature information of the workpiece in the acquisition area 10, from which the three-dimensional image 1400 of the workpiece in the collection area 10 is obtained.
  • the method of reconstructing the three-dimensional image 1400 is not limited to adding the first feature image 1310 and the second feature image 1320; the three-dimensional feature information in the first feature image 1310 and the second feature image 1320 can also be used directly to calculate the three-dimensional features in each complementary area independently.
  • the method of passing from the first feature image 1310 and the second feature image 1320 of a frame to the three-dimensional image 1400 of that frame may be: processing the first characteristic image 1310 and the second characteristic image 1320 of the same frame (the same acquisition area) separately.
  • image processing methods include, but are not limited to, the cross-correlation method and the least squares method; finally a dense 3D point cloud of the corresponding collection area of the target workpiece for the frame is obtained, and the depth information of the collection area can be read from this 3D point cloud.
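Because the two characteristic images cover complementary halves of the acquisition area, one simple recombination, as mentioned above, is a pixel-wise sum; the tiny arrays here are illustrative stand-ins for real feature images.

```python
import numpy as np

first_feature = np.array([[106, 0], [0, 40]])    # pattern over one half
second_feature = np.array([[0, 100], [30, 0]])   # pattern over the other half

# A pixel-wise sum restores the deformed pattern over the whole area in a
# single frame, which can then be analyzed (e.g. by cross-correlation or
# least squares) to produce the dense 3-D point cloud of the frame.
combined = first_feature + second_feature
print(combined)  # every pixel now carries exactly one projection's signal
```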
  • the processing method based on three-dimensional imaging further includes step S131: obtaining the three-dimensional images 1400 of two adjacent frames, each frame's three-dimensional image 1400 being obtained by the corresponding method above from the first image and the second image collected within a preset period of time; performing feature matching on the three-dimensional images 1400 of the two adjacent frames to obtain a matching result; and splicing the three-dimensional images 1400 of the two adjacent frames according to the matching result to obtain a continuous three-dimensional image.
  • the previous frame's three-dimensional image 1400 is obtained from the first image and the second image acquired within one preset period of time, and the next adjacent frame's three-dimensional image 1400 is obtained from the first image and the second image acquired within the adjacent preset period of time.
  • the feature matching in step S131 may be: performing Iterative Closest Point (ICP) processing on the three-dimensional point cloud information in the three-dimensional images at two adjacent scanning moments, so as to match the three-dimensional point clouds of the two adjacent frames and obtain the matching result, and then splicing the collection areas 10 of the adjacent frames according to the matching result to reconstruct the continuous three-dimensional image corresponding to the scanned area.
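ICP alternates nearest-neighbor matching with a least-squares rigid alignment. The alignment step can be sketched as below (an SVD-based solution, shown under the simplifying assumption that correspondences are already known; an illustration, not the patent's implementation):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst,
    given point-to-point correspondences (the core step inside ICP)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy overlap between two adjacent frames: frame_b is frame_a shifted by
# (1, 0, 0), as if the scanner advanced along x between the two frames.
frame_a = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
frame_b = frame_a + np.array([1., 0., 0.])
R, t = best_rigid_transform(frame_a, frame_b)
aligned = frame_a @ R.T + t                    # frame_a in frame_b's pose
print(np.allclose(aligned, frame_b))           # True
```

A full ICP loop would re-estimate correspondences (for example by nearest neighbors) and repeat this step until the alignment error stops decreasing; the resulting transform is what allows the adjacent frames to be spliced into one continuous point cloud.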
  • the to-be-processed allowance of the acquisition area is obtained according to the three-dimensional image of the acquisition area and the preset three-dimensional information.
  • during continuous scanning the system obtains the continuous three-dimensional image corresponding to the continuous acquisition area by the above method; the continuous three-dimensional image is obtained by splicing two or more adjacent frames of three-dimensional images 1400.
  • the continuous acquisition area can be understood as a splicing of acquisition areas corresponding to multiple adjacent frames, and a continuous three-dimensional image is an image corresponding to the continuous acquisition area.
  • step S140 is to obtain the to-be-processed allowance of the continuous acquisition area according to the spliced continuous three-dimensional image and the preset three-dimensional information of the continuous acquisition area.
  • the three-dimensional image of the acquisition area contains the three-dimensional feature data in the area.
  • the spatial position of each point in the area can be expressed as a set of coordinate values (x1, y1, z1); the preset three-dimensional information of the acquisition area contains the theoretical model data of the workpiece in the area, and the three-dimensional information of each point in the area can be expressed as a set of coordinate values (x0, y0, z0).
  • by comparing the two, their difference can be obtained: the region where they do not coincide in space can be expressed as a set of coordinate values (x', y', z'), obtained by subtracting the set (x0, y0, z0) from the set (x1, y1, z1); the spatial information embodied in this coordinate set yields the allowance to be processed.
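On a common (x, y) grid the subtraction reduces to comparing heights; the depth values below are made up for illustration.

```python
import numpy as np

# z_measured: heights from the reconstructed 3-D image of the area;
# z_model: heights of the theoretical model over the same grid points.
z_measured = np.array([[5.2, 5.0], [5.4, 5.1]])
z_model    = np.array([[5.0, 5.0], [5.0, 5.0]])

# The allowance to be processed is the material standing above the
# theoretical surface; points at or below the model contribute nothing.
allowance = np.clip(z_measured - z_model, 0.0, None)
print(allowance)        # roughly [[0.2 0.] [0.4 0.1]]
print(allowance.max())  # the deepest cut the tool head must make
```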
  • the region used for comparing the three-dimensional image with the theoretical model can be freely selected; if the three-dimensional shapes of a region have more than 90% similarity, the two can be considered to correspond to the same region on the workpiece surface, so that they can be compared to obtain the machining allowance.
  • a similarity of more than 90% can be understood as follows: after suitable rotation and translation operations, more than 90% of the regions of the two three-dimensional shapes (such as three-dimensional curved surfaces) can be completely overlapped.
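The "more than 90% similarity" criterion can be read as an overlap fraction after alignment; the surfaces below are invented 1-D height profiles, with the rotation and translation assumed to have been applied already.

```python
import numpy as np

surface_a = np.linspace(5.0, 5.5, 20)   # measured profile (20 samples)
surface_b = surface_a.copy()            # candidate region of the model
surface_b[0] += 0.5                     # one deviating sample out of 20

# Fraction of samples whose heights coincide within a small tolerance.
overlap = np.mean(np.abs(surface_a - surface_b) < 1e-6)
print(overlap)          # 0.95
print(overlap > 0.9)    # True: treat the two as the same workpiece region
```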
  • step S150 is also required to obtain the coordinate information of the collection area and of the machining tool head. It should be noted that step S150 does not have to be performed after step S140; step S150 can be performed at any point before step S160. In some embodiments, if the spatial coordinate information of the collection area has already been obtained while obtaining the machining allowance in step S140, step S150 may only need to obtain the coordinate information of the machining tool head.
  • step S160 can then be performed to obtain the machining path of the machining tool head from the above information; following the spatial coordinates of this path, the machining tool head removes the machining allowance at the collection area, yielding a workpiece that meets the expected morphology requirements.
  • some embodiments of the present application also provide a processing device 30 based on three-dimensional imaging (hereinafter referred to as processing device 30 ).
  • the processing device 30 includes an acquisition module 31, a determination module 32, and a processing module 33.
  • the functions of the acquiring module 31, the determining module 32, and the processing module 33 can be combined with the solutions of FIGS. 3 to 9 and the foregoing embodiments.
  • the obtaining module 31 is configured to obtain a first image and a second image collected within a preset time, where the first image and the second image are images collected after projection light with two-dimensional pattern information is projected to the collection area; both the first image and the second image include the two-dimensional pattern information; the area where the maximum pixel values of the two-dimensional pattern information in the first image are located is the first area, the area where the maximum pixel values of the two-dimensional pattern information in the second image are located is the second area, and the first area and the second area do not overlap each other; the module also obtains the coordinate information of the collection area and the coordinate information of the machining tool head;
  • the determining module 32 is configured to determine the first characteristic image according to the difference between the pixel value of each pixel of the first image and the pixel value of the corresponding pixel of the second image, to determine the second characteristic image according to the difference between the pixel value of each pixel of the second image and the pixel value of the corresponding pixel of the first image, and to determine the three-dimensional image of the collection area based on the first characteristic image and the second characteristic image;
  • the processing module 33 is used to obtain the to-be-processed allowance of the collection area according to the three-dimensional image and the preset three-dimensional information of the collection area, and to determine the position of the processing tool head according to the coordinate information of the collection area, the coordinate information of the processing tool head, and the to-be-processed allowance Processing path.
  • the way in which the acquiring module 31 acquires the images may be: the exposure time of each contrast image in the same frame is controlled within 100 ms, specifically 30 ms, 50 ms, 65 ms, 80 ms or 100 ms; the interval between contrast images is less than or equal to 50 ms, specifically 1 ms, 5 ms, 15 ms, 25 ms, 30 ms, 40 ms or 50 ms; and the time from the start of the first contrast image's exposure to the end of the last contrast image's exposure in a frame is within 200 ms.
  • the determining module 32 subtracts the pixel value of the corresponding pixel of the second image 1220 from the pixel value of each pixel of the first image 1210 and forms the first characteristic image 1310 from the portion where the pixel value difference is greater than or equal to the first threshold; likewise it subtracts the pixel value of the corresponding pixel of the first image 1210 from the pixel value of each pixel of the second image 1220 and forms the second characteristic image 1320 from the portion where the pixel value difference is greater than or equal to the second threshold. For the specific choice of the first threshold and the second threshold, reference may be made to the description of the above embodiments, which is not repeated here.
  • the processing module 33 obtains a three-dimensional image 1400 by processing the contrast image.
  • algorithms such as the cross-correlation matching method and the least squares method may be used to process the first feature image 1310 and the second feature image 1320 of the same frame respectively, so as to obtain the three-dimensional image 1400 of the acquisition area corresponding to the frame.
  • the processing module 33 is also used to obtain the three-dimensional images 1400 of two adjacent frames, perform feature matching on them to obtain the matching result, and splice the two adjacent frames' three-dimensional images 1400 according to the matching result to obtain a continuous three-dimensional image, so that the processing device 30 can achieve continuous scanning of the workpiece.
  • the processing device 30 further includes a projection module, which can project projection light with a two-dimensional pattern (such as the first projection light 1110 and the second projection light 1120) to the workpiece at least twice within a preset time; within the preset time, the maximum light intensity distribution areas of the two-dimensional patterns formed on the workpiece by the individual projection lights do not overlap with each other.
  • the way that the projection module projects the projection light to the workpiece is to project each projection light at intervals.
  • for the manner in which the projection module projects light to the workpiece, the two-dimensional pattern information 1100, and so on, reference may be made to the above embodiments, and details are not repeated here.
  • the two-dimensional pattern of the projection light projected on the workpiece may include at least one of a stripe spot, a circular spot, and an irregular spot.
  • correspondingly, the projection pattern of the projection light includes at least one of fringe spot information, circular spot information and irregular spot information.
  • the projection module projects each projection light separately, for example, the projection of the second projection light 1120 is performed after the projection of the first projection light 1110 is completed.
  • the areas where the brightness maxima of the projection lights projected by the projection module within the preset time fall on the workpiece do not overlap with each other, so the first image and the second image formed from the projection light reflected within the preset time together carry a large amount of three-dimensional feature information of the acquisition area.
  • by performing difference processing on the contrast images, the processing module 33 can eliminate the high-reflection signal in them, preventing it from interfering with the reconstruction calculation of the three-dimensional features, while also enabling the final three-dimensional image to retain the three-dimensional feature information in the originally high-reflection areas; a clear and complete three-dimensional image of the collection area can thus be reconstructed, so that the processing tool head can more accurately and comprehensively remove the margin to be processed in the corresponding collection area, achieving accurate processing of the workpiece and yielding workpieces that meet the expected requirements.
  • a processing device based on three-dimensional imaging (hereinafter referred to as the processing device) is also provided, and the processing device includes:
  • a projector, for projecting projection light with two-dimensional pattern information to the workpiece;
  • a receiver, for acquiring a first image and a second image collected within a preset time, where the first image and the second image are images collected after the projection light with two-dimensional pattern information is projected to the collection area; both images include the two-dimensional pattern information; the area where the maximum pixel values of the two-dimensional pattern information in the first image are located is the first area, the area where the maximum pixel values of the two-dimensional pattern information in the second image are located is the second area, and the first area and the second area do not overlap each other;
  • a processor, configured to execute a computer program on the memory so as to: determine the first characteristic image according to the difference between the pixel value of each pixel of the first image and the pixel value of the corresponding pixel of the second image, and determine the second characteristic image according to the difference between the pixel value of each pixel of the second image and the pixel value of the corresponding pixel of the first image; determine the three-dimensional image of the collection area based on the first characteristic image and the second characteristic image; obtain the machining allowance of the collection area from the three-dimensional image and the preset three-dimensional information of the collection area; and obtain the coordinate information of the collection area and the coordinate information of the machining tool head, and determine the machining path of the machining tool head according to the coordinate information of the collection area, the coordinate information of the machining tool head and the machining allowance; and
  • a machining unit including the machining tool head, which can drive the machining tool head to move along the machining path so as to remove the margin to be processed in the collection area.
  • the processing device 40 includes a projector 410, a receiver 420, a processor 430, and a machining unit 440.
  • the projector 410 may be any type of light source capable of projecting the desired structured two-dimensional pattern information; the receiver 420 is a camera; the processor 430 can be used to control the projection period, duration, and so on of the projector 410, and can analyze and process the first image and the second image of the acquisition area 10 obtained by the receiver 420, thereby acquiring the three-dimensional image corresponding to the acquisition area 10, which contains the depth information 170 of the area (indicated by the solid line); this is then combined with the preset three-dimensional information 180 of the acquisition area 10 (indicated by the dashed line) to obtain the allowance to be processed for the acquisition area.
  • the processor 430 determines the machining path of the machining tool head according to the coordinate information of the collection area, the coordinate information of the machining tool head and the allowance to be processed, and controls the machining tool head 442 in the machining unit 440 to move along the machining path so as to cut off the superfluous part of the collection area (the part between the solid line 170 and the dashed line 180 in the figure), thereby obtaining a workpiece with the expected morphology (shown by the dashed line 180 in the figure).
  • with the processing equipment, not only can a large amount of feature information on the workpiece surface be obtained, but the high-reflection signals on the workpiece surface can also be eliminated, preventing them from interfering with the reconstruction calculation of the three-dimensional features; the final three-dimensional image retains the three-dimensional feature information in the originally high-reflection areas, so that a clear and complete three-dimensional model of the workpiece can be reconstructed through the above three-dimensional-imaging-based processing method.
  • for the manner in which the projector projects the projection light and the manner in which the processor processes the images, reference may be made to the corresponding solutions in the above embodiments, which shall be regarded as within the scope of the description of this application.
  • the projector may be a laser projection device equipped with an optical shaping element.
  • the receiver may be an image sensor, and the image sensor may be a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor).
  • the projector, receiver, memory and processor are connected through the system bus.
  • the definition of the projector, the receiver, and the processor can refer to the above definition of the processing method based on three-dimensional imaging.
  • the projector can project projection light with two-dimensional pattern information to the object at least twice within a preset time; within the preset time, the areas where the brightness maxima formed on the object by the individual projection lights are located do not overlap with each other. In some embodiments, the projector projects each projection light separately. In some embodiments, the two-dimensional pattern information projected by the projector includes at least one of fringe spot information, circular spot information, and irregular spot information.
  • the first image and the second image of the same frame are both obtained within a preset time of less than or equal to 200 ms.
  • the projection light corresponding to the first image and the second image projected by the projector is also projected within 200 ms.
  • the processor is used to: subtract the pixel value of the corresponding pixel of the second image from the pixel value of each pixel of the first image and form a first characteristic image from the part whose pixel value difference is greater than or equal to the first threshold; and subtract the pixel value of the corresponding pixel of the first image from the pixel value of each pixel of the second image and form a second characteristic image from the part whose pixel value difference is greater than or equal to the second threshold.
  • the processor of some embodiments may also obtain three-dimensional images of two adjacent frames of the object, perform feature matching on them to obtain the matching result, splice the three-dimensional images of the two adjacent frames according to the matching result to obtain a continuous three-dimensional image, and use the continuous three-dimensional image and the preset three-dimensional information of the acquisition area to obtain the allowance to be processed for the acquisition area corresponding to the continuous three-dimensional image.
  • a computer program is stored in the memory, and the processor can implement the steps in the foregoing method embodiments when the processor executes the computer program.
  • FIG. 12 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the processing equipment to which the solution is applied; a specific processing equipment may include more or fewer parts than shown in the figure, combine some parts, or have a different arrangement of parts.
  • a computer-readable storage medium is provided, and a computer program is stored, and when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical storage.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM may be in various forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), etc.
  • in the above solutions, the first image carrying two-dimensional information of the workpiece surface and the three-dimensional information image carrying three-dimensional information of the workpiece surface are used to determine a three-dimensional model of the workpiece surface, which can effectively improve the accuracy of each frame of three-dimensional imaging and is conducive to accurate processing of the workpiece.
  • the above processing method, processing device, processing equipment and storage medium based on three-dimensional imaging can also be applied to continuous three-dimensional scanning; by combining the above splicing processing of the first image and the three-dimensional information image, continuous scanning of the photographed workpiece can yield a stable and accurate continuous three-dimensional model, thereby facilitating accurate processing of continuous areas of the workpiece surface.
  • the terms "first" and "second" are only used for descriptive purposes and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features; therefore, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality of" means at least two, such as two, three, etc., unless otherwise specifically defined.
  • unless otherwise clearly specified and limited, the terms "installed", "connected", "coupled", "fixed" and the like should be understood in a broad sense: for example, a connection may be fixed, detachable or integral; it may be mechanical or electrical; it may be direct, indirect through an intermediate medium, internal to two components, or an interaction between two components, unless otherwise explicitly limited.
  • A first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are in indirect contact through an intermediate medium.
  • A first feature being "above", "over", or "on top of" a second feature may mean that the first feature is directly or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature.
  • A first feature being "below", "under", or "beneath" a second feature may mean that the first feature is directly or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Mechanical Engineering (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A machining method based on three-dimensional imaging, comprising the steps of: acquiring a first image (1210) and a second image (1220) captured within a preset time, the first image (1210) and the second image (1220) containing two-dimensional pattern information (1100) of projected light, wherein the regions where the pixel-value maxima of the two-dimensional pattern information (1100) are located in the first image (1210) and the second image (1220) do not overlap each other; determining a first feature image (1310) and a second feature image (1320) from the differences between the pixel values of the pixels of the first image (1210) and the pixel values of the corresponding pixels of the second image (1220), and determining a three-dimensional image (1400); obtaining the machining allowance from the three-dimensional image (1400) and preset three-dimensional information; and determining the machining path of a machining tool bit (442) from the coordinate information of the acquisition region (10) and the tool bit (442) together with the machining allowance.

Description

基于三维成像的加工方法、装置、设备及存储介质 技术领域
本发明涉及数控加工领域,特别是涉及一种基于三维成像的加工方法、装置、设备及存储介质。
背景技术
以结构光三维成像和共聚焦三维成像技术为代表的非接触式三维测量技术具有扫描速度快、非接触以及点云密集等优点,且被应用在多个领域中。例如在工业制造的过程中,需要对工件进行检测以判断工件表面是否存在凹陷、凸起等非预期的三维特征,而一般地,对工件的检测常常会通过结构光投影并摄像的方式以采集工件表面的信息,进而对所采集的图像进行分析处理以重建出工件表面的三维模型。对此,随着利用三维成像以对目标工件进行三维扫描检测的逐渐普及,如何获得更为清晰且完整的工件的三维模型,以对工件实现准确的加工,已成为了目前业界所关注的重点之一。
发明内容
根据本申请的各种实施例,提供一种基于三维成像的加工方法、装置、设备及存储介质。
一种基于三维成像的加工方法,包括如下步骤:
获取在预设时间内采集的第一图像和第二图像,所述第一图像和所述第二图像为向采集区域投射具有二维图案信息的投影光后采集到的图像;所述第一图像和所述第二图像均包括所述二维图案信息,所述第一图像中所述二维图案信息的像素值的极大值所在区域为第一区域,所述第二图像中所述二维图案信息的像素值的极大值所在区域为第二区域,所述第一区域与所述第二区域互不重合;
根据所述第一图像各像素点的像素值与所述第二图像相应像素点的像素值的差值确定第一特征图像,以及根据所述第二图像各像素点的像素值与所述第一图像相应像素点的像素值的差值确定第二特征图像;
根据所述第一特征图像和所述第二特征图像确定所述采集区域的三维图像;
根据所述三维图像和所述采集区域的预设三维信息以获得所述采集区域的待加工余量;
获取所述采集区域的坐标信息及加工刀头的坐标信息;及
根据所述采集区域的坐标信息、所述加工刀头的坐标信息及所述待加工余量确定所述加工刀头的加工路径。
一种基于三维成像的加工装置,包括:
获取模块,用于获取在预设时间内采集的第一图像和第二图像,所述第一图像和所述第二图像为向采集区域投射具有二维图案信息的投影光后采集到的图像;所述第一图像和所述第二图像均包括所述二维图案信息,所述第一图像中所述二维图案信息的像素值的极大值所在区域为第一区域,所述第二图像中所述二维图案信息的像素值的极大值所在区域为第二区域,所述第一区域与所述第二区域互不重合,获取所述采集区域的坐标信息及加工刀头的坐标信息;
确定模块,用于根据所述第一图像各像素点的像素值与所述第二图像相应像素点的像素值的差值确定第一特征图像,根据所述第二图像各像素点的像素值与所述第一图像相应像素点的像素值的差值确定第二特征图像,以及根据所述第一特征图像和所述第二特征图像确定所述采集区域的三维图像;及
处理模块,用于根据所述三维图像和所述采集区域的预设三维信息以获得所述采集区域的待加工余量,根据所述采集区域的坐标信息、所述加工刀头的坐标信息及所述待加工余量确定所述加工刀头的加工路径。
一种基于三维成像的加工设备,包括:
投射器,向工件投射具有二维图案信息的投影光;
存储器,存储有计算机程序;
接收器,获取在预设时间内采集的第一图像和第二图像,所述第一图像和第二图像为向采集区域投射具有二维图案信息的投影光后采集到的图像;所述第一图像和所述第二图像均包括所述二维图案信息,所述第一图像中所述二维图案信息的像素值的极大值所在区域为第一区域,所述第二图像中所述二维图案信息的像素值的极大值所在区域为第二区域,所述第一区域与所述第二区域互不重合;
处理器,被配置为执行所述存储器上的计算机程序,以实现:根据所述第一图像各像素点的像素值与所述第二图像相应像素点的像素值的差值确定第一特征图像,以及根据所述第二图像各像素点的像素值与所述第一图像相应像素点的像素值的差值确定第二特征图像;及根据所述第一特征图像和所述第二特征图像确定所述采集区域的三维图像;根据所述三维图像和所述采集区域的预设三维信息以获得所述采集区域的待加工余量;获取所述采集区域的坐标信息及加工刀头的坐标信息,根据所述采集区域的坐标信息、所述加工刀头的坐标信息及所述待加工余量确定所述加工刀头的加工路径;及
加工器,包括加工刀头,所述加工器能够驱使所述加工刀头沿所述加工路径移动,以消除所述采集区域的待加工余量。
一种存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现上述任一项所述的基于三维成像的加工方法的步骤。
本发明的一个或多个实施例的细节在下面的附图和描述中提出。本发明的其它特征、目的和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更好地描述和说明这里公开的那些发明的实施例和/或示例,可以参考一幅或多幅附图。用于描述附图的附加细节或示例不应当被认为是对所公开的发明、目前描述的实施例和/或示例以及目前理解的这些发明的最佳模式中的任何一者的范围的限制。
图1为本申请一实施例提供的基于三维成像的加工方法的方法流程图;
图2为本申请另一实施例提供的基于三维成像的加工方法的方法流程图;
图3为本申请一实施例提供的投影光投射至工件时的示意图;
图4为图3实施例中对应相同采集区域的另一次投影光投射至工件时的示意图;
图5为图3所对应的第一图像中局部区域的幅值强度示意图;
图6为图4所对应的第二图像中局部区域的幅值强度示意图;
图7为图5减去图6且保留大于0的部分所形成的第一特征图像的示意图;
图8为图6减去图5且保留大于0的部分所形成的第二特征图像的示意图;
图9为图7加上图8所形成的工件于该帧的三维图像;
图10为本申请一实施例提供的连续扫描下的基于三维成像的加工方法的方法流程图;
图11为本申请一实施例提供的基于三维成像的加工装置的示意图;
图12为本申请一实施例提供的基于三维成像的加工设备的内部结构图;
图13为本申请一实施例提供的基于三维成像的加工设备的结构示意图。
具体实施方式
为了便于理解本发明,下面将参照相关附图对本发明进行更全面的描述。附图中给出了本发明的较佳实施方式。但是,本发明可以以许多不同的形式来实现,并不限于本文所描述的实施方式。相反地,提供这些实施方式的目的是使对本发明的公开内容理解的更加透彻全面。
需要说明的是,当元件被称为“固定于”另一个元件,它可以直接在另一个元件上或者也可以存在居中的元件。当一个元件被认为是“连接”另一个元件,它可以是直接连接到另一个元件或者可能同时存在居中元件。本文所使用的术语“内”、“外”、“左”、“右”以及类似的表述只是为了说明的目的,并不表示是唯一的实施方式。
随着利用三维成像以对目标工件进行三维扫描检测的逐渐普及,如何获得更为清晰且完整的工件的三维模型,以对工件实现准确的加工,已成为了目前业界所关注的重点之一。为此,本申请的一些 实施例提供一种基于三维成像的加工方法,通过获得更为清晰且完整的三维模型以实现对工件的准确加工。
本申请的一些实施例提供了一种基于三维成像的加工方法,该方法能够在极短的预设时间内获取相同采集区域的至少两个对比图像,其中每个对比图像分别对应一次具有二维图案信息的投影光的接收成像,其中不同对比图像中的二维图案信息的像素值的极大值所在区域互不重合,随后对相同采集区域的不同对比图像进行差值处理以得到相应的特征图像,从而能够有效地消除对比图像中可能存在的工件的高反射信号,并在特征图像中保留预期的有效信息,最后通过处理消除了高反射信号后的各特征图像以重建出相应采集区域的三维图像。
在一些实施例中,由于设备抖动、工件抖动等原因,原预期对应相同采集区域的第一图像和第二图像会存在少许不重合的采集区域,但通过控制预设时间即可使第一图像和第二图像的获取在极短时间内完成,从而减少不重合的采集区域,使得第一图像和第二图像所对应的采集区域几乎相同,使得这种差异在三维特征的重建计算中所引起的偏差可以忽略。为了方便描述,当第一图像和第二图像所对应的采集区域几乎相同时,也可称两者所对应的采集区域相同。
具体参考图1,在一些实施例中,基于三维成像的加工方法包括:
步骤S110:获取在预设时间内采集的第一图像和第二图像,第一图像和第二图像为向采集区域投射具有二维图案信息的投影光后采集到的图像;第一图像和第二图像均包括二维图案信息,第一图像中二维图案信息的像素值的极大值所在区域为第一区域,第二图像中二维图案信息的像素值的极大值所在区域为第二区域,第一区域与第二区域互不重合。
步骤S120:根据第一图像各像素点的像素值与第二图像相应像素点的像素值的差值确定第一特征图像,以及根据第二图像各像素点的像素值与第一图像相应像素点的像素值的差值确定第二特征图像。
步骤S130:根据第一特征图像和第二特征图像确定该采集区域的三维图像。
步骤S140:根据采集区域的三维图像的三维信息和预设的三维信息以获得该采集区域的待加工余量。
步骤S150:获取采集区域的坐标信息及加工刀头的坐标信息。
步骤S160:根据采集区域的坐标信息、加工刀头的坐标信息及待加工余量确定加工刀头的加工路径。
在以下描述中,对被摄工件于一段预设时间内所获得的图像可视为一帧,每一帧均包括第一图像和第二图像,每一帧的第一图像和第二图像对应工件表面的相同采集区域。另外,由于第一图像与第二图像之间需要进行对比分析,因此第一图像和第二图像也可统称为对比图像。
对于步骤S110,在一些实施例中,可在一段较短的间隔时间内分开获取第一图像和第二图像,以此避免成像时相互干扰。但相应地,也应避免在同一帧中获取第一图像与第二图像的间隔时间过长,防止由于同一帧的对比图像的获取间隔时间过长,而导致无法确保同一帧的第一图像和第二图像是目标工件同一区域的成像。另外,由于需要对同一帧的各对比图像之间的相应像素点进行差值计算处理,因此为了便于对各对比图像进行差值处理,在一些实施例中应使各对比图像的大小一致或趋于一致,例如使第一图像的形状与第二图像的形状相同,且使第一图像的尺寸与第二图像的尺寸相等(例如长度相等且宽度相等的两个矩形图像),以此确保同一帧的各对比图像中的相应像素点坐标相同。
当采集区域不存在三维特征及环境光反射的情况时,对比图像中将仅存在反射后的投影光的二维图案,其中该二维图案在图像中由各像素点组成,二维图案的形状由相应像素点的分布反映,二维图案的亮度由相应像素点的像素值大小反映,图像中的二维图案信息包括组成二维图案的各像素点的分布信息及像素值大小信息。即,在忽略高反信号及工件表面三维特征信号后所获得的第一图像和第二图像中,具有二维图案信息的像素点的像素值即可称为二维图案信息的像素值。
二维图案信息的像素值的极大值所在区域可理解为:在图像平面的至少一个方向上,在该二维图案信息的各像素点中,一个像素点的像素值大于该方向相邻两侧的像素点的像素值,则该像素值所在区域可称为一个二维图案信息中的像素值的极大值所在区域。
例如,在一些实施例中,当第一图像中的二维图案由多个亮度呈高斯分布的圆形光斑组成时,每个圆形光斑的中心的像素值大于该光斑于各方向的其他区域的像素值,因此该像素值所在区域为该光斑的像素值的极大值所在区域,每个光斑的像素值的极大值所在区域组成第一图像中的第一区域。对于第二图像中的第二区域类似,此处不加以赘述。
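As an illustrative aside (not part of the patent text): in one dimension, the "region where the pixel-value maximum is located" can be pictured as the pixels whose value exceeds both neighbours. A minimal Python sketch, with a made-up intensity row crossing two Gaussian-like spots (the function name and data are our own):

```python
def maxima_regions(row):
    """Indices whose pixel value is strictly greater than both neighbours:
    the 1-D analogue of the 'region where the pixel-value maximum of the
    two-dimensional pattern is located'."""
    return [i for i in range(1, len(row) - 1)
            if row[i] > row[i - 1] and row[i] > row[i + 1]]

# A made-up row crossing two Gaussian-like spots; each spot centre
# (indices 3 and 9) is a pixel-value maximum.
row = [0, 2, 5, 9, 5, 2, 0, 1, 4, 8, 4, 1, 0]
```

The union of such maxima over every spot would form the "first region" of the first image described above.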
在上述基于三维成像的加工方法中,可对工件投射具有特定二维图案信息的投影光,并对从工件表面反射回来的投影光进行成像,以此获取第一图像和第二图像,通过将同一帧的第一图像和第二图像控制在预设时间内获取,从而可视为同一帧的第一图像和第二图像均是对工件同一区域范围的成像。对工件投射的投影光可以包括条形光斑、圆形光斑、不规则光斑中的至少一种,投影光的投影图案即包括相应的条纹光斑信息、圆形光斑信息、不规则光斑信息中的至少一种。
另外,第一图像和第二图像均包括反射回来的二维图案信息,且由于在同一帧的第一图像和第二图像中,第一区域和第二区域相近但互不重合,因此两者中的二维图案信息的像素值的极大值区域所对应的工件表面的区域不同,从而使得同一帧的两个对比图像所获得的二维图案信息共同包含了工件表面的大量特征信息,其中工件的凹陷、凸起等特征会引起照射至其上并反射的投影光发生幅值及分布的变化,从而导致对比图像中的二维图案信息的幅值及分布也发生相应改变。
进一步地,对于步骤S120,将同一帧的第一图像与第二图像中各相应像素点的像素值做差值处理,从而得到同一帧的第一特征图像和第二特征图像。由于同一帧的第一图像和第二图像在预设时间内获取,两个对比图像所对应的工件表面的采集区域几乎重合,因此当采集区域存在高反射部分时,第一图像和第二图像不仅也存在相应的高亮区域(高反信号),且两个对比图像中由工件高反射现象产生的高反信号的位置几乎重合。此时通过差值处理,第一特征图像保留了第一图像中携带有工件三维特征信息的二维图案,且消去了第一图像中由于工件表面高反射现象而引起的高幅值信号。同理,第二特征图像保留了第二图像中携带有工件三维特征信息的二维图案,且消去了第二图像中由于工件表面高反射现象而引起的高幅值信号。即,同一帧的第一特征图像和第二特征图像分别携带了工件于同一采集区域中不同子区域的三维特征。随后,通过步骤S130,利用同一帧的第一特征图像和第二特征图像获得工件于该帧所对应的采集区域的三维图像,该三维图像即能反映出该采集区域的三维模型。
在上述步骤中,不仅可以获得工件表面的大量特征信息,且可以消除工件表面的高反射信号,避免高反射信号干扰三维点云的重建计算,另外也能使最终的三维图像能够保留原本位于高反射信号区域中的三维特征信息,从而通过上述步骤能够重建出采集区域的清晰且完整的三维图像,进而能够为后续的加工过程提供准确的对比信息。
在一些实施例中,对于步骤S110而言,当需要获取同一帧的第一图像和第二图像时,可以对目标工件投射两次投影光,每次投影光分开投射,即第一次投影光投射后再进行第二次投影光的投射,每次投射的投影光对应一个对比图像的接收。每次投影光照射至工件表面所呈现的二维图案形状、大小及强度相同或大致相同,且对应同一帧成像的每次投影光在工件上形成的二维图案的最大光强分布区域互不重合。同一帧中的各对比图像均可视为对工件同一区域范围的成像。因此在一些实施例中,在步骤S110之前还包括步骤:在预设时间内向工件依次投射两次具有二维图案的投影光,两次投影光在工件上形成的二维图案的最大光强分布区域互不重合。投影光所具有的二维图案信息包括但不限于矩形光斑、圆形光斑、椭圆光斑、非规则光斑等,且各光斑的强度可以相同,也可以不同。且在一些实施例中,投影光可以为可见光和红外光波段范围内的单色光。通过在预设时间内投射的两次投影光,以分别获取同一帧的第一图像和第二图像,同一帧的第一图像和第二图像在预设时间内获取,第一图像和第二图像均包括二维图案信息,且在同一帧的第一图像和第二图像中,第一图像中的第一区域与第二图像中的第二区域互不重合。
同一帧的第一图像和第二图像应在一预设时间内完成获取,以确保同一帧的第一图像和第二图像是针对目标工件同一区域的成像。当同一帧仅包括两个对比图像时,上述预设时间可理解为从第一个对比图像曝光开始至最后一个对比图像曝光结束时的间隔时间。在一些实施例中,预设时间满足0<t≤200ms。
在一些实施例中,对应同一帧成像所投射的各投影光的间隔时间小于等于50ms,具体可以为1ms、5ms、15ms、25ms、30ms、40ms或50ms。此时,对应同一帧成像的投影光的投射间隔能够被控制在极 短的时间内,从而可确保同一帧中的各对比图像均是对工件同一区域范围的成像。
进一步地,在一些实施例中,也可通过控制同一帧中的各对比图像的曝光时间以确保对比图像的获取时间不会过长。一些实施例中的对比图像的曝光时间控制在100ms以内,具体可以为30ms、50ms、65ms、80ms或100ms。以此能够使获得对比图像时的曝光时间控制在极短的范围内,避免曝光时间过长,从而进一步确保同一帧中的各对比图像均是对工件同一区域范围的成像。
在一些实施例中,在步骤S110之前还包括步骤:在预设时间内向工件依次投射两次具有二维图案的投影光,两次投影光在工件上形成的二维图案的最大光强分布区域互不重合,每次投影光的投射持续时间在100ms以内,间隔时间在50ms以内。
相应地,在这些实施例中,通过在预设时间内投射的两次投影光,以分别获取同一帧的第一图像和第二图像,第一图像和第二图像均包括二维图案信息,且在同一帧的第一图像和第二图像中,第一图像中的第一区域与第二图像中的第二区域互不重合,同一帧的第一图像和第二图像在预设时间内获取,获取同一帧的第一图像和第二图像的间隔时间满足0<t1≤50ms,第一图像和第二图像的曝光时间满足0<t2≤100ms,预设时间满足0<t≤200ms,以此能够将获取同一帧的第一图像和第二图像的总时长控制在极短的时间内,从而确保同一帧中的各对比图像均是对工件同一区域范围的成像。进一步地,在一些实施例中,t1≤1ms,t2≤30ms,t≤60ms。
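The timing constraints quoted above (inter-image gap t1 ≤ 50 ms, exposure t2 ≤ 100 ms, preset frame time t ≤ 200 ms) can be checked with a small helper. This sketch is ours, and it assumes the frame time is read as exposure + gap + exposure, i.e. from the start of the first exposure to the end of the second:

```python
def frame_timing_ok(t1_ms, t2_ms, preset_ms=200.0):
    """Check the capture timing of one frame against the constraints quoted
    in the text: gap 0 < t1 <= 50 ms, exposure 0 < t2 <= 100 ms, and the
    whole frame (exposure + gap + exposure) within the preset time."""
    total = 2 * t2_ms + t1_ms  # first exposure start .. second exposure end
    return (0 < t1_ms <= 50) and (0 < t2_ms <= 100) and (total <= preset_ms)
```

For instance, the tightened values mentioned in the text (t1 ≤ 1 ms, t2 ≤ 30 ms) satisfy these bounds comfortably.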
对于步骤S110,在一些实施例中,向工件投射的二维图案的形状、大小及强度相同或大致相同,不同之处在于,对应同一帧成像但不同对比图像的各投影光而言,各投影光投射至工件表面所照亮的区域不重合,例如在一些实施例中可以呈互补关系(可参考图3和图4),因此最终通过接收这些投影光所形成的各对比图像中的二维图案信息的像素值的极大值所在区域也互不重合,例如在一些实施例中可以呈相应的互补关系。
具体地,在一些实施例中,同一帧的各对比图像尺寸相同,且其中投影光具有的二维图案由多条相互平行且大小相同的长条形光斑组成,同一帧的相应各投影光投射至工件表面所照亮的区域互补(可参考图3和图4),即对应第一图像的投影光在工件上的照亮区域与对应第二图像的投影光在工件上的非照亮区域重合,对应第二图像的投影光在工件上的照亮区域与对应第一图像的投影光在工件上的非照亮区域重合。通过互补的照射关系,对应同一帧的各投影光的图案能够完整覆盖该帧所对应的工件表面的采集区域(可参考图3和图4)。且由于采集区域中的三维特征(凸起、凹陷等)在被投影光照射后会导致反射后的投影光发生变化(如导致反射后的投影光图案发生强度、分布的非均匀变化),从而使照亮区域中的三维信息能够被体现在反射后的投影光中,进而最终体现在相应的对比图像中。在接收经工件表面反射回的各投影光后,对比图像中将同样呈现反射后的投影光所具有的二维图案,而采集区域中的三维特征信息即包含于反射后的投影光中,且在同一帧的各对比图像中,不同对比图像中的二维图案的像素位置也将呈现互补的关系。因此,当同一帧的相应各投影光投射至工件表面所照亮的区域互补时,采集区域中的各种三维信息将被完整体现在对比图像中,例如部分三维信息体现在第一图像中,剩下的另一部分三维信息体现在第二图像中。
由以上可知,在同一帧的第一图像和第二图像中,第一图像中的二维图案信息的极大值区域(即二维图案信息中像素值的极大值所在区域)与第二图像中的二维图案信息的极大值区域互不重合,将有利于使工件表面的三维信息更完整地被体现在各对比图像中,从而提高三维成像的完整性。
可参考图2,由于第一图像中的二维图案信息的极大值区域与第二图像中的二维图案信息的极大值区域互不重合,例如当第一图像和第二图像在图像边界完全重合的情况下,图像中的第一区域与第二区域将互不重合,因此在一些实施例中,步骤S120具体可以为步骤S121:利用第一图像各像素点的像素值减去第二图像相应像素点的像素值,并利用像素值差大于或等于第一阈值的部分以形成第一特征图像;利用第二图像各像素点的像素值减去第一图像相应像素点的像素值,并利用像素值差大于或等于第二阈值的部分以形成第二特征图像。第一图像和第二图像中坐标相同的像素点可称为两个图像的相应像素点。此时可通过控制第一阈值和第二阈值的大小以完全消除第一特征图像中涉及第二区域的信号,以及消除第二特征图像中涉及第一区域的信号。
利用同一帧的第一图像各像素点的像素值减去第二图像相应像素点的像素值,仅保留大于第一阈 值的部分,且原本在第一图像中的高反信号与第二图像中的高反信号相减后将接近完全抵消,因此第一图像中的极大值区域的特征依然能够保留在第一特征图像中,而原本可能存在的高反信号特征将被消除。同理,利用同一帧的第二图像各像素点的像素值减去第一图像相应像素点的像素值,仅保留大于第二阈值的部分,且原本在第二图像中的高反信号与第一图像中的高反信号相减后将接近完全抵消,因此第二图像中的极大值区域的特征依然能够保留在第二特征图像中,而原本可能存在的高反信号特征将被消除。而工件的三维特征所对应的第一图像和第二图像中的极大值区域存在变形,因此当极大值区域被保留下来时,这些三维特征信息也随之保留在第一特征图像和第二特征图像中。通过步骤S120所得到的第一特征图像和第二特征图像,能够有效消除原本位于第一图像和第二图像中由于工件表面高反射现象而引起的高反信号,从而能够有效消除环境噪声,避免高反信号淹没有效信号(发生变形的二维图案信息),进而有效提升三维成像的完整性。另外,高反信号的消除也有利于提升通过步骤S130所获得的三维图像的精确性及完整性。
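Step S120 can be sketched as a pixelwise subtraction that keeps only positive differences at or above a threshold. The function below is a toy illustration (names and data ours, not the patent's): a highlight added identically to both images cancels in the difference, while the complementary stripe pattern survives.

```python
def feature_image(img_a, img_b, threshold=0):
    """Step-S120-style feature image: subtract img_b from img_a pixelwise
    and keep only positive differences at or above `threshold`."""
    h, w = len(img_a), len(img_a[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = img_a[y][x] - img_b[y][x]
            if d > 0 and d >= threshold:
                out[y][x] = d
    return out

# One-row toy images: complementary stripes of amplitude 10, plus an
# identical highlight of +50 on columns 2-3 of both images.
img1 = [[10, 0, 60, 50, 10, 0]]
img2 = [[0, 10, 50, 60, 0, 10]]
```

Here `feature_image(img1, img2)` retains only the stripes of the first projection, with the shared highlight removed; swapping the arguments gives the second feature image.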
第一阈值和第二阈值可统称为预设值,在一些实施例中,在获得第一特征图像和第二特征图像的步骤S121中,预设值大于或等于0,且小于第一图像或第二图像中二维图案信息的各像素点的像素值中的最大值。
例如在一些实施例中,在确定第一特征图像的步骤中,第一阈值大于或等于0,且小于第一图像中二维图案信息的各像素点的像素值中的最大值。在一些实施例中,在确定第二特征图像的步骤中,第二阈值大于或等于0,且小于第二图像中二维图案信息的各像素点的像素值中的最大值。
通过控制步骤S121中的预设值大小可以选择性地保留差值处理后的第一特征图像和第二特征图像中的三维特征信息的大小。特别地,在一些实施例中,当同一帧的第一图像中的高反信号的大小和分布均与第二图像中的相同时,该预设值可定为0,此时两个对比图像相减后能够完全去除高反信号,并较好地保留工件反射回来的携带三维特征信息的投影光信号。而在一些实施例中,同一帧的第一图像和第二图像中的高反信号的大小和分布存在细微差异时(如两个对比图像中的高反信号沿一个方向存在微小位移),两个对比图像的相减将无法很好地消除高反信号,使得第一特征图像和第二特征图像依然残留强度较小的高反信号,此时可通过提高预设值,使第一特征图像和第二特征图像仅保留大于预设值的部分,从而确保高反信号被完全消除。
需要注意的是,在一些情况下,在经工件表面的反射后,反射光的图案将发生形变,图案中的极大值信号会向原本图案中的极小值区域移动,即此时的三维特征信息同时包含于图案中的原始的极大值区域和极小值区域中。且由于第一图像和第二图像中的高反信号在相减后可能存在较小强度的残留,从而对上述分布至极小值区域的三维特征信息造成较大的干扰。为了更好地在第一特征图像和第二特征图像中保留这部分位于极小值区域的三维特征信息,避免高反信号对三维特征信息的分离造成干扰,在一些实施例中,可通过控制步骤S121中的预设值的大小以保留这部分三维特征信息。具体地,在一些实施例中,预设值大于或等于第一图像或第二图像中的二维图案信息中最大像素值的十分之一,从而有利于消除残留的高反信号;且预设值小于或等于第一图像或第二图像中的二维图案信息中最大像素值的五分之一,从而较好地避免位于极小值区域且强度较小的三维特征信息被消除。
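The threshold band suggested here, between one tenth and one fifth of the largest pattern pixel value, can be illustrated with a toy pair of rows in which the highlight is shifted by one pixel, leaving a small residue after subtraction. The helper names and data are our own reading of the passage:

```python
def threshold_band(img_a, img_b):
    """Band suggested in the text (our reading): between one tenth and one
    fifth of the largest pattern pixel value seen in either image."""
    peak = max(max(max(r) for r in img_a), max(max(r) for r in img_b))
    return peak / 10.0, peak / 5.0

def diff_keep(img_a, img_b, threshold):
    """Pixelwise difference, keeping only values >= threshold."""
    return [[(pa - pb) if pa - pb >= threshold else 0
             for pa, pb in zip(ra, rb)] for ra, rb in zip(img_a, img_b)]

# Stripes of amplitude 10; the highlight (~50) is shifted by one pixel
# between the two captures, so plain subtraction leaves a residue of 2.
row_a = [[10, 0, 50, 48, 10, 0]]
row_b = [[0, 10, 48, 50, 0, 10]]
```

With the lower bound of the band (peak/10 = 5), the residue of 2 is suppressed while the stripe signal of 10 is preserved; raising the threshold past peak/5 would start to erase weak features, which is the upper bound's rationale.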
可结合参考图3和图4,在一个实施例中,向工件的采集区域投射具有二维图案信息的投影光的步骤具体为:在60ms的预设时间内向工件的采集区域依次投射第一投影光1110和第二投影光1120,第一投影光1110和第二投影光1120的持续时长均在30ms以内,第一投影光1110与第二投影光1120的间隔时长在1ms以内。第一投影光1110和第二投影光1120具有相同的二维投影图案,其中二维投影图案由多条相互平行的矩形光斑组成,第一投影光1110中的各矩形光斑的形状、大小、间隔距离、强度均与第二投影光1120中各矩形光斑相同,且同一矩形光斑的各处强度为常数。
且对应同一帧成像的第一投影光1110和第二投影光1120而言,两次投影光投射至工件表面所照亮的区域互补,由于预设时间极短,两次投影光能够被视为对工件同一采集区域10的照射,只是两次投影光在该采集区域10的照亮区域互补,或者描述为两次投影光于工件上的光斑区域互补。因此,第一投影光1110和第二投影光1120的照亮区域能够完整覆盖采集区域10,其中一半面积的采集区域10能够被第一投影光1110照亮,而另一半面积的采集区域10能够被第二投影光1120照亮,从而该采集 区域10中的各三维特征信息能够被全面记录于反射后的投影光中。在该实施例中,一帧所对应的采集区域10存在两处三维特征,分别为第一三维特征102和第二三维特征103,且第一三维特征102被第一投影光1110照射,即被第一投影光1110的二维图案光斑覆盖,而第二三维特征103被第二投影光1120照射,即被第二投影光1120的二维图案光斑覆盖。照射至第一三维特征102的第一投影光1110被反射后,覆盖该第一三维特征102的光斑将发生形状、光强分布等变化,从而将第一三维特征102的信息记录于反射后的相应光斑中;同样地,第二三维特征103的信息被记录于反射后的第二投影光1120的相应光斑中。
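The complementary stripe illumination of Figs. 3 and 4 can be mimicked with two binary masks whose lit bands interleave: together they cover the whole acquisition region, and no pixel is lit by both. A minimal sketch (our own construction, not the patent's projector model):

```python
def stripe_pattern(width, height, stripe, lit_even=True):
    """Binary mask of parallel rectangular stripes of width `stripe`;
    either the even-numbered or the odd-numbered bands are lit. Two masks
    with opposite `lit_even` are complementary, as in Figs. 3 and 4."""
    want = 0 if lit_even else 1
    return [[1 if (x // stripe) % 2 == want else 0 for x in range(width)]
            for _ in range(height)]

p1 = stripe_pattern(8, 2, 2, lit_even=True)   # first projection
p2 = stripe_pattern(8, 2, 2, lit_even=False)  # second, complementary
```

Every pixel is lit by exactly one of the two patterns, which is the coverage property the text relies on.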
参考图5和图6,在一些情况下,由于目标工件的部分表面存在高反射现象,导致目标工件反射回的不仅仅是投影光的二维图案信息1100,同时还包括了高反信息104,从而导致最终的第一图像1210和第二图像1220均包括了二维图案信息1100和高反信息104的叠加。该实施例中的第一图像1210和第二图像1220的形状及尺寸相同。在该实施例中,根据第一图像1210和第二图像1220可知,该帧所对应的采集区域10存在两处明显的高反射区域,第一三维特征102处于其中一个高反射区域中,第二三维特征103处于其中的另一个高反射区域中。在第一图像1210中,位于高反信息104区域的二维图案信息1100携带了第一三维特征信息1021;在第二图像1220中,位于另一高反信息104区域的二维图案携带了第二三维特征信息1031。其中,原本能够用于算法分析的有效的二维图案信息1100被高反信息104叠加后,将导致系统对该存在高反现象的区域分析失效,从而丢失该区域的三维特征信息,导致三维成像的不完整。因此,在该实施例中,通过对同一帧中的第一图像1210和第二图像1220进行差值处理以消除高反信息104。需要注意的是,图5为图3的采集区域10于X方向的像素行105中各像素点的像素值大小,图6为图4的采集区域10于X方向的像素行105中各像素点的像素值大小,由于图3和图4所体现的均是同一采集区域10,因此图5和图6均是对该采集区域10的同一成像像素行的像素值信息。
具体地,图7为该帧下的第一图像1210各像素点的像素值减去第二图像1220相应像素点的像素值,且保留大于0的部分所形成的第一特征图像1310;图8为该帧下的第二图像1220各像素点的像素值减去第一图像1210相应像素点的像素值,且保留大于0的部分所形成的第二特征图像1320。需要注意的是,相应像素点可理解为第一图像1210和第二图像1220中的相同位置的最小可计算区域。通过差值处理,所得到第一特征图像1310将仅保留经目标工件反射后的第一投影光1110的二维图案信息1100及第一三维特征信息1021;所得到第二特征图像1320将仅保留经目标工件反射后的第二投影光1120的二维图案信息1100及第二三维特征信息1031。
由于第一特征图像1310和第二特征图像1320携带的是同一采集区域10中互补区域的信息,因此可参考图9,在一些实施例中,例如在该实施例中,将第一特征图像1310与第二特征图像1320各像素点的像素值相加以得到工件于该帧的采集区域10的三维图像1400,通过对该帧的三维图像1400各像素点的像素值分布以计算出采集区域10中的三维特征信息,进而得到工件于该采集区域10中的三维图像1400。当然,重建出三维图像1400的方法并不限于将第一特征图像1310和第二特征图像1320相加,在一些实施例中也可以直接通过第一特征图像1310和第二特征图像1320中的三维特征信息分别独立计算出各互补区域中的三维特征。具体地,在一些实施例中,由同一帧的第一特征图像1310和第二特征图像1320得到该帧的三维图像1400的方法可以为:对同一帧(同一采集区域)的第一特征图像1310和第二特征图像1320分别进行图像处理,图像处理方法包括但不限于互相关法、最小二乘法等,最终得到该帧下目标工件相应采集区域的密集三维点云,通过三维点云可反映该采集区域的深度信息。
参考图10,在一些实施例中,为了对目标工件进行连续移动的三维扫描,以获得扫描区域的三维特征信息,基于三维成像的加工方法还包括步骤S131:获取相邻两帧的三维图像1400,每一帧三维图像1400分别通过在一段预设时间内采集到的第一图像和第二图像通过以上相应方法得到,将相邻两帧的三维图像1400进行特征匹配处理,得到匹配结果;根据匹配结果将相邻两帧的三维图像1400进行拼接,得到拼接后的连续三维图像。其中,前一帧三维图像1400由一段预设时间内获得的第一图像和第二图像得到,相邻的后一帧三维图像1400由相邻的后一段预设时间内获得的第一图像和第二图像得到。
具体地,当连续三维图像通过三维点云体现时,步骤S131的特征匹配处理可以为:通过对相邻两个扫描时刻的三维图像中的三维点云信息进行最近点迭代(Iterative Closest Point,ICP)处理,以对相邻两帧的三维图像中的三维点云进行匹配处理,得到匹配结果,进而根据匹配结果对相邻帧的采集区域10进行拼接,从而重建出对应扫描区域的连续三维图像。
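The ICP matching described here is normally supplied by a point-cloud library; purely for illustration, the following is a self-contained 2-D toy version (brute-force nearest neighbours plus a closed-form 2-D rigid fit). It is our sketch of the ICP idea, not the implementation used in the patent:

```python
import math

def best_rigid_2d(src, dst):
    """Closed-form 2-D rigid transform (rotation theta + translation t)
    minimising the squared distance between paired points (2-D Kabsch)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(q[0] for q in dst) / n
    cdy = sum(q[1] for q in dst) / n
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy          # centred source point
        bx, by = u - cdx, v - cdy          # centred destination point
        sxx += ax * bx + ay * by
        sxy += ax * by - ay * bx
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

def icp_2d(src, dst, iters=10):
    """Toy iterative closest point: brute-force nearest neighbour in `dst`
    for every point of `src`, fit the rigid transform, apply, repeat."""
    cur = list(src)
    for _ in range(iters):
        pairs = []
        for p in cur:
            q = min(dst, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
            pairs.append(q)
        theta, tx, ty = best_rigid_2d(cur, pairs)
        c, s = math.cos(theta), math.sin(theta)
        cur = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in cur]
    return cur
```

With a small initial misalignment, the nearest-neighbour pairing is already correct and the loop converges in one or two iterations; real 3-D implementations additionally reject outliers and use spatial indices instead of the O(n²) search.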
对于步骤S140,根据采集区域的三维图像和预设的三维信息以获得采集区域的待加工余量。在一些实施例中,当对目标工件进行连续移动的三维扫描时,系统将由以上方法得到该连续采集区域对应的连续三维图像,连续三维图像由两个或以上的相邻帧的三维图像1400拼接获得。连续采集区域可理解为由多个相邻帧所对应的采集区域拼接而成,连续三维图像为对应连续采集区域的图像。此时,步骤S140将为:根据拼接后的连续三维图像和连续采集区域的预设三维信息以获得连续采集区域的待加工余量。
采集区域的三维图像包含了该区域中的三维特征数据,区域中各点的空间位置具体可体现为坐标值集合(x1,y1,z1);采集区域的预设的三维信息包含了工件于该区域的理论模型数据,区域中各点的三维信息具体可体现为坐标值集合(x0,y0,z0)。在一些实施例中,通过使采集区域的实际测量模型的集合(x1,y1,z1)与该区域相应的理论设计模型的集合(x0,y0,z0)进行比对,可得到两者之间的非重叠区域,空间中的非重叠区域具体可体现为坐标值集合(x’,y’,z’),该坐标值集合可由集合(x1,y1,z1)减去集合(x0,y0,z0)得到,该坐标值集合所体现的空间信息可用于得到待加工余量。
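The allowance computation, subtracting the design model (x0, y0, z0) from the measured model (x1, y1, z1), reduces, for surfaces sampled as height maps on a common (x, y) grid, to a per-point z difference. A minimal sketch under that assumption (the function name and tolerance parameter are ours):

```python
def machining_allowance(measured_z, design_z, tol=0.0):
    """Per-point material to remove: where the measured surface (z1) sits
    above the design surface (z0) by more than `tol`, on a common (x, y)
    grid. Points at or below the design surface get zero allowance."""
    return [[(mz - dz) if mz - dz > tol else 0.0
             for mz, dz in zip(rm, rd)]
            for rm, rd in zip(measured_z, design_z)]
```

The non-zero entries of the result correspond to the "non-overlapping region" (x', y', z') described above.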
为了能够在预设的理论三维模型中找到与采集区域对应的参考区域以实现比对,在一些实施例中,当采集区域的特征结构较多时,可在三维图像的三维形状与理论模型中任一区域的三维形状具有超过90%的相似性时,即可认为两者对应工件表面的同一区域,从而可以将两者进行比对以获得待加工余量。具体地,超过90%的相似性可理解为两个三维形状(如三维曲面形状)在经过各种旋转平移操作后有超过90%的区域能够完全重合。当然,也可以通过任意已知的其他方法将采集区域与理论模型中的区域进行比对,以获得理论模型中与该采集区域对应的区域,此处不加以赘述。
除了获取待加工余量外,还需进行步骤S150,以获取采集区域和加工刀头的坐标信息,需要注意的是,步骤S150并不一定要在步骤S140之后进行,步骤S150可在步骤S160之前的任意节点进行。且在一些实施例中,如在步骤S140获取待加工余量的过程中已经获取了采集区域的空间坐标信息的前提下,这些实施例中的步骤S150可以仅获取加工刀头的坐标信息。
在获得采集区域的坐标信息、该区域的待加工余量及加工刀头的坐标信息后,即可进行步骤S160,通过上述信息获得加工刀头的加工路径,加工路径覆盖存在待加工余量的空间坐标,使得加工刀头能够削除采集区域处的待加工余量,进而得到符合预期形貌要求的工件。
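A tool path that "covers the coordinates where allowance remains" can be as simple as a serpentine raster over the grid cells with positive allowance. This is an illustrative sketch of that idea, not the patent's path planner:

```python
def raster_path(allowance):
    """Serpentine (boustrophedon) visiting order over every grid cell that
    still has positive machining allowance; rows alternate direction so the
    tool head never jumps back across the part."""
    path = []
    for y, row in enumerate(allowance):
        cols = range(len(row)) if y % 2 == 0 else range(len(row) - 1, -1, -1)
        path.extend((y, x) for x in cols if row[x] > 0)
    return path
```

A real planner would additionally map grid cells to machine coordinates and account for tool radius and depth-of-cut limits.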
参考图11,本申请的一些实施例还提供了一种基于三维成像的加工装置30(以下简称加工装置30),加工装置30包括获取模块31、确定模块32和处理模块33。其中获取模块31、确定模块32和处理模块33的功能可结合图3至图9及上述各实施例的方案。
获取模块31,用于获取在预设时间内采集的第一图像和第二图像,第一图像和第二图像为向采集区域投射具有二维图案信息的投影光后采集到的图像;第一图像和第二图像均包括二维图案信息,第一图像中二维图案信息的像素值的极大值所在区域为第一区域,第二图像中二维图案信息的像素值的极大值所在区域为第二区域,第一区域与第二区域互不重合,获取采集区域的坐标信息及加工刀头的坐标信息;
确定模块32,用于根据第一图像各像素点的像素值与第二图像相应像素点的像素值的差值确定第一特征图像,根据第二图像各像素点的像素值与第一图像相应像素点的像素值的差值确定第二特征图像,以及根据第一特征图像和第二特征图像确定采集区域的三维图像;及
处理模块33,用于根据三维图像和采集区域的预设三维信息以获得采集区域的待加工余量,根据采集区域的坐标信息、加工刀头的坐标信息及待加工余量确定加工刀头的加工路径。
在一些实施例中,获取模块31获取图像的方式可以为:同一帧中的各对比图像的曝光时间控制在 100ms以内,具体可以为30ms、50ms、65ms、80ms或100ms;分开获取同一帧中的各对比图像,间隔时间小于等于50ms,具体可以为1ms、5ms、15ms、25ms、30ms、40ms或50ms;从获取同一帧的第一个对比图像曝光开始至最后一个对比图像曝光结束的时间在200ms以内。
在一些实施例中,确定模块32通过利用第一图像1210各像素点的像素值减去第二图像1220相应像素点的像素值,并根据像素值差大于或等于第一阈值的部分以形成第一特征图像1310;以及利用第二图像1220各像素点的像素值减去第一图像1210相应像素点的像素值,并根据像素值差大于或等于第二阈值的部分以形成第二特征图像1320。其中所采用的第一阈值和第二阈值的具体方案可参考以上实施例的描述,此处不加以赘述。
在一些实施例中,处理模块33通过处理对比图像以获得三维图像1400。具体地,在一些实施例中可以采用互相关匹配法、最小二乘法等算法分别对同一帧的第一特征图像1310和第二特征图像1320进行处理,以得到该帧所对应的采集区域的三维图像1400。
在一些实施例中,处理模块33还用于获取相邻两帧的三维图像1400,将相邻两帧的三维图像1400进行特征匹配处理,得到匹配结果,并根据匹配结果将相邻两帧的三维图像1400进行拼接,得到拼接后的连续三维图像,以此加工装置30能够对工件实现连续扫描的效果。具体地,对相邻帧的三维图像1400进行特征匹配处理可参考以上实施例涉及三维点云的匹配方法,此处不加以赘述。
具体地,在一些实施例中,加工装置30还包括投影模块,投影模块能够在预设时间内向工件投射至少两次具有二维图案的投影光(如第一投影光1110和第二投影光1120),在预设时间内,每次投影光在工件上形成的二维图案的最大光强分布区域互不重合。投影模块向工件投射投影光的方式为间隔投射每次投影光。在另一些实施例中,投影模块向工件投射的投影光的方式、二维图案信息1100等可参考以上实施例中的方式,此处不加以赘述。对工件投射的投影光的二维图案可以包括条形光斑、圆形光斑、不规则光斑中的至少一种,投影光的投影图案即包括相应的条纹光斑信息、圆形光斑信息、不规则光斑信息中的至少一种。
另外,在一些实施例中,投影模块分开投射每次投影光,例如第一投影光1110投射结束后再进行第二投影光1120的投射。
上述加工装置30中,投影模块在预设时间内所投射的投影光在工件上形成的亮度的极大值所在区域互不重合,因此在预设时间内反射回的投影光所形成的第一图像和第二图像一共携带采集区域中的大量三维特征信息。处理模块33通过对对比图像进行差值处理后能够消除对比图像中的高反射信号,避免高反射信号干扰三维特征的重建计算,另外也能够使最终的三维图像保留原本位于高反射信号区域中的三维特征信息,从而能够重建出采集区域的清晰且完整的三维图像,进而使加工刀头能够更准确且全面地消除相应采集区域的待加工余量,实现对工件的准确加工,以得到符合预期要求的工件。
参考图12,在本申请的一些实施例中,还提供了一种基于三维成像的加工设备(以下简称加工设备),加工设备包括:
投射器,向工件投射具有二维图案信息的投影光;
存储器,存储有计算机程序;
接收器,获取在预设时间内采集的第一图像和第二图像,第一图像和第二图像为向采集区域投射具有二维图案信息的投影光后采集到的图像;第一图像和第二图像均包括二维图案信息,第一图像中二维图案信息的像素值的极大值所在区域为第一区域,第二图像中二维图案信息的像素值的极大值所在区域为第二区域,第一区域与第二区域互不重合;
处理器,被配置为执行存储器上的计算机程序,以实现:根据第一图像各像素点的像素值与第二图像相应像素点的像素值的差值确定第一特征图像,以及根据第二图像各像素点的像素值与第一图像相应像素点的像素值的差值确定第二特征图像;及根据第一特征图像和第二特征图像确定采集区域的三维图像;根据三维图像和采集区域的预设三维信息以获得采集区域的待加工余量;获取采集区域的坐标信息及加工刀头的坐标信息,根据采集区域的坐标信息、加工刀头的坐标信息及待加工余量确定加工刀头的加工路径;及
加工器,包括加工刀头,加工器能够驱使加工刀头沿加工路径移动,以消除采集区域的待加工余量。
具体可参考图13,在一些实施例中,加工设备40包括投射器410、接收器420、处理器430及加工器440,投射器410可以为能够投射具有预期二维图案信息的结构光的任意种类光源,接收器420为摄像头,处理器430能够用于控制投射器410的投影周期、时长等,且能够分析处理接收器420所获得的针对采集区域10的第一图像和第二图像,以此获取该采集区域10对应的三维图像,该三维图像即包含了该区域的深度信息170(实线示意),随后与该采集区域10的预设三维信息180(虚线示意)进行比对,以获得采集区域的待加工余量。最后,处理器430根据采集区域的坐标信息、加工刀头的坐标信息及待加工余量确定加工刀头的加工路径,并控制加工器440中的加工刀头442沿加工路径移动以将采集区域上的多余部分(图中实线170与虚线180之间的部分)削除,从而获得预期形貌(如图中虚线180所示形貌)的工件。
在上述加工设备中,不仅可以获得工件表面的大量特征信息,且可以消除工件表面的高反射信号,避免高反射信号干扰三维特征的重建计算,另外也能使最终的三维图像能够保留原本位于高反射信号区域中的三维特征信息,从而通过上述基于三维成像的加工方法能够重建出工件的清晰且完整的三维模型。
在一些实施例中,投射器对投影光的投射方式以及处理器对图像处理的方式均可参考以上实施例中的相应方案,均应视为在本申请的记载范围之内。投射器可以为搭载光学整形元件的激光投影器件。接收器可以为图像传感器,图像传感器可以为CCD(Charge Coupled Device,电荷耦合器件)或CMOS(Complementary Metal Oxide Semiconductor,互补金属氧化物半导体)。投射器、接收器、存储器及处理器通过***总线连接。
在一些实施例中,对投射器、接收器、处理器的限定可以参见以上对于基于三维成像的加工方法的限定。
例如在一些实施例中,投射器能够在预设时间内向物体投射至少两次具有二维图案信息的投影光,在预设时间内,每次投影光在物体上形成的亮度的极大值所在区域互不重合。在一些实施例中,投射器分开投射每次投影光。在一些实施例中,投射器所投射的二维图案信息包括条纹光斑信息、圆形光斑信息、不规则光斑信息中的至少一种。
对于接收器而言,为保证同一帧的第一图像和第二图像在极短时间内获得,因此在一些实施例中,处于同一帧的第一图像和第二图像均在小于或等于200ms的预设时间内获取。对应地,投射器所投射的对应第一图像和第二图像的投影光也在200ms内投射。
对于处理器而言,具体地,处理器用于:利用第一图像各像素点的像素值减去第二图像相应像素点的像素值,并根据像素值差大于或等于第一阈值的部分以形成第一特征图像;以及利用第二图像各像素点的像素值减去第一图像相应像素点的像素值,并根据像素值差大于或等于第二阈值的部分以形成第二特征图像。
且当加工设备需要对物体进行连续三维扫描时,一些实施例的处理器还可获取物体的相邻两帧的三维图像,将相邻两帧的三维图像进行特征匹配处理,得到匹配结果,并根据匹配结果将相邻两帧的三维图像进行拼接,得到拼接后的连续三维图像,并根据所述连续三维图像和采集区域的预设三维信息以获得所述连续三维图像所对应的采集区域的待加工余量。
另外,存储器中存储有计算机程序,而处理器执行计算机程序时可以实现上述各方法实施例中的步骤。
本领域技术人员可以理解,图12中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的加工设备的限定,具体的加工设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,提供了一种计算机可读存储介质,存储有计算机程序,该计算机程序被处理器执行时实现上述各方法实施例中的步骤。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存或光存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或外部高速缓冲存储器。作为说明而非局限,RAM可以是多种形式,比如静态随机存取存储器(Static Random Access Memory,SRAM)或动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。
上述基于三维成像的加工方法、加工装置、加工设备及存储介质在应用于三维成像时,利用具有被摄工件表面二维信息的第一图像和具有工件表面三维信息的三维信息图像,以确定被摄工件表面的三维模型,从而可有效提升每一帧三维成像的精确性,进而有利于对工件的准确加工。且进一步地,上述基于三维成像的加工方法、加工装置、加工设备及存储介质也能应用于连续三维扫描,通过结合上述对第一图像和三维信息图像的拼接处理,对被摄工件的连续扫描能够得到稳定且精确的连续三维模型,进而有利于对工件表面的连续区域实现准确加工。
在本发明的描述中,需要理解的是,术语“中心”、“纵向”、“横向”、“长度”、“宽度”、“厚度”、“上”、“下”、“前”、“后”、“左”、“右”、“竖直”、“水平”、“顶”、“底”“内”、“外”、“顺时针”、“逆时针”、“轴向”、“径向”、“周向”等指示的方位或位置关系为基于附图所示的方位或位置关系,仅是为了便于描述本发明和简化描述,而不是指示或暗示所指的装置或元件必须具有特定的方位、以特定的方位构造和操作,因此不能理解为对本发明的限制。
此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。在本发明的描述中,“多个”的含义是至少两个,例如两个,三个等,除非另有明确具体的限定。
在本发明中,除非另有明确的规定和限定,术语“安装”、“相连”、“连接”、“固定”等术语应做广义理解,例如,可以是固定连接,也可以是可拆卸连接,或成一体;可以是机械连接,也可以是电连接;可以是直接相连,也可以通过中间媒介间接相连,可以是两个元件内部的连通或两个元件的相互作用关系,除非另有明确的限定。对于本领域的普通技术人员而言,可以根据具体情况理解上述术语在本发明中的具体含义。
在本发明中,除非另有明确的规定和限定,第一特征在第二特征“上”或“下”可以是第一和第二特征直接接触,或第一和第二特征通过中间媒介间接接触。而且,第一特征在第二特征“之上”、“上方”和“上面”可是第一特征在第二特征正上方或斜上方,或仅仅表示第一特征水平高度高于第二特征。第一特征在第二特征“之下”、“下方”和“下面”可以是第一特征在第二特征正下方或斜下方,或仅仅表示第一特征水平高度小于第二特征。
在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本发明的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
以上所述实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本发明的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本发明构思的前提下,还可以做出若干变形和改进,这些都属于本发明的保护范围。因此,本发明专利的保护范围应以所附权利要求为准。

Claims (25)

  1. 一种基于三维成像的加工方法,包括如下步骤:
    获取在预设时间内采集的第一图像和第二图像,所述第一图像和所述第二图像为向采集区域投射具有二维图案信息的投影光后采集到的图像;所述第一图像和所述第二图像均包括所述二维图案信息,所述第一图像中所述二维图案信息的像素值的极大值所在区域为第一区域,所述第二图像中所述二维图案信息的像素值的极大值所在区域为第二区域,所述第一区域与所述第二区域互不重合;
    根据所述第一图像各像素点的像素值与所述第二图像相应像素点的像素值的差值确定第一特征图像,以及根据所述第二图像各像素点的像素值与所述第一图像相应像素点的像素值的差值确定第二特征图像;
    根据所述第一特征图像和所述第二特征图像确定所述采集区域的三维图像;
    根据所述三维图像和所述采集区域的预设三维信息以获得所述采集区域的待加工余量;
    获取所述采集区域的坐标信息及加工刀头的坐标信息;及
    根据所述采集区域的坐标信息、所述加工刀头的坐标信息及所述待加工余量确定所述加工刀头的加工路径。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述第一图像各像素点的像素值与所述第二图像相应像素点的像素值的差值确定第一特征图像,包括:
    利用所述第一图像各像素点的像素值减去所述第二图像相应像素点的像素值,并利用像素值差大于或等于第一阈值的部分以形成第一特征图像。
  3. 根据权利要求2所述的方法,其特征在于,在确定第一特征图像的步骤中,所述第一阈值大于或等于0,且小于所述第一图像中所述二维图案信息的各像素点的像素值中的最大值。
  4. 根据权利要求1所述的方法,其特征在于,所述根据所述第二图像各像素点的像素值与所述第一图像相应像素点的像素值的差值确定第二特征图像,包括:
    利用所述第二图像各像素点的像素值减去所述第一图像相应像素点的像素值,并利用像素值差大于或等于第二阈值的部分以形成第二特征图像。
  5. 根据权利要求4所述的方法,其特征在于,在确定第二特征图像的步骤中,所述第二阈值大于或等于0,且小于所述第二图像中所述二维图案信息的各像素点的像素值中的最大值。
  6. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    获取相邻两帧的所述三维图像,每一帧的三维图像分别由一段所述预设时间内获取的所述第一图像和所述第二图像得到;
    将相邻两帧的所述三维图像进行特征匹配处理,得到匹配结果;
    根据所述匹配结果将所述相邻两帧的所述三维图像进行拼接,得到拼接后的连续三维图像;及
    根据所述连续三维图像和采集区域的预设三维信息以获得所述连续三维图像所对应的采集区域的待加工余量。
  7. 根据权利要求6所述的方法,其特征在于,将所述相邻两帧的所述三维图像进行特征匹配处理,得到匹配结果,包括:
    采用迭代最近点方法将所述相邻两帧的所述三维图像进行点云匹配处理,得到匹配结果。
  8. 根据权利要求1至7任意一项所述的方法,其特征在于,所述预设时间满足0<t≤200ms。
  9. 一种基于三维成像的加工装置,包括:
    获取模块,用于获取在预设时间内采集的第一图像和第二图像,所述第一图像和所述第二图像为向采集区域投射具有二维图案信息的投影光后采集到的图像;所述第一图像和所述第二图像均包括所述二维图案信息,所述第一图像中所述二维图案信息的像素值的极大值所在区域为第一区域,所述第二图像中所述二维图案信息的像素值的极大值所在区域为第二区域,所述第一区域与所述第二区域互不重合,获取所述采集区域的坐标信息及加工刀头的坐标信息;
    确定模块,用于根据所述第一图像各像素点的像素值与所述第二图像相应像素点的像素值的差值确定第一特征图像,根据所述第二图像各像素点的像素值与所述第一图像相应像素点的像素值的差值 确定第二特征图像,以及根据所述第一特征图像和所述第二特征图像确定所述采集区域的三维图像;及
    处理模块,用于根据所述三维图像和所述采集区域的预设三维信息以获得所述采集区域的待加工余量,根据所述采集区域的坐标信息、所述加工刀头的坐标信息及所述待加工余量确定所述加工刀头的加工路径。
  10. 根据权利要求9所述的加工装置,其特征在于,所述加工装置还包括投影模块,所述投影模块能够在预设时间内向物体投射至少两次具有二维图案信息的投影光,在预设时间内,每次所述投影光在物体上形成的亮度的极大值所在区域互不重合。
  11. 根据权利要求10所述的加工装置,其特征在于,所述投影模块所投射的二维图案信息包括条纹光斑信息、圆形光斑信息、不规则光斑信息中的至少一种。
  12. 根据权利要求10所述的加工装置,其特征在于,所述投影模块分开投射每次投影光。
  13. 根据权利要求9所述的加工装置,其特征在于,所述确定模块用于:
    利用所述第一图像各像素点的像素值减去所述第二图像相应像素点的像素值,并根据像素值差大于或等于第一阈值的部分以形成第一特征图像;以及
    利用所述第二图像各像素点的像素值减去所述第一图像相应像素点的像素值,并根据像素值差大于或等于第二阈值的部分以形成第二特征图像。
  14. 根据权利要求13所述的加工装置,其特征在于,所述第一阈值大于或等于0,且小于所述第一图像中所述二维图案信息的各像素点的像素值中的最大值。
  15. 根据权利要求13所述的加工装置,其特征在于,所述第二阈值大于或等于0,且小于所述第二图像中所述二维图案信息的各像素点的像素值中的最大值。
  16. 根据权利要求9所述的加工装置,其特征在于,所述处理模块还用于获取相邻两帧的所述三维图像,将相邻两帧的所述三维图像进行特征匹配处理,得到匹配结果,并根据所述匹配结果将所述相邻两帧的所述三维图像进行拼接,得到拼接后的连续三维图像,并根据所述连续三维图像和所述采集区域的预设三维信息以获得所述连续三维图像所对应的采集区域的待加工余量。
  17. 根据权利要求9-16任意一项所述的加工装置,其特征在于,所述预设时间满足0<t≤200ms。
  18. 一种基于三维成像的加工设备,包括:
    投射器,向工件投射具有二维图案信息的投影光;
    存储器,存储有计算机程序;
    接收器,获取在预设时间内采集的第一图像和第二图像,所述第一图像和第二图像为向采集区域投射具有二维图案信息的投影光后采集到的图像;所述第一图像和所述第二图像均包括所述二维图案信息,所述第一图像中所述二维图案信息的像素值的极大值所在区域为第一区域,所述第二图像中所述二维图案信息的像素值的极大值所在区域为第二区域,所述第一区域与所述第二区域互不重合;
    处理器,被配置为执行所述存储器上的计算机程序,以实现:根据所述第一图像各像素点的像素值与所述第二图像相应像素点的像素值的差值确定第一特征图像,以及根据所述第二图像各像素点的像素值与所述第一图像相应像素点的像素值的差值确定第二特征图像;及根据所述第一特征图像和所述第二特征图像确定所述采集区域的三维图像;根据所述三维图像和所述采集区域的预设三维信息以获得所述采集区域的待加工余量;获取所述采集区域的坐标信息及加工刀头的坐标信息,根据所述采集区域的坐标信息、所述加工刀头的坐标信息及所述待加工余量确定所述加工刀头的加工路径;及
    加工器,包括加工刀头,所述加工器能够驱使所述加工刀头沿所述加工路径移动,以消除所述采集区域的待加工余量。
  19. 根据权利要求18所述的加工设备,其特征在于,所述投射器能够在预设时间内向物体投射至少两次具有二维图案信息的投影光,在预设时间内,每次所述投影光在物体上形成的亮度的极大值所在区域互不重合。
  20. 根据权利要求18所述的加工设备,其特征在于,所述投射器所投射的二维图案信息包括条纹光斑信息、圆形光斑信息、不规则光斑信息中的至少一种。
  21. 根据权利要求18所述的加工设备,其特征在于,所述投射器分开投射每次投影光。
  22. 根据权利要求18所述的加工设备,其特征在于,所述处理器用于:
    利用所述第一图像各像素点的像素值减去所述第二图像相应像素点的像素值,并根据像素值差大于或等于第一阈值的部分以形成第一特征图像;以及
    利用所述第二图像各像素点的像素值减去所述第一图像相应像素点的像素值,并根据像素值差大于或等于第二阈值的部分以形成第二特征图像。
  23. 根据权利要求18所述的加工设备,其特征在于,所述处理器还用于获取物体的相邻两帧的所述三维图像,将所述相邻两帧的所述三维图像进行特征匹配处理,得到匹配结果,并根据所述匹配结果将所述相邻两帧的所述三维图像进行拼接,得到拼接后的连续三维图像,并根据所述连续三维图像和采集区域的预设三维信息以获得所述连续三维图像所对应的采集区域的待加工余量。
  24. 根据权利要求18-23任意一项所述的加工设备,其特征在于,所述预设时间满足0<t≤200ms。
  25. 一种存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现权利要求1至8中任一项所述的方法的步骤。
PCT/CN2020/097573 2020-06-23 2020-06-23 基于三维成像的加工方法、装置、设备及存储介质 WO2021258273A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/097573 WO2021258273A1 (zh) 2020-06-23 2020-06-23 基于三维成像的加工方法、装置、设备及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/097573 WO2021258273A1 (zh) 2020-06-23 2020-06-23 基于三维成像的加工方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2021258273A1 true WO2021258273A1 (zh) 2021-12-30

Family

ID=79282633

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/097573 WO2021258273A1 (zh) 2020-06-23 2020-06-23 基于三维成像的加工方法、装置、设备及存储介质

Country Status (1)

Country Link
WO (1) WO2021258273A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116704015A (zh) * 2023-08-07 2023-09-05 中国科学院合肥物质科学研究院 一种噪声光斑图像自适应窗口预处理质心算法及***

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008032608A (ja) * 2006-07-31 2008-02-14 Aisin Seiki Co Ltd 三次元形状測定装置及び三次元形状測定方法
JP2011095131A (ja) * 2009-10-30 2011-05-12 Dainippon Screen Mfg Co Ltd 画像処理方法
CN104792277A (zh) * 2014-01-17 2015-07-22 佳能株式会社 三维形状测量装置和三维形状测量方法
CN109822575A (zh) * 2019-03-25 2019-05-31 华中科技大学 一种利用投影特征图像进行移动加工的机器人***及方法
CN110293404A (zh) * 2019-07-25 2019-10-01 安徽行者智能科技股份有限公司 一种针对带有随机尺寸误差的工件的智能加工***
CN110766767A (zh) * 2019-10-17 2020-02-07 中国科学院自动化研究所 获取格雷码结构光图像的方法、***、装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008032608A (ja) * 2006-07-31 2008-02-14 Aisin Seiki Co Ltd 三次元形状測定装置及び三次元形状測定方法
JP2011095131A (ja) * 2009-10-30 2011-05-12 Dainippon Screen Mfg Co Ltd 画像処理方法
CN104792277A (zh) * 2014-01-17 2015-07-22 佳能株式会社 三维形状测量装置和三维形状测量方法
CN109822575A (zh) * 2019-03-25 2019-05-31 华中科技大学 一种利用投影特征图像进行移动加工的机器人***及方法
CN110293404A (zh) * 2019-07-25 2019-10-01 安徽行者智能科技股份有限公司 一种针对带有随机尺寸误差的工件的智能加工***
CN110766767A (zh) * 2019-10-17 2020-02-07 中国科学院自动化研究所 获取格雷码结构光图像的方法、***、装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116704015A (zh) * 2023-08-07 2023-09-05 中国科学院合肥物质科学研究院 一种噪声光斑图像自适应窗口预处理质心算法及***
CN116704015B (zh) * 2023-08-07 2023-11-14 中国科学院合肥物质科学研究院 一种实现噪声光斑图像自适应窗口预处理质心方法及***

Similar Documents

Publication Publication Date Title
JP7282317B2 (ja) 3次元計測システム及び3次元計測方法
Kriegel et al. Efficient next-best-scan planning for autonomous 3D surface reconstruction of unknown objects
US10288418B2 (en) Information processing apparatus, information processing method, and storage medium
JP6426968B2 (ja) 情報処理装置およびその方法
US9182221B2 (en) Information processing apparatus and information processing method
US9275461B2 (en) Information processing apparatus, information processing method and storage medium
JP5043023B2 (ja) 画像処理方法および装置
Zhou et al. Extrinsic calibration of a camera and a lidar based on decoupling the rotation from the translation
US20130127998A1 (en) Measurement apparatus, information processing apparatus, information processing method, and storage medium
JP5484133B2 (ja) 鏡面反射物体の3d姿勢を推定する方法
JP5133626B2 (ja) 表面反射特性測定装置
WO2012096747A1 (en) Forming range maps using periodic illumination patterns
WO2013061976A1 (ja) 形状検査方法およびその装置
WO2019177539A1 (en) Method for visual inspection and apparatus thereof
JP2013186100A (ja) 形状検査方法およびその装置
WO2021258273A1 (zh) 基于三维成像的加工方法、装置、设备及存储介质
JP7353757B2 (ja) アーチファクトを測定するための方法
JP2007508557A (ja) 三次元物体を走査するための装置
US9245375B2 (en) Active lighting for stereo reconstruction of edges
Rosman et al. Information-driven adaptive structured-light scanners
Sansoni et al. In-field performance of an optical digitizer for the reverse engineering of free-form surfaces
WO2021258276A1 (zh) 三维成像方法、三维成像装置、电子设备及存储介质
TWI659390B (zh) 應用於物件檢測之攝影機與雷射測距儀的數據融合方法
Rianmora et al. Structured light system-based selective data acquisition
Isgro et al. An open system for 3D data acquisition from multiple sensor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20941899

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC - FORM 1205A (22.05.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20941899

Country of ref document: EP

Kind code of ref document: A1