CN108616698B - Image forming apparatus - Google Patents

Image forming apparatus

Info

Publication number
CN108616698B
CN108616698B (application CN201810935087.XA)
Authority
CN
China
Prior art keywords
light source
image
unit
camera unit
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810935087.XA
Other languages
Chinese (zh)
Other versions
CN108616698A (en)
Inventor
朱炳强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201810935087.XA priority Critical patent/CN108616698B/en
Publication of CN108616698A publication Critical patent/CN108616698A/en
Application granted granted Critical
Publication of CN108616698B publication Critical patent/CN108616698B/en
Legal status: Active (current)


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0075Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

An imaging apparatus for photographing an object having a plurality of regions. The imaging apparatus comprises a camera unit, a structured light source, a motion unit, and a processing unit. The structured light source projects generated structured light onto the photographed object; the motion unit changes the focusing height of the camera unit within a region of the photographed object; the camera unit photographs, at a plurality of focusing heights, a plurality of first candidate images of the region of the photographed object under the structured light projection; the processing unit, connected to the motion unit and the camera unit, receives the first candidate images sent by the camera unit, selects from them a region image corresponding to each region, and synthesizes an image of the photographed object from the plurality of region images. By adopting structured light illumination, the application suits not only objects with obvious texture but also objects with weak or no texture, greatly expanding the application scenarios of imaging.

Description

Image forming apparatus
Technical Field
The present disclosure relates to the field of imaging technologies, and in particular, to an imaging apparatus.
Background
When a camera takes a picture, the lens is subject to the physical limit of depth of field: only objects within a certain distance range from the lens can be imaged sharply, and the farther an object lies beyond the depth of field, the more blurred its image becomes. In general, the higher the resolution of a lens, the smaller its depth of field. In microscopy, the depth of field of a 10x microscope objective is only about 20 microns, so the distance between the lens and the photographed object must be adjusted carefully, which is inconvenient. Moreover, if the height variation of the object surface exceeds the depth of field, some local areas in a single image will be out of focus and blurred, and no globally sharp image can be obtained.
Fig. 1 is a schematic diagram of a photographed object and the depth of field in the prior art. Fig. 2 and 3 are schematic views of a photographed object under a microscope. As shown in figs. 1, 2 and 3, surface region A and region B of the measured object 100 lie at different heights. When the camera 10' focuses on region A, the focusing height matches the best focusing plane 20', region A in the resulting image is sharp, and region B is out of focus; when the camera 10' focuses on region B, it has moved off the best focusing plane 20' of A, and region A in the resulting image is out of focus. Region A and region B cannot be in focus at the same time.
To address these problems, several technical solutions have been proposed in the industry. For example, US 2012/0050562 A1 proposes obtaining a depth-of-field extended image using a microlens array and the light-field imaging principle. In that scheme, an array of tiny lenses, called a microlens array, is placed between the image sensor and the camera lens. Each microlens may have a different aperture, focal length, and so on; each microlens forms a sub-optical path with the camera lens, and each sub-optical path has its own equivalent focal length, viewing angle, and other characteristics. Through the microlens array, the whole optical system is equivalent to the combined result of imaging through the multiple sub-optical paths. With a suitable image processing algorithm, the local information imaged by each sub-optical path on the image sensor is synthesized, and the three-dimensional information of the object surface and a depth-of-field extended composite image can be reconstructed. The advantage of this scheme is that only one image needs to be captured; the depth-of-field extended image is obtained by combining the imaging information of all sub-optical paths.
However, this solution has the following drawbacks:
First, the solution essentially synthesizes the sub-optical-path information formed by the microlens array to jointly estimate the distance between the photographed object and the camera, which places high demands on the resolution of the image sensor.
Second, the reconstruction algorithm essentially follows the principles of multi-view imaging. For objects without obvious texture, such as a completely uniform surface of a single material, corresponding points cannot be distinguished across views, and the method fails.
Third, the imaging method requires high installation precision for the microlens array, and the camera intrinsics must be calibrated through a relatively complex process, so the manufacturing cost is high.
Fourth, the scheme places high demands on the resolution of the image sensor and at the same time imposes physical limits on the height variation range of the object surface that can be imaged sharply.
Fifth, since high-resolution lenses have small depths of field, the scheme is not suitable for high-resolution imaging scenarios.
Sixth, the depth-of-field extension range is determined by hardware parameters such as the focal lengths of the microlens array units and is only about 6 times the depth of field of the camera lens; for high-magnification microscopy the extension is therefore very limited, restricting the fields of use, and the method is unsuitable for high-definition imaging applications.
Disclosure of Invention
In view of the foregoing, embodiments of the present application provide an imaging apparatus to solve the above problems of the prior art.
To solve the above problems, an embodiment of the present application discloses an imaging apparatus for photographing an object having a plurality of regions. The imaging apparatus comprises a camera unit, a structured light source, a motion unit, and a processing unit, wherein:
the structured light source is used for projecting the generated structured light onto the photographed object;
the motion unit is used for changing the focusing height of the camera unit within a region of the photographed object;
the camera unit is used for photographing, at a plurality of focusing heights, a plurality of first candidate images of the region of the photographed object under the projection of the structured light;
the processing unit is connected to the motion unit and the camera unit, and is configured to receive the plurality of first candidate images sent by the camera unit, select a region image corresponding to the region from the plurality of first candidate images, and synthesize an image of the photographed object using the plurality of region images.
In an embodiment of the imaging apparatus of the present application, the processing unit is further configured to select a region image corresponding to the region from the plurality of first candidate images according to sharpness, calculate height information of the region from the focusing height corresponding to the region image, and synthesize an image of the photographed object, for example a depth-of-field extended image, using the height information of the plurality of regions.
In an embodiment of the imaging apparatus of the present application, the processing unit is further configured to select a region image corresponding to the region from the plurality of first candidate images according to sharpness, calculate height information of the region from the focusing height corresponding to the region image, and synthesize an image of the photographed object, for example a three-dimensional image, using the height information of the plurality of regions, the pixel information of the region images, and the position information of the region images.
In an embodiment of the imaging apparatus of the present application, the motion unit is connected to at least one of the camera unit and the photographed object, and the imaging apparatus further comprises a driving unit configured to drive the motion unit so as to move the camera unit or the photographed object.
In an embodiment of the imaging apparatus of the present application, the motion unit is connected to an optical path device in the structured light source and is used for changing the focusing height of the camera unit within the region of the photographed object by adjusting the optical path device.
In an embodiment of the imaging apparatus of the present application, the camera unit comprises a lens and an image sensor, and the motion unit is connected to one of the lens and the image sensor for adjusting the relative position between the image sensor and the lens of the camera unit.
In an embodiment of the imaging apparatus of the present application, the structured light source comprises a light emitting device, a structural pattern module, and a projection light path assembly, wherein the structural pattern module comprises a carrier bearing a fixed or variable structural pattern.
An embodiment of the present application further provides an imaging apparatus for photographing an object having a plurality of regions. The imaging apparatus comprises a camera unit, a synchronous triggering unit, a motion unit, a processing unit, a first light source, and a second light source, wherein:
the first light source is used for projecting the generated first light onto the photographed object, the first light source comprising a structured light source;
the second light source is used for projecting the generated second light onto the photographed object;
the motion unit is used for changing the focusing height of the camera unit within a region of the photographed object;
the camera unit is used for photographing, at a plurality of focusing heights, the region of the photographed object under the projection of the first light source to obtain a plurality of corresponding first candidate images, for photographing, at a plurality of focusing heights, the region of the photographed object under the projection of the second light source to obtain a plurality of corresponding second candidate images, and for sending the first candidate images and the second candidate images to the processing unit;
the synchronous triggering unit is connected to the first light source, the second light source and the camera unit, and is used for keeping the first light source on while the camera unit photographs the first candidate images and keeping the second light source on while the camera unit photographs the second candidate images;
the processing unit is connected to the motion unit and the camera unit, and is configured to receive the plurality of first candidate images sent by the camera unit, select a reference image corresponding to the region from the plurality of first candidate images, acquire a corresponding region image from the plurality of second candidate images according to the selected reference image, and synthesize an image of the photographed object using the region images.
In an embodiment of the imaging apparatus of the present application, the processing unit is further configured to select a reference image corresponding to the region from the plurality of first candidate images according to sharpness, calculate height information of the region from the focusing height corresponding to the region image, and synthesize an image of the photographed object using the height information and the region images.
In an embodiment of the imaging apparatus of the present application, the processing unit is further configured to select a reference image corresponding to the region from the plurality of first candidate images according to sharpness, calculate height information of the region from the focusing height corresponding to the region image, and synthesize an image of the photographed object using the height information, the region images, and the position information of the region images.
In an embodiment of the imaging apparatus of the present application, the motion unit is connected to at least one of the camera unit and the photographed object, and the imaging apparatus further comprises a driving unit configured to drive the motion unit so as to move the camera unit or the photographed object.
In an embodiment of the imaging apparatus of the present application, the motion unit is connected to an optical path device in the first light source for changing the focusing height of the camera unit within the region of the photographed object by adjusting the optical path device.
In an embodiment of the imaging apparatus of the present application, the camera unit comprises a lens and an image sensor, and the motion unit is connected to one of the lens and the image sensor for adjusting the relative position between the image sensor and the lens of the camera unit.
In an embodiment of the imaging apparatus of the present application, the structured light source comprises a light emitting device, a structural pattern module, and a projection light path assembly, wherein the structural pattern module comprises a carrier bearing a fixed or variable structural pattern.
In an embodiment of the imaging apparatus of the present application, the processing unit is further configured to select a reference image corresponding to the region from the plurality of first candidate images according to sharpness, calculate the photographing height of the reference image, and use the photographing height to acquire the region image of the corresponding height from the plurality of second candidate images.
In an embodiment of the imaging apparatus of the present application, the first light source further comprises a natural light source, and the second light source comprises a natural light source.
In an embodiment of the imaging apparatus of the present application, the camera unit comprises an image sensor, and the synchronous triggering unit is specifically configured to:
turn on the first light source at a first timing and turn on the second light source at a second timing; or
enable, at the first timing, an optical filter on the image sensor that blocks light from the second light source, and enable, at the second timing, an optical filter on the image sensor that blocks light from the first light source; or
enable, at the first timing, a polarizer on the image sensor that passes light from the first light source, and enable, at the second timing, a polarizer on the image sensor that passes light from the second light source; or
activate different imaging areas of the image sensor at the first timing and the second timing, respectively; or
activate different image sensors of the camera unit at the first timing and the second timing, respectively.
As can be seen from the above, the imaging apparatus provided in the embodiments of the present application has the following advantages:
As a depth-of-field extended three-dimensional imaging scheme that fuses image information captured at a plurality of focusing positions, it places no limit on the height variation range of the imaged object's surface and is particularly suitable for high-resolution imaging scenarios. By adopting structured light illumination, the application suits not only objects with obvious texture but also objects with weak or no texture, greatly expanding the application scenarios of imaging.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is evident that the drawings described below show only some embodiments of the present application, and that a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a photographed object and a depth of field in the prior art.
Fig. 2 and 3 are schematic views of a photographed object under a microscope.
Fig. 4 shows a schematic diagram of the present application.
Fig. 5 is a schematic diagram of a composite image obtained by shooting using the scheme proposed in the embodiment of the present application.
Fig. 6 is a schematic diagram showing the structure of a depth-of-field extended imaging apparatus according to the first embodiment of the present application.
Fig. 7 is a schematic diagram of a 3D image synthesized by the present application.
Fig. 8 is a schematic diagram showing the structure of a depth-of-field extended imaging apparatus according to a second embodiment of the present application.
Fig. 9 is a schematic diagram showing a compact structure of a depth-of-field extended imaging apparatus according to a second embodiment of the present application.
Fig. 10 is a schematic diagram showing the structure of a depth-of-field extended imaging apparatus according to a third embodiment of the present application.
Detailed Description
The embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
One of the core ideas of the present application is an imaging apparatus: an optical imaging device that collects a plurality of images captured at different focusing positions, digitally synthesizes the partially out-of-focus images into a single image that is sharp everywhere, and calculates three-dimensional information of the subject from the focusing position of each image. By using structured light projection, the approach offers higher imaging precision, a more stable imaging effect, a simpler reconstruction algorithm, and a wider field of application than traditional methods.
The first embodiment of the present application proposes a depth-of-field extended imaging apparatus. Fig. 4 shows a schematic diagram of the principle of the present application. As shown in fig. 4, the camera is fixed on a height-adjustable motion platform under a constant structured-light illumination environment; the camera is moved to capture a series of images (images 1-6) at different heights (positions 1-6), and a position sensor records the height at which each image is captured. In each image, only the part that lies within the depth of field of the lens is sharp; the remaining areas are out of focus. By changing the distance between the camera and the photographed object, capturing several pictures, extracting the sharply focused area of each picture, and applying digital processing, a globally sharp image can be synthesized. Furthermore, by analyzing the photographing position corresponding to each focused area, the three-dimensional height information of each area can be obtained.
Specifically, for each image, a sharpness value can be computed for every pixel using digital image techniques. For example, one commonly used sharpness measure is the difference between the brightness of the current pixel and the average brightness of its surrounding neighborhood. When the image is in sharper focus, its details are richer and the sharpness is higher; conversely, the more blurred the image, the closer the current pixel is to the average of its neighborhood and the lower the sharpness. There are many ways to compute sharpness, for example based on edge analysis or on image texture features. Most of these definitions rely on one principle: the greater the brightness fluctuation of image pixels within a local range, the higher the sharpness; conversely, the more uniform the brightness distribution in the local area, the lower the sharpness.
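As an illustration of the neighborhood-contrast measure just described, a minimal sketch follows. The function name, the use of NumPy/SciPy, and the 3x3 window size are assumptions of the sketch, not details specified by the present application:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sharpness_map(image: np.ndarray, window: int = 3) -> np.ndarray:
    """Per-pixel sharpness: |pixel - local mean brightness|.

    Large where brightness fluctuates within the neighborhood (in focus),
    small where the local area is uniform (blurred). `window` is an
    illustrative neighborhood size, not a value fixed by the application.
    """
    img = image.astype(np.float64)
    local_mean = uniform_filter(img, size=window)  # average of surrounding pixels
    return np.abs(img - local_mean)
```

With a stack of such sharpness maps, the slice in which a pixel attains its maximum sharpness marks the focusing height at which that region is in focus; edge- or texture-based measures mentioned above could be substituted in the same role without changing the rest of the pipeline.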
It should be noted that the above way of identifying the sharp image according to sharpness is merely an example; those skilled in the art may identify the sharp image in various other ways, which are not described here.
In the solution proposed in the first embodiment of the present application, the illumination light may be structured light. The structured light is projected onto the surface of the photographed object, imposing texture features on it. For details on structured light, reference may be made to US patent US8208719.
In the solution provided by the present application, by introducing structured-light imaging information, structured light can be projected stably onto weak-texture areas of the surface, creating texture information artificially. This makes it easier to determine at which focusing height a region is imaged most sharply, i.e., to obtain the height information of the region conveniently and accurately; the height information is then combined with the plane information to obtain the image of the photographed object. The focusing height mentioned above may be the position of the focal plane at which imaging is optimal, i.e., the focal plane at which the region of the photographed object appears sharpest.
Compared with ordinary-light imaging, photographing with structured light as proposed in the embodiments of the present application makes it possible, on the one hand, to identify a sharp image of the surface more readily, improving applicability to weak-texture regions. On the other hand, a region of the photographed object is photographed at a plurality of focusing heights to obtain a sharp image of the region, so the focusing height can be used to determine the height information of the region, which improves calculation accuracy and simplifies the processing.
The following describes the method of operation of the concepts presented in this application with reference to fig. 4:
First, the camera lens is brought by a motion mechanism to a plurality of different focusing heights and a series of images is captured. This information is represented by a three-dimensional function I(x, y, z): for each position (x, y) in the camera field of view, the pixel value (brightness, chromaticity, etc.) of the image acquired at focusing height z is I(x, y, z).
Next, the sharpness f(x, y, z) is calculated from I(x, y, z).
Then, for each position (x, y), the focusing height with the maximum sharpness is selected and the necessary interpolation and fitting calculations are performed, from which the best focusing height z(x, y) of the imaging point at position (x, y) is derived.
Next, for each position (x, y) and its corresponding best focusing height z(x, y), the corresponding pixel value is looked up in the acquired images through the necessary interpolation and fitting calculations, giving a composite image I(x, y, z(x, y)) that is in focus everywhere.
Finally, the height h(x, y) of the object itself, i.e., the three-dimensional information of the object surface, can optionally be calculated from z(x, y) and the imaging parameters (focal length, working distance, etc.) of the camera and lens.
The interpolation and fitting calculations described above are within the ability of those skilled in the art and are not detailed here; an illustrative sketch of the whole pipeline follows.
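The sketch below makes the steps concrete, assuming the image stack has already been acquired. The function name, the neighborhood-contrast sharpness measure, uniformly spaced focusing heights, and the parabolic refinement of the sharpness peak are assumptions of this sketch, not requirements of the present application:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def focus_stack(images, heights, window=3):
    """images: (n, H, W) array; slice k was captured at focusing height
    heights[k] (ascending and, in this sketch, uniformly spaced). Returns
    the everywhere-in-focus composite I(x, y, z(x, y)) and the best-focus
    height map z(x, y)."""
    stack = np.asarray(images, dtype=np.float64)
    heights = np.asarray(heights, dtype=np.float64)
    # Sharpness f(x, y, z): deviation from the local mean within each slice
    sharp = np.abs(stack - uniform_filter(stack, size=(1, window, window)))
    rows, cols = np.indices(stack.shape[1:])
    k = np.argmax(sharp, axis=0)              # sharpest slice index per pixel
    composite = stack[k, rows, cols]          # composite image I(x, y, z(x, y))
    # Refine z between slices: parabola through the three sharpness samples
    # around the peak (clamped so that k-1 and k+1 exist at the stack ends)
    kc = np.clip(k, 1, len(heights) - 2)
    f0 = sharp[kc - 1, rows, cols]
    f1 = sharp[kc, rows, cols]
    f2 = sharp[kc + 1, rows, cols]
    denom = f0 - 2.0 * f1 + f2
    delta = np.where(np.abs(denom) > 1e-12, 0.5 * (f0 - f2) / denom, 0.0)
    z_map = heights[kc] + np.clip(delta, -1.0, 1.0) * (heights[1] - heights[0])
    return composite, z_map
```

Recovering the object height h(x, y) from the returned z(x, y) is then a matter of applying the camera and lens imaging parameters (focal length, working distance, etc.) as described above.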
In alternative embodiments, the structured pattern may, if necessary, be removed from the images to form a composite image free of the structured light. Because structured light generally consists of regular texture information, removing it is a processing method known to those skilled in the art and is not detailed here.
Fig. 5 is a schematic diagram of a composite image obtained using the scheme proposed in the embodiments of the present application. As shown in fig. 5, the resulting image is sharp and well-defined in every region, each region taken at its own best focusing height.
Specific structures for implementing the embodiments of the present application are illustrated below by three examples.
First embodiment
Fig. 6 is a schematic diagram of a depth-of-field extended imaging apparatus according to an embodiment of the present application. As shown in fig. 6, the depth-of-field extended imaging apparatus is used to photograph a subject 100. It comprises a camera unit 10, a processing unit 20, a position sensor 40, a motion unit 50, a driving mechanism 60, and a structured light source 70. Optionally, in an embodiment, the depth-of-field extended imaging apparatus may further comprise a trigger unit 30 for intermittently turning the structured light source 70 on and off.
The camera unit 10 comprises a lens 11, an image sensor 12, and a shutter 13. The lens 11 receives light emitted by the structured light source 70, and the shutter 13 may be a physical mechanical shutter or an electronic shutter integrated in the image sensor 12; the shutter 13 is connected to, and controlled by, the trigger unit 30.
The camera unit 10 is used for photographing and imaging the photographed object 100; the precision of its image sensor 12, shutter 13, and lens 11 is determined by the imaging requirements. The image sensor 12 may be a black-and-white sensor sensing brightness, a color sensor sensing color information, a multispectral sensor, or the like. The image sensor 12 is connected to the processing unit 20, to which the generated digital images are transmitted. The structured light source 70 may be operated continuously or strobed, under control of the trigger unit 30.
In one embodiment, the structured light source 70 is composed of a light emitting device 71, a structural pattern module 72, and a projection light path device 73. The light emitting device 71 provides light, which passes through the structural pattern module 72 and the projection light path device 73 to generate the structured light. The projection light path device 73 may include a convex lens, a mirror, or the like; in fig. 6 it is represented by a mirror.
The structural pattern module 72 may carry a fixed, unalterable pattern, such as a coded pattern formed by coating a glass or similar substrate. It may also be designed as a pattern generator whose pattern can be modified and switched by an external signal, such as a programmable coded pattern implemented with liquid-crystal display technology, with different pattern information stored in a storage medium and a given pattern selected under control of the trigger unit 30.
The structural pattern module 72 may be transmissive or reflective. The light emitted by the light emitting device 71 passes through the structural pattern module 72, and the structural pattern is then projected onto the subject 100 through the projection light path device 73. The trigger unit 30 controls the on and off times of the light emitting device 71 and, optionally, the pattern type of the structural pattern module 72.
Although in this embodiment the structured light source 70 is composed of the light emitting device 71, the structural pattern module 72, and the projection light path device 73, this is only an example; it will be clear to those skilled in the art that, in other embodiments, any device or assembly capable of providing structured light falls within the scope of the present application.
The motion unit 50 is used to dynamically change the focusing plane of the camera unit 10 on the photographed object 100; in this embodiment this is achieved by moving the camera unit 10 and the structured light source 70 as a whole. Driven by the driving mechanism 60, the motion unit 50 shifts the focal position of the camera lens 11 along the focusing-height direction of the subject 100, yielding a plurality of images corresponding to different focusing heights. A position sensor 40, such as a grating scale or a linear variable differential transformer (LVDT), measures the position of the moving structure so that the focusing height can be determined by the processing unit 20 from that position.
In an alternative embodiment comprising a trigger unit 30, the output signal of the trigger unit 30 may also trigger the opening and closing of the shutter 13 in the camera unit 10 and of the light emitting device 71 in the structured light source 70, as well as select the type of structured light pattern. The input signals to the trigger unit 30 may be a clock signal (timing trigger), a position signal from within the motion unit 50 (position trigger), or an event signal provided by the processing unit 20 (event trigger).
The processing unit 20 receives the digital images transmitted from the camera unit 10 and pairs them in sequence with the corresponding photographing positions (focusing heights). By analyzing the focusing-height information together with the digital images, the three-dimensional information of the subject 100 is estimated, and an image, such as a depth-of-field extended image, is synthesized from it.
In one embodiment, the processing unit determines, from each position signal recorded by the position sensor 40, the focusing height of the camera corresponding to that signal; it receives the digital images transmitted from the camera unit 10, finds the sharpest image among the plurality of focusing heights, calculates the height information (z-axis coordinate) of the photographed object from the focusing height, and combines it with the (x, y) plane coordinates of the region image to synthesize a three-dimensional image, for example the depth-of-field extended image shown in fig. 7.
The height information, the z-axis coordinate, and the (x, y) plane coordinates above are described with respect to a three-dimensional coordinate system with mutually perpendicular axes, in which the (x, y) plane is the plane on which the photographed object is placed and the z-axis is perpendicular to the (x, y) plane. Those skilled in the art will recognize that the solution proposed in the present application still holds under other coordinate systems, which are not described here.
From the above, the solution proposed by the embodiments of the present application in principle places no requirement on the height variation range of the object in the field of view; the extended depth of field is determined by the travel of the camera in the z direction and the number of acquired images, rather than by the parameters or placement of the lens or camera itself. The scheme is moreover well suited to microscopy, where the small depth of field of a microscope objective can be fully exploited: three-dimensional information is measured while the depth of field is extended.
The scheme provided by the embodiments of the present application can generate at least three kinds of images:
First, the sharpest image of each focusing layer of the photographed object 100 can be found and the layers combined into one image, so that the user obtains a sharp image of every region. That is, if the image is generated only from the x- and y-coordinates of each pixel (x, y) of each layer, with the focusing height corresponding to each pixel recorded but not used to determine the height information z of the region of the photographed object 100, a sharply focused image containing no height information is produced, as shown in fig. 5.
Second, if the height information z of each pixel (x, y), calculated from the focusing height of its region, is added to the x- and y-coordinate information, a three-dimensional image of the photographed object 100 can be generated from the x, y coordinates and the height information z of the plurality of regions. In scenes where the pixel values themselves are not needed, obtaining only the surface height at each pixel is already significant.
Third, if, on the basis of that three-dimensional image, the pixel information of each point (x, y) is added to its x, y coordinates and height information, a three-dimensional image including pixel information is generated, as shown in fig. 7.
As can be seen from the above, the imaging apparatus according to the first embodiment of the present application achieves at least the following technical effects:
As a depth-of-field extended three-dimensional imaging scheme that fuses image information captured at a plurality of focusing positions, it places no limit on the height variation range of the imaged object's surface and is well suited to high-resolution imaging scenarios. By adopting structured light illumination, it suits not only objects with obvious texture but also objects with weak or no texture, greatly expanding the application scenarios of imaging.
Second embodiment
A second embodiment of the present application proposes a depth-of-field extended imaging apparatus. Fig. 8 is a schematic diagram of a depth-of-field extended imaging apparatus according to an embodiment of the present application. As shown in fig. 8, the apparatus is used to photograph a subject 100 and comprises a camera unit 10, a processing unit 20, a synchronous trigger unit 301, a position sensor 40, a motion unit 50, a driving mechanism 60, a structured light source 70, and an auxiliary light source 80.
Relative to fig. 6, the embodiment shown in fig. 8 adds the auxiliary light source 80, and the trigger unit 30 of fig. 6 is replaced by the synchronous trigger unit 301. In the embodiment shown in fig. 8, coaxial illumination is achieved by directing the structured light source 70 through a beam-splitting prism 73, while the camera unit 10, the structured light source 70, and the beam-splitting prism 73 are held in fixed relative positions by a mechanical structure. By means of the motion unit 50 and the driving mechanism 60 they can be moved up and down as a whole, and the height information is obtained from the position sensor 40.
During the movement, the synchronous trigger unit 301 reads the position sensor 40. At a preset series of focusing heights, the structured light source 70 is turned on and the camera shutter 13 is triggered; the processing unit 20 reads the corresponding series of structured-light images from the image sensor 12 and obtains the surface height information of the photographed object 100 by analyzing image sharpness together with the focusing height at the time of capture. At another set of preset heights, the structured light source 70 is turned off, the auxiliary light source 80 (for example, ordinary light) is turned on, and the camera shutter 13 is triggered; the processing unit 20 reads the corresponding series of images from the image sensor and, combining them with the surface height information obtained from the structured-light information, computes the in-focus image information of each point on the object surface, completing the depth-of-field extended image synthesis under illumination by the auxiliary light source 80.
In fig. 8, the camera unit 10 comprises a lens 11, an image sensor 12, and a shutter 13. The lens 11 receives light emitted by the structured light source 70, and the shutter 13 may be a physical mechanical shutter or an electronic shutter integrated in the image sensor 12; the shutter 13 is connected to, and controlled by, the synchronous trigger unit 301.
The camera unit 10 comprises the image sensor 12, the shutter 13, and the lens 11 for photographing the object 100, their precision depending on the imaging requirements. The image sensor 12 may be a black-and-white sensor sensing brightness, a color sensor sensing color information, or a multispectral sensor. The image sensor 12 is connected to the processing unit 20, to which the generated digital images are transmitted. The auxiliary light source 80 may be operated continuously or strobed, under control of the synchronous trigger unit 301.
It should be noted that if the structural pattern module 72 is a programmable coded pattern implemented with liquid-crystal technology, the structured light pattern may, in particular cases, be set to a non-pattern such as full transmission. In that case the structured-light illumination degenerates into non-structured illumination (for example, ordinary light illumination), and the same hardware configuration as the auxiliary light source 80 may be adopted in whole or in part.
The output signal of the synchronous trigger unit 301 triggers the shutter 13 in the camera unit 10, the auxiliary light source 80, and the light emitting device 71 of the structured light source 70 to open and close, and selects the type of structured light pattern. The input signals to the synchronous trigger unit 301 may be a clock signal (timing trigger), a position signal from within the motion unit 50 (position trigger), or an event signal provided by the processing unit 20 (event trigger).
The processing unit 20 receives the digital images transmitted from the camera unit 10 and pairs them in sequence with the corresponding photographing positions. By analyzing the focusing information in the digital images together with the corresponding photographing positions, the three-dimensional information of the photographed object is estimated, and a depth-of-field extended image is synthesized from it.
In an embodiment of the present application, the structured light source 70 and the camera unit 10 may share the same lens, as shown in fig. 9, making the structure compact.
The working principle of the depth-of-field extended imaging apparatus proposed in the second embodiment of the present application is as follows:
1. The object-side focusing plane of the camera lens is brought by the motion unit to a plurality of preset height positions, this series of height positions being taken as a set {z_i}, i ∈ A. The set A is further divided into two parts B and C such that A = B ∪ C; B and C may have overlapping elements (in the extreme case B and C coincide completely, i.e., A = B = C).
2. At each height position {z_i}, i ∈ B, illumination from the structured light source 70 is used, alone or together with the auxiliary light source 80; the camera shutter is triggered, the images are transmitted back to the processing unit, and the processing unit records the photographing height of each image. This image information is denoted J(x, y, z).
3. At each height position {z_i}, i ∈ C, illumination from the auxiliary light source 80 is used; the camera shutter is triggered, the images are transmitted back to the processing unit, and the processing unit records the photographing height of each image. This image information is denoted I(x, y, z).
4. The structured-light images J(x, y, z) carry rich texture information. The sharpness f(x, y, z) is computed from J(x, y, z) and, with the necessary interpolation and fitting, the best focusing height z(x, y) of the imaging point at each position (x, y) is derived; the object height information h(x, y) is then calculated from z(x, y) according to the camera lens parameters. These interpolation and fitting calculations are within the ability of those skilled in the art and are not detailed here.
5. For each position (x, y) and its corresponding best focusing height z(x, y), the corresponding pixel value is looked up in the ordinary-light images through the necessary interpolation and fitting, giving an ordinary-light composite image I(x, y, z(x, y)).
It should be noted that, in some application scenarios, step 5 need not obtain the pixel value of each point: a three-dimensional image can be generated from the three-dimensional coordinates of the points alone. A minimal sketch of this two-light-source synthesis is given after this list.
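Continuing the earlier sketch (and its assumptions), the two-light-source synthesis might look as follows in the simple case B = C = A, i.e., every preset height is photographed under both light sources; `focus_stack` is the function from the sketch in the first embodiment, and nearest-slice lookup stands in for the interpolation and fitting mentioned above:

```python
import numpy as np

def synthesize_two_sources(J_stack, I_stack, heights, window=3):
    """J_stack: structured-light slices J(x, y, z), shape (n, H, W).
    I_stack: ordinary-light slices I(x, y, z) captured at the same heights.
    Heights come from the structured-light images; pixel values come from
    the ordinary-light images."""
    # Steps 2 and 4: the texture-rich structured-light images give z(x, y)
    # robustly, even on surfaces with little or no texture of their own.
    _, z_map = focus_stack(J_stack, heights, window)
    # Step 5: for each (x, y), read the pixel from the ordinary-light slice
    # whose focusing height is nearest to z(x, y).
    h = np.asarray(heights, dtype=np.float64)
    k = np.argmin(np.abs(h[:, None, None] - z_map), axis=0)
    rows, cols = np.indices(z_map.shape)
    composite = np.asarray(I_stack, dtype=np.float64)[k, rows, cols]
    return composite, z_map
```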
In addition, the structured light source and auxiliary light source described above are only illustrative of the photographing process. In use, any two light sources are possible, so long as the first light source comprises a structured light source; such arrangements fall within the scope of the present application. The first and second light sources may be turned on and off alternately according to a division into time slices. If photographs must be taken within the same time slice, the ordinary-light and structured-light information can be separated on the image sensor in ways including, but not limited to, the following: the first and second light sources have different wavelengths and are separated by corresponding optical filters on the image sensor; the first and second light sources use different polarization modes and are separated by corresponding polarizers on the image sensor; the first and second light sources appear in different imaging areas and are separated by region on the image sensor; or the first and second light sources correspond to different image sensors.
As can be seen from the above, the imaging apparatus according to the second embodiment of the present application achieves at least the following technical effects:
As a depth-of-field extended three-dimensional imaging scheme that fuses image information captured at a plurality of focusing positions, it places no limit on the height variation range of the imaged object's surface and is well suited to high-resolution imaging scenarios. By adopting structured light illumination, it suits not only objects with obvious texture but also objects with weak or no texture, greatly expanding the application scenarios of imaging.
The scheme provided by the preferred embodiments of the present application can generate various images from one or more of the coordinate information, the height information, and the pixel information, reducing the amount of data used without increasing the processing difficulty and meeting users' needs.
The function of the motion unit 50 in this embodiment is to dynamically change the focusing plane of the camera on the object. In a typical camera structure, this can also be accomplished by changing the relative position between the image sensor and the lens, i.e., by conventional focusing means. In the first and second embodiments of the present application, the focal plane of the camera on the object is changed by moving the camera unit and the structured light source as a whole. In the third embodiment described below, it is accomplished by a built-in optical-path movement mechanism.
Third embodiment
The third embodiment of the present application proposes an imaging apparatus. Points of the third embodiment that are the same as or similar to the first and second embodiments are not repeated; only the differences are described.
Fig. 10 is a schematic diagram of a depth-of-field extended imaging apparatus according to the third embodiment of the present application. As shown in fig. 10, in this embodiment the motion unit 50 drives the reflection prism 74 connected to it; when the prism moves in the left-right direction of fig. 10, the reflection point through which light from the subject 100 enters the camera unit 10 changes, which changes the focusing height. As described above, the focusing height may be the position of the focal plane at which imaging is optimal, i.e., the focal plane at which the photographed object appears sharpest.
The third embodiment therefore differs from the previous embodiments in that the motion unit 50 drives the movement of the reflection prism 74 of the structured light source 70 to change the focusing height of the camera on the object. This likewise achieves the effect of the motion unit 50 adjusting the focusing height of the camera unit 10 on the object, and realizes the solution proposed in the present application with a compact structure.
In the preferred embodiments of the present application, by adopting structured light together with ordinary light illumination, the scheme suits not only objects with obvious texture but also objects with weak or no texture, greatly expanding the application scenarios of imaging. With the same camera and lens configuration, the measurement precision in the height direction is higher than that of conventional depth-of-field extension schemes. When measuring three-dimensional information, a small depth of field is not required of the camera lens; lenses with different depths of field can be used, and the three-dimensional information of a given region can still be measured accurately. The scheme provided by the preferred embodiments of the present application also simplifies the hardware structure, achieves a compact layout, and reduces the space occupied.
While preferred embodiments of the present application have been described, additional variations and modifications to those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. The appended claims are therefore intended to be interpreted as covering the preferred embodiments and all variations and modifications falling within the scope of the embodiments of the present application.
Finally, it should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or terminal device that comprises the element.
The imaging apparatus provided in the present application has been described in detail above. Specific examples are used herein to illustrate the principles and embodiments of the present application; the above description of the embodiments is provided only to help understand the method of the present application and its core ideas. A person skilled in the art may, following the ideas of the present application, make changes to the specific embodiments and the scope of application; in view of the above, the content of this description should not be construed as limiting the present application.

Claims (10)

1. An imaging apparatus for photographing a photographed object having a plurality of regions, characterized in that the imaging apparatus comprises a camera unit, a synchronous triggering unit, a motion unit, a processing unit, a first light source, and a second light source, wherein:
the first light source is used for projecting the generated first light onto the photographed object, the first light source comprising a structured light source;
the second light source is used for projecting the generated second light onto the photographed object;
the motion unit is used for changing the focusing height of the camera unit within a region of the photographed object;
the camera unit is used for photographing, at a plurality of focusing heights, the region of the photographed object under the projection of the first light source to obtain a plurality of corresponding first candidate images, for photographing, at a plurality of focusing heights, the region of the photographed object under the projection of the second light source to obtain a plurality of corresponding second candidate images, and for sending the first candidate images and the second candidate images to the processing unit;
the synchronous triggering unit is connected to the first light source, the second light source and the camera unit, and is used for keeping the first light source on while the camera unit photographs the first candidate images and keeping the second light source on while the camera unit photographs the second candidate images;
the processing unit is connected to the motion unit and the camera unit, and is configured to receive the plurality of first candidate images sent by the camera unit, select a reference image corresponding to the region from the plurality of first candidate images, acquire a corresponding region image from the plurality of second candidate images according to the selected reference image, and synthesize an image of the photographed object using a plurality of region images corresponding to the plurality of regions.
2. The imaging apparatus according to claim 1, wherein the processing unit is further configured to select a reference image corresponding to the region from the plurality of first candidate images according to sharpness, calculate height information of the region from the focusing height corresponding to the region image, and synthesize an image of the photographed object using the height information of the plurality of regions and the position information of the region images.
3. The imaging apparatus according to claim 1, wherein the processing unit is further configured to select a reference image corresponding to the region from the plurality of first candidate images according to sharpness, calculate height information of the region from the focusing height corresponding to the region image, and synthesize an image of the photographed object using the height information, the pixel information of the region images, and the position information of the region images.
4. The imaging apparatus according to claim 1, wherein the motion unit is connected to at least one of the camera unit and the photographed object, the imaging apparatus further comprising a driving unit for driving the motion unit so as to move the camera unit or the photographed object.
5. The imaging apparatus according to claim 1, wherein the motion unit is connected to an optical path device in the first light source for changing the focusing height of the camera unit within the region of the photographed object by adjusting the optical path device.
6. The imaging apparatus according to claim 1, wherein the camera unit comprises a lens and an image sensor, and the motion unit is connected to one of the lens and the image sensor for adjusting the relative position between the image sensor and the lens of the camera unit.
7. The imaging apparatus according to claim 1, wherein the structured light source comprises a light emitting device, a structural pattern module, and a projection light path assembly, wherein the structural pattern module comprises a carrier bearing a fixed or variable structural pattern.
8. The imaging apparatus according to claim 1, wherein the processing unit is specifically configured to select a reference image corresponding to the region from the plurality of first candidate images according to sharpness, calculate the photographing height of the reference image, and use the photographing height to acquire the region image of the corresponding height from the plurality of second candidate images.
9. The imaging apparatus according to claim 1, wherein the first light source further comprises a natural light source, and the second light source comprises a natural light source.
10. The imaging apparatus according to claim 1, wherein the camera unit comprises an image sensor, and the synchronous triggering unit is specifically configured to:
turn on the first light source at a first timing and turn on the second light source at a second timing; or
enable, at the first timing, an optical filter on the image sensor that blocks light from the second light source, and enable, at the second timing, an optical filter on the image sensor that blocks light from the first light source; or
enable, at the first timing, a polarizer on the image sensor that passes light from the first light source, and enable, at the second timing, a polarizer on the image sensor that passes light from the second light source; or
activate different imaging areas of the image sensor at the first timing and the second timing, respectively; or
activate different image sensors of the camera unit at the first timing and the second timing, respectively.
CN201810935087.XA 2018-08-16 2018-08-16 Image forming apparatus Active CN108616698B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810935087.XA CN108616698B (en) 2018-08-16 2018-08-16 Image forming apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810935087.XA CN108616698B (en) 2018-08-16 2018-08-16 Image forming apparatus

Publications (2)

Publication Number Publication Date
CN108616698A CN108616698A (en) 2018-10-02
CN108616698B true CN108616698B (en) 2024-04-16

Family

ID=63666994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810935087.XA Active CN108616698B (en) 2018-08-16 2018-08-16 Image forming apparatus

Country Status (1)

Country Link
CN (1) CN108616698B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10893183B1 (en) * 2019-11-18 2021-01-12 GM Global Technology Operations LLC On-vehicle imaging system
CN113467033A (en) * 2021-06-24 2021-10-01 南昌欧菲光电技术有限公司 Camera module and lens positioning method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102074044A (en) * 2011-01-27 2011-05-25 深圳泰山在线科技有限公司 System and method for reconstructing surface of object
CN103606181A (en) * 2013-10-16 2014-02-26 北京航空航天大学 Microscopic three-dimensional reconstruction method
CN104394323A (en) * 2014-12-04 2015-03-04 厦门大学 Photographing method of enlarged microscopic image
CN106412426A (en) * 2016-09-24 2017-02-15 上海大学 Omni-focus photographing apparatus and method
CN208461946U (en) * 2018-08-16 2019-02-01 朱炳强 Imaging device

Also Published As

Publication number Publication date
CN108616698A (en) 2018-10-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant