WO2021115301A1 - Close-range target 3D acquisition apparatus - Google Patents

Close-range target 3D acquisition apparatus

Info

Publication number
WO2021115301A1
WO2021115301A1 (Application PCT/CN2020/134763; CN2020134763W)
Authority
WO
WIPO (PCT)
Prior art keywords
target
image acquisition
image
acquisition device
synthesis
Prior art date
Application number
PCT/CN2020/134763
Other languages
English (en)
Chinese (zh)
Inventor
左忠斌
左达宇
Original Assignee
左忠斌
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 左忠斌
Publication of WO2021115301A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Definitions

  • The invention relates to the technical field of shape measurement, and in particular to the technical field of 3D shape measurement.
  • To measure the 3D shape of an object, 3D information needs to be collected first.
  • Commonly used methods include using machine vision to capture images of an object from different angles and matching these images to form a 3D model.
  • Multiple cameras can be placed at different angles around the object to be measured, or images can be captured from different angles by rotating one or more cameras.
  • For example, the Digital Emily project at the University of Southern California uses a spherical rig to fix hundreds of cameras at different positions and angles, realizing 3D acquisition and modeling of the human body. But even with such a device, only objects around the size of a human body can be captured, and only indoors.
  • Moreover, the use of a large number of cameras makes installation and debugging of the equipment very difficult and the equipment very expensive. When shooting a smaller target (such as a fingerprint, or even an object under a microscope), the target is so small that the space left for cameras is limited, and it is difficult to install such a large number of cameras. In addition, such collection equipment is designed for a single size; once the size of the object changes greatly, it no longer works.
  • To take both synthesis speed and synthesis quality into account, the current prior art proposes empirical formulas involving rotation angle, target size, and object distance to constrain the camera position.
  • However, measuring the size of the target is itself a difficult task. If a target measurement has to be performed before every 3D acquisition and synthesis, it brings an additional burden, and the accuracy is difficult to guarantee.
  • Measurement error leads to camera-position error, which in turn affects acquisition and synthesis speed and quality; accuracy and speed need to be further improved.
  • In view of the above problems, the present invention is proposed to provide a collection device that overcomes them or at least partially solves them.
  • In one aspect, the present invention provides a 3D acquisition device for a close-range target, comprising:
  • an acquisition area moving device, used to drive the acquisition area of the image acquisition device to move relative to the target;
  • an image acquisition device, used to acquire a set of images of the target through the above-mentioned relative motion;
  • wherein the acquisition position of the image acquisition device satisfies the condition set out below.
  • In another aspect, the present invention also provides a 3D acquisition device for a close-range target, comprising:
  • multiple image acquisition devices arranged around the target, used to acquire images of the target from different directions;
  • wherein the acquisition positions of the image acquisition devices satisfy the condition set out below.
  • A background board is provided on the side opposite the image acquisition device.
  • The acquisition area moving device is a rotating device that drives the image acquisition device and/or the target to rotate.
  • The rotating device is a turntable and/or a rotating arm.
  • The lens of the image acquisition device is a macro lens or a microscope lens.
  • The stage is a concentric-circle structure whose different zones can be raised and lowered.
  • The value of the condition is less than 0.412 and, preferably, less than 0.335.
  • The present invention also provides a 3D synthesis device or method using any of the above equipment, or a 3D recognition/comparison device or method.
  • The invention also provides a method or device for making an accessory for the target using any of the above equipment.
  • By providing a stage structure that facilitates macro acquisition, the device is made suitable for targets of various sizes.
  • FIG. 1 is a schematic structural diagram of the image-acquisition-device rotation mode of the 3D acquisition device provided by an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of the structure of a concentric circle stage provided by an embodiment of the present invention.
  • FIG. 3 is a top view of the concentric circle stage provided by the embodiment of the present invention in a retracted state
  • FIG. 4 is a schematic structural diagram of a target rotation mode of an image acquisition device provided by an embodiment of the present invention.
  • Figure 5 is a schematic diagram of the structure of the image acquisition device with multiple cameras
  • FIG. 6 is a schematic structural diagram of an image acquisition device provided with a background board according to an embodiment of the present invention.
  • an embodiment of the present invention provides a close-range target 3D acquisition device, which includes an image acquisition device and a rotating device.
  • the image acquisition device is used to acquire a set of images of the target through the relative movement of the acquisition area of the image acquisition device and the target; the acquisition area moving device is used to drive the acquisition area of the image acquisition device to move relative to the target.
  • the acquisition area is the effective field of view range of the image acquisition device.
  • the equipment includes a circular stage 1 for carrying tiny objects;
  • The rotating device 2 can be a rotating arm. The rotating arm is bent: its horizontal lower section is rotatably fixed to the base 3, so that its vertical upper section rotates around the stage 1.
  • The image acquisition device 4, used to acquire images of the target, is installed on the upper section of the rotating arm.
  • In particular, the image acquisition device 4 can be moved up and down along the rotating arm to adjust the acquisition angle.
  • the target is fixed on the stage 1, and the rotating device 2 drives the image acquisition device 4 to rotate around the target.
  • the rotating device 2 can drive the image acquisition device 4 to rotate around the target through a rotating arm.
  • Note that this rotation need not be a complete circular motion; it may rotate through only a certain angle, depending on acquisition needs.
  • this rotation does not necessarily have to be a circular motion, and the motion trajectory of the image acquisition device 4 can be other curved trajectories, as long as it is ensured that the camera shoots the object from different angles.
  • the rotating device 2 can also be in various forms such as a turntable, a track, etc., so that the image acquisition device 4 can move.
  • the image acquisition device 4 is used to acquire an image of the target object, and it can be a fixed focus camera or a zoom camera. In particular, it can be a visible light camera or an infrared camera.
  • the lens of the image acquisition device 4 is a macro lens, and the distance to the target is very short when shooting. In particular, the lens of the image acquisition device 4 may be a microscope lens, so that the device can synthesize a 3D model of a micro-sized target.
  • The surface of the stage 1 is a concentric-circle structure, as shown in Figures 2-3.
  • The effective size of the stage can be selected according to the size of the target. For example, when the target is 1 cm across, the table top with a diameter of 2 cm is raised and the table tops outside 2 cm are lowered to the base. Since the image acquisition device 4 is very close to the object, this arrangement leaves enough space for it to rotate.
  • The stage can be provided with concentric rings of various diameters, such as 1 cm, 2 cm, 5 cm, and 10 cm. This is also one of the invention points of the present invention.
  • the rotating arm includes at least two sections, a horizontal lower section and a vertical upper section.
  • The horizontal lower section is mounted on the base through a bearing so that it can rotate around the center of the base.
  • the horizontal lower section can be a telescopic structure, which is convenient for adjusting the turning radius of the rotating arm.
  • the upper vertical section is driven by the lower horizontal section and rotates around the stage 1, thereby driving the image acquisition device 4 on it to perform acquisition.
  • the vertical upper section can also be a telescopic structure to facilitate adjustment of the collection height.
  • the horizontal lower section and the vertical upper section are not limited to strictly horizontal and vertical, and can be inclined within a reasonable range. For example, the horizontal lower section may extend outward from the center of the base along an upward inclination angle.
  • In some cases the camera can also be fixed; please refer to Figure 4.
  • Here the stage 1 carrying the target rotates, so that the direction in which the target faces the image acquisition device changes continuously, allowing the image acquisition device to acquire images of the target from different angles.
  • In this case the calculation can still be performed by converting the situation into an equivalent movement of the image acquisition device, so that the movement conforms to the corresponding empirical formula (described in detail below).
  • For example, in the scenario where the stage rotates, the required spacing between adjacent acquisition positions is deduced from the empirical formula, and from it the rotation speed of the stage, which facilitates speed control and realizes 3D acquisition. A conversion sketch follows.
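  • For illustration only (not part of the published text): assuming the acquisition positions lie on a circular orbit of radius R around the rotation axis, a chord spacing L between adjacent exposures corresponds to a central angle of 2·arcsin(L/(2R)), from which a stage speed can be derived. The function names and numbers below are hypothetical.

```python
import math

def stage_step_angle(L: float, R: float) -> float:
    """Central angle (radians) between two adjacent acquisition
    positions separated by chord length L on a circle of radius R."""
    return 2.0 * math.asin(L / (2.0 * R))

def stage_speed(L: float, R: float, shutter_period: float) -> float:
    """Rotation speed (rad/s) such that one image is taken every
    shutter_period seconds at the desired position spacing."""
    return stage_step_angle(L, R) / shutter_period

# Example: adjacent positions 8 mm apart on a 60 mm-radius orbit,
# one exposure per second -> ~0.133 rad (~7.6 degrees) per step.
omega = stage_speed(L=0.008, R=0.060, shutter_period=1.0)
print(f"stage speed: {omega:.4f} rad/s")
```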
  • The processor, also called a processing unit, is used to synthesize a 3D model of the target from the plurality of images collected by the image acquisition device according to a 3D synthesis algorithm, so as to obtain the 3D information of the target.
  • the acquisition area moving device is an optical scanning device, so that when the image acquisition device does not move or rotate, the acquisition area of the image acquisition device moves relative to the target.
  • The acquisition area moving device also includes a light deflection unit, which is mechanically driven to rotate, or electrically driven to deflect the light path, or arranged in multiple groups in space, so as to obtain images of the target from different angles.
  • the light deflection unit may typically be a mirror, which rotates so that images of the target object in different directions are collected.
  • the rotation of the optical axis in this case can be regarded as the rotation of the virtual position of the image acquisition device.
  • Besides moving a single camera to capture images of the target from different angles, as shown in Figure 5, multiple cameras can also be set at different positions around the target, so that images of the target from different angles can be captured simultaneously.
  • the background board 5 is located opposite to the image acquisition device 4, and it rotates synchronously when the image acquisition device rotates, and remains still when the image acquisition device 4 is stationary.
  • another rotating arm with the same structure is installed on the opposite side of the rotating arm where the image acquisition device 4 is installed to carry the background board 5, and the two rotating arms rotate synchronously.
  • the above two rotating arms can be constructed in one piece.
  • The background board is entirely solid-colored, or mostly (in its main body) solid-colored. In particular, it can be a white board or a black board; the specific color can be selected according to the main color of the target.
  • The background board 5 is usually a flat panel, but may preferably be a curved panel, such as a concave panel, a convex panel, or a spherical panel; in some application scenarios it can even be a panel with a wavy surface, or a spliced panel of various shapes, for example three planes spliced to form an overall concave shape, or flat and curved surfaces spliced together.
  • the light source is distributed around the lens of the image acquisition device 4 in a dispersed manner, for example, the light source is a ring LED lamp around the lens.
  • When the collected object is a human body, the intensity of the light source needs to be controlled to avoid discomfort.
  • a soft light device such as a soft light housing, can be arranged on the light path of the light source.
  • Directly using an LED surface light source gives light that is not only softer but also more uniform.
  • an OLED light source can be used, which is smaller in size, has softer light, and has flexible characteristics that can be attached to curved surfaces.
  • the light source can also be set in other positions that can provide uniform illumination for the target.
  • the light source can also be a smart light source, that is, the light source parameters are automatically adjusted according to the target object and ambient light conditions.
  • the optical axis direction of the image acquisition device changes relative to the target at different acquisition positions.
  • When 3D acquisition is performed, the positions of two adjacent image acquisition devices, or two adjacent acquisition positions of a single moving image acquisition device, satisfy an empirical condition expressed in terms of the following quantities:
  • d: when the two positions are along the length direction of the photosensitive element (sensor rectangle) of the image acquisition device, d takes the length of the rectangle; when the two positions are along the width direction of the photosensitive element, d takes the width of the rectangle;
  • T: the distance from the photosensitive element to the surface of the target along the optical axis;
  • L: the straight-line distance between the optical centers of the image acquisition device at two adjacent acquisition positions A_n and A_(n+1); the calculation is not limited to 4 adjacent positions, and more positions can be used for an average calculation;
  • f: the focal length of the image acquisition device.
  • In principle, L should be the straight-line distance between the optical centers at the two acquisition positions. However, because the position of the optical center is in some cases not easy to determine, the center of the photosensitive element, the geometric center of the image acquisition device, the center of the shaft connecting the image acquisition device to the pan/tilt head (or platform or bracket), or the center of the proximal or distal lens surface can be used instead; the error introduced by this substitution is within an acceptable range, and therefore these variants are also within the protection scope of the present invention.
  • In the prior art, parameters such as object size and field-of-view angle are used to estimate the camera position, and the positional relationship between two cameras is also expressed by an angle. Since an angle is not easy to measure in actual use, this is rather inconvenient. Moreover, the object size changes with the measured object: for example, after collecting the 3D information of an adult's head, collecting a child's head requires the head size to be re-measured and the positions recalculated. These inconvenient measurements and repeated re-measurements introduce measurement errors, which in turn cause errors in the camera-position estimate.
  • By contrast, this solution gives the empirical condition that the camera positions need to satisfy; it avoids measuring angles that are difficult to measure accurately, and it does not require directly measuring the object size.
  • In the condition, d and f are fixed parameters of the camera, known when the camera and lens are purchased, and require no measurement.
  • T is only a straight-line distance that can be conveniently measured by traditional methods such as rulers or laser rangefinders. The empirical condition of the present invention therefore makes the preparation process convenient and quick, and at the same time improves the accuracy of the camera-position arrangement, so that the camera can be set at an optimized position, taking into account both 3D synthesis accuracy and speed; see below for specific experimental data.
  • When the lens is replaced, the camera position can be recalculated directly from the conventional parameter f of the new lens. Similarly, whereas collecting objects of different sizes conventionally requires a cumbersome re-measurement of the object size, with the method of the present invention no such measurement is needed and the camera position can be determined much more conveniently.
  • The camera position determined by the present invention takes into account both synthesis time and synthesis effect; the above empirical condition is therefore one of the invention points of the present invention. An illustrative sketch follows.
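  • For illustration only: the published text does not reproduce the condition's formula (it appears as an image in the original document), so the sketch below assumes one plausible form, in which the ratio L·f/(T·d) — the adjacent-position spacing relative to the sensor footprint d·T/f projected at the working distance — is bounded by the empirical threshold (0.412, preferably 0.335). The function names, the assumed form, and the numbers are mine, not the patent's.

```python
def position_ratio(L: float, d: float, f: float, T: float) -> float:
    """Assumed form of the empirical camera-position condition:
    spacing L between adjacent acquisition positions divided by the
    sensor footprint (d * T / f) projected onto the target."""
    return L * f / (T * d)

def max_spacing(d: float, f: float, T: float, threshold: float = 0.412) -> float:
    """Largest adjacent-position spacing L that keeps the assumed
    ratio under the empirical threshold."""
    return threshold * T * d / f

# Hypothetical macro setup: 36 mm sensor length, 100 mm lens,
# 250 mm working distance, tighter threshold 0.335.
L_max = max_spacing(d=0.036, f=0.100, T=0.250, threshold=0.335)
print(f"max adjacent spacing: {L_max * 1000:.1f} mm")  # ~30.2 mm
```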
  • the multiple images are transmitted to the processor in a data transmission manner.
  • the processor can be set locally, or the image can be uploaded to the cloud platform to use a remote processor. Use the following method in the processor to synthesize the 3D model.
  • the image acquisition device 4 acquires a set of images of the target object by moving relative to the target object;
  • the processing unit obtains the 3D information of the target object according to the multiple images in the above-mentioned set of images.
  • the specific algorithm is as follows.
  • the processing unit can be directly arranged in the housing where the image acquisition device 4 is located, or it can be connected to the image acquisition device 4 through a data line or in a wireless manner.
  • an independent computer, server, cluster server, etc. can be used as the processing unit, and the image data collected by the image acquisition device 4 is transmitted to it for 3D synthesis.
  • the data of the image acquisition device 4 can also be transmitted to a cloud platform, and the powerful computing power of the cloud platform can be used for 3D synthesis.
  • the existing algorithm can be used to realize it, or the optimized algorithm proposed by the present invention can be used, which mainly includes the following steps:
  • Step 1: Perform image enhancement processing on all input photos.
  • The following (Wallis) filter is used to enhance the contrast of the original photos and suppress noise at the same time. Its variables are defined as follows, and a sketch of the standard form is given after this list:
  • g(x, y) is the gray value of the original image at (x, y);
  • f(x, y) is the gray value at (x, y) after enhancement by the Wallis filter;
  • m_g is the local gray-level mean of the original image;
  • s_g is the local gray-level standard deviation of the original image;
  • m_f is the target value for the local gray-level mean of the transformed image;
  • s_f is the target value for the local gray-level standard deviation of the transformed image;
  • c ∈ (0, 1) is the expansion constant for the image variance;
  • b ∈ (0, 1) is the image brightness coefficient constant.
  • This filter can greatly enhance image texture patterns at different scales, so the number and accuracy of feature points extracted from the image can be improved, and the reliability and accuracy of the matching results in photo feature matching are improved as well.
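  • The filter formula itself is not reproduced in the published text. A standard form of the Wallis transform consistent with the variable definitions above is f(x, y) = [g(x, y) − m_g] · c·s_f / (c·s_g + (1 − c)·s_f) + b·m_f + (1 − b)·m_g; the NumPy sketch below assumes this form, with an illustrative window size and target values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis(img, win=31, m_f=127.0, s_f=60.0, c=0.8, b=0.9):
    """Wallis filter in a standard form consistent with the variables
    defined in the text: m_g/s_g are the local mean/std over a
    win x win window; m_f/s_f are the target mean/std; c and b are
    the variance-expansion and brightness constants in (0, 1)."""
    g = img.astype(np.float64)
    m_g = uniform_filter(g, size=win)                      # local mean
    s_g = np.sqrt(np.maximum(
        uniform_filter(g * g, size=win) - m_g ** 2, 0.0))  # local std
    r1 = (c * s_f) / (c * s_g + (1.0 - c) * s_f)           # gain
    r0 = b * m_f + (1.0 - b) * m_g                         # offset
    out = (g - m_g) * r1 + r0
    return np.clip(out, 0, 255).astype(np.uint8)
```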
  • Step 2: Extract feature points from all input photos and match them to obtain sparse feature points.
  • The SURF operator is used to extract and match the feature points of the photos.
  • The SURF feature matching method mainly includes three processes: feature point detection, feature point description, and feature point matching. The method uses the Hessian matrix to detect feature points, uses box filters in place of second-order Gaussian filtering together with integral images to accelerate convolution and increase calculation speed, and reduces the dimensionality of the local image feature descriptor to speed up matching.
  • The main steps are: (1) construct the Hessian matrix to generate all points of interest for feature extraction; the purpose of constructing the Hessian matrix is to generate stable edge points (mutation points) of the image; (2) construct the scale space and locate the feature points: each pixel processed by the Hessian matrix is compared with its 26 neighbors in two-dimensional image space and scale space to initially locate the key points, and key points with weaker energy or incorrect localization are filtered out, leaving the final stable feature points; (3) determine the main direction of each feature point from the Haar wavelet responses in its circular neighborhood: the sums of the horizontal and vertical Haar wavelet responses of all points within a 60-degree sector are computed, the sector is then rotated at intervals of 0.2 radians and the responses in the sector are recomputed, and the direction of the sector with the largest response is taken as the main direction of the feature point; (4) generate the 64-dimensional feature descriptor: a 4×4 grid of square sub-regions is taken around the feature point, oriented along the feature point's main direction, and each sub-region accumulates the Haar wavelet responses of 25 pixels in the horizontal and vertical directions, where horizontal and vertical are relative to the main direction. An OpenCV sketch of this step follows.
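  • A minimal sketch of this step using OpenCV (a tooling choice for illustration, not the patent's implementation). SURF is in the opencv-contrib package and may require a build with the nonfree modules enabled; cv2.SIFT_create is a stock alternative. The filenames and thresholds are hypothetical.

```python
import cv2

# SURF detector (opencv-contrib, nonfree); 64-D descriptors by default.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

img1 = cv2.imread("view_000.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_001.jpg", cv2.IMREAD_GRAYSCALE)

kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to keep only
# distinctive correspondences between the two views.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(f"{len(good)} putative correspondences")
```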
  • Step 3: Input the coordinates of the matched feature points and use bundle adjustment to solve for the sparse 3D point cloud of the target and the position and pose data of the cameras, i.e., obtain the sparse 3D point cloud of the target model and the camera position/pose coordinate values.
  • Taking the sparse feature points as initial values, dense matching of the multi-view photos is performed to obtain dense point cloud data.
  • The process has four main steps: stereo pair selection, depth map computation, depth map refinement, and depth map fusion. For each image in the input data set, a reference image is selected to form a stereo pair for computing a depth map, so rough depth maps of all images are obtained. These depth maps may contain noise and errors, so the depth maps of neighboring views are used for a consistency check to refine the depth map of each image. Finally, depth map fusion yields a three-dimensional point cloud of the entire scene. A toy consistency-check sketch follows.
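  • For illustration only: a toy version of the neighborhood depth-map consistency check described above, assuming pinhole intrinsics K and world-from-camera poses (R, t). This is a sketch of the general technique, not the patent's algorithm.

```python
import numpy as np

def consistent(depth_ref, K, pose_ref, depth_nb, pose_nb, tol=0.01):
    """Keep a reference-view pixel if, reprojected into a neighbour
    view, the neighbour's depth agrees within a relative tolerance."""
    h, w = depth_ref.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3).T
    # Back-project reference pixels to world coordinates.
    cam = np.linalg.inv(K) @ pix * depth_ref.reshape(1, -1)
    R_r, t_r = pose_ref
    world = R_r @ cam + t_r[:, None]
    # Project the world points into the neighbour camera.
    R_n, t_n = pose_nb
    cam_n = R_n.T @ (world - t_n[:, None])
    z = cam_n[2]
    uv = (K @ cam_n)[:2] / np.clip(z, 1e-9, None)
    u, v = np.round(uv).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (z > 0)
    mask = np.zeros(h * w, dtype=bool)
    idx = np.where(ok)[0]
    d_nb = depth_nb[v[idx], u[idx]]
    mask[idx] = np.abs(d_nb - z[idx]) < tol * z[idx]
    return mask.reshape(h, w)
```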
  • Step 4: Reconstruct the surface of the target from the dense point cloud, including the processes of defining the octree, setting the function space, creating the vector field, solving the Poisson equation, and extracting the isosurface.
  • The integral relationship between the sampling points and the indicator function is obtained from the gradient relationship, and the vector field of the point cloud is computed according to this integral relationship.
  • The approximation of the indicator-function gradient field is computed to form the Poisson equation.
  • An approximate solution of the Poisson equation is obtained by matrix iteration, the isosurface is extracted with the marching cubes algorithm, and the model of the measured object is reconstructed from the measured point cloud. A sketch using an open-source implementation follows.
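  • A minimal sketch of this step using the open-source Open3D library (a tooling choice for illustration, not the patent's implementation; the input filename and parameters are hypothetical): normals are estimated to build the oriented vector field, then the screened-Poisson problem is solved and the isosurface mesh extracted.

```python
import open3d as o3d

# dense_points.ply: a hypothetical dense point cloud from Step 3.
pcd = o3d.io.read_point_cloud("dense_points.ply")

# Normals approximate the oriented vector field described above.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=15)

# depth controls the octree resolution of the function space.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("surface.ply", mesh)
```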
  • Step 5: Fully automatic texture mapping of the model. After the surface model is built, texture mapping is performed.
  • The main process includes: (1) obtaining texture data through the images of the reconstructed target's surface triangle mesh; (2) visibility analysis of the triangles of the reconstructed model, using the image calibration information to compute the visible-image set and the optimal reference image of each triangle; (3) clustering the triangle faces to generate texture patches, according to each triangle's visible-image set, its optimal reference image, and the neighborhood topology of the triangle faces; (4) automatically packing the texture patches to generate the texture image: the generated patches are sorted by size, the texture image with the smallest enclosing area is generated, and the texture-mapping coordinates of each triangle are obtained.
  • The above-mentioned algorithm is an optimized algorithm of the present invention. It cooperates with the image acquisition conditions, and its use takes into account both the time and the quality of synthesis, which is one of the invention points of the present invention.
  • the conventional 3D synthesis algorithm in the prior art can also be used, but the synthesis effect and speed will be affected to a certain extent.
  • After the 3D information of the target is collected and the 3D model synthesized, an accessory can be made for the target according to the 3D data.
  • For example, a microscope lens is used to capture images of a cell over 360° and synthesize a three-dimensional model of the cell.
  • The three-dimensional model data can then be enlarged at a uniform scale to make a solid model of the cell for scientific research and teaching.
  • The rotational movement of the present invention means that, during acquisition, the acquisition plane at the previous position and the acquisition plane at the next position intersect rather than being parallel, or the optical axis of the image acquisition device at the previous position intersects that at the next position rather than being parallel to it. That is to say, movement of the acquisition area of the image acquisition device around or partly around the target can be regarded as relative rotation of the two.
  • Although the embodiments of the present invention enumerate mostly track-based rotational motions, it can be understood that as long as non-parallel motion occurs between the acquisition area of the image acquisition device and the target, it is in the category of rotation and the restrictive conditions of the present invention can be applied.
  • The protection scope of the present invention is therefore not limited to the orbital rotation of the embodiments.
  • The adjacent acquisition positions in the present invention are two adjacent positions, on the movement track of the image acquisition device relative to the target, at which acquisition actions occur. This is usually easy to understand when the image acquisition device moves. When it is the target that moves, however, the relativity of motion should be used to convert the situation into one where the target is stationary and the image acquisition device moves, and the two adjacent positions where acquisition occurs are then measured on the converted movement track.
  • Besides capturing still images, the image acquisition device can also record video data, and the video data, or images extracted from it, can be used directly for 3D synthesis.
  • The shooting positions of the video frames, or of the extracted images, used in the synthesis must still satisfy the above empirical condition. An illustrative frame-selection sketch follows.
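  • For illustration only: picking every k-th frame of a turntable video so that adjacent kept frames respect a maximum spacing L_max on an orbit of radius R. The variable names, numbers, and the small-angle approximation (arc length ≈ chord length) are mine, not the patent's.

```python
import cv2

def frame_stride(omega: float, fps: float, R: float, L_max: float) -> int:
    """Number of frames to skip so adjacent kept frames are at most
    L_max apart on an orbit of radius R, given stage speed omega
    (rad/s) and video rate fps; assumes arc length ~ chord length."""
    per_frame = omega * R / fps          # metres travelled per frame
    return max(1, int(L_max / per_frame))

cap = cv2.VideoCapture("turntable.mp4")  # hypothetical capture video
stride = frame_stride(omega=0.13, fps=30.0, R=0.060, L_max=0.008)
kept, idx = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % stride == 0:
        kept.append(frame)               # frames handed to 3D synthesis
    idx += 1
cap.release()
print(f"kept {len(kept)} frames at stride {stride}")
```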
  • The target object, target, and object mentioned above all denote an object for which three-dimensional information is to be acquired. It can be a physical object or a combination of multiple objects, for example a head, a hand, and so on.
  • The three-dimensional information of the target includes a three-dimensional image, a three-dimensional point cloud, a three-dimensional mesh, local three-dimensional features, three-dimensional dimensions, and all parameters having three-dimensional features of the target.
  • The so-called three-dimensional in the present invention refers to information in the three XYZ directions, especially depth information, which is essentially different from having only two-dimensional plane information. It is also essentially different from definitions that are called three-dimensional, panoramic, holographic, or stereoscopic but actually include only two-dimensional information and, in particular, lack depth information.
  • the collection area mentioned in the present invention refers to the range that an image collection device (such as a camera) can shoot.
  • The image acquisition device in the present invention can be a CCD, a CMOS sensor, a camera, a video camera, an industrial camera, a monitor, a webcam, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet, or any device with an image acquisition function.
  • Modules, units, or components in the embodiments can be combined into one module, unit, or component, and can in addition be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
  • the various component embodiments of the present invention may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them.
  • In practice, a microprocessor or a digital signal processor (DSP) may be used to implement some or all of the functions of some or all of the components in the device according to the embodiments of the present invention.
  • the present invention can also be implemented as a device or device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program for realizing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals.
  • Such a signal can be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A close-range target 3D acquisition apparatus, comprising: an acquisition area moving device, used to drive the acquisition area of an image acquisition device (4) to move relative to a target; and the image acquisition device (4), used to acquire a set of images of the target by means of the relative motion, the acquisition position of the image acquisition device (4) satisfying a preset condition. A 3D acquisition and synthesis method for small objects is proposed for the first time. By configuring a background board (5) to rotate as well, it is ensured that both the synthesis speed and the synthesis accuracy can be improved.
PCT/CN2020/134763 2019-12-12 2020-12-09 Close-range target 3D acquisition apparatus WO2021115301A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911288917.5 2019-12-12
CN201911288917.5A CN111076674B (zh) 2019-12-12 2019-12-12 Close-range target 3D acquisition device

Publications (1)

Publication Number Publication Date
WO2021115301A1 (fr) 2021-06-17

Family

ID=70314822

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/134763 WO2021115301A1 (fr) 2019-12-12 2020-12-09 Close-range target 3D acquisition apparatus

Country Status (2)

Country Link
CN (1) CN111076674B (fr)
WO (1) WO2021115301A1 (fr)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111076674B (zh) * 2019-12-12 2020-11-17 天目爱视(北京)科技有限公司 Close-range target 3D acquisition device
CN111678429B (zh) * 2020-06-09 2022-02-22 江苏瑞奇海力科技有限公司 Microscopic measurement system and microscopic measurement method
CN112254671B (zh) * 2020-10-15 2022-09-16 天目爱视(北京)科技有限公司 Multiple-combination 3D acquisition system and method
CN112254669B (zh) * 2020-10-15 2022-09-16 天目爱视(北京)科技有限公司 Multi-offset-angle intelligent visual 3D information acquisition device
CN112253913B (zh) * 2020-10-15 2022-08-12 天目爱视(北京)科技有限公司 Intelligent visual 3D information acquisition device offset from the center of rotation
CN112484663B (zh) * 2020-11-25 2022-05-03 天目爱视(北京)科技有限公司 Multi-roll-angle intelligent visual 3D information acquisition device

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101074876A (zh) * 2007-06-26 2007-11-21 北京中星微电子有限公司 Method and device for automatic distance measurement
CN201258235Y (zh) * 2008-08-21 2009-06-17 机械科学研究院浙江分院有限公司 Multi-column lifting operation platform
CN103075960A (zh) * 2012-12-30 2013-05-01 北京工业大学 Multi-view, large-depth microscopic stereo vision feature fusion measurement method
CN104299261A (zh) * 2014-09-10 2015-01-21 深圳大学 Human body three-dimensional imaging method and system
US20160182903A1 (en) * 2014-12-19 2016-06-23 Disney Enterprises, Inc. Camera calibration
CN108253938A (zh) * 2017-12-29 2018-07-06 武汉大学 Digital close-range photogrammetric identification and inversion method for TBM rock-breaking slag
CN109035379A (zh) * 2018-09-10 2018-12-18 天目爱视(北京)科技有限公司 Target 360° 3D measurement and information acquisition device
CN109084679A (zh) * 2018-09-05 2018-12-25 天目爱视(北京)科技有限公司 3D measurement and acquisition device based on a spatial light modulator
CN109146961A (zh) * 2018-09-05 2019-01-04 天目爱视(北京)科技有限公司 3D measurement and acquisition device based on a virtual matrix
CN109658497A (zh) * 2018-11-08 2019-04-19 北方工业大学 Three-dimensional model reconstruction method and device
CN109801374A (zh) * 2019-01-14 2019-05-24 盾钰(上海)互联网科技有限公司 Method, medium, and system for reconstructing a three-dimensional model from multi-angle image sets
CN111076674A (zh) * 2019-12-12 2020-04-28 天目爱视(北京)科技有限公司 Close-range target 3D acquisition device
CN211373522U (zh) * 2019-12-12 2020-08-28 天目爱视(北京)科技有限公司 Close-range 3D information acquisition device and 3D synthesis, microscopy, and accessory production equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3852704B2 (ja) * 2004-03-15 2006-12-06 住友金属鉱山株式会社 Method and apparatus for recognizing the three-dimensional shape of an object
CN100343625C (zh) * 2005-12-05 2007-10-17 天津大学 Splicing method and device for measuring the shape of large three-dimensional bodies based on splicing targets
JP2010014699A (ja) * 2008-06-05 2010-01-21 Toppan Printing Co Ltd Shape measuring apparatus and shape measuring method
KR101635892B1 (ko) * 2015-10-08 2016-07-04 엘지전자 주식회사 Head-mounted display device
CN106657971A (zh) * 2016-11-24 2017-05-10 文闻 360-degree 3D panoramic photographing device and photographing method
CN206362712U (zh) * 2016-12-28 2017-07-28 京瓷精密工具(珠海)有限公司 Blade edge detection device
CN109443235A (zh) * 2018-11-02 2019-03-08 滁州市云米工业设计有限公司 Product contour acquisition device for industrial design


Also Published As

Publication number Publication date
CN111076674A (zh) 2020-04-28
CN111076674B (zh) 2020-11-17

Similar Documents

Publication Publication Date Title
WO2021115301A1 Close-range target 3D acquisition apparatus
CN111060023B High-precision 3D information acquisition device and method
CN111292364B Method for fast image matching during construction of a 3D model
WO2021185218A1 Method for acquiring object 3D coordinates and dimensions during movement
WO2021185220A1 Method for constructing and measuring a three-dimensional model based on coordinate measurement
WO2021185214A1 Long-distance calibration method in 3D modeling
CN111292239B 3D model stitching device and method
WO2021185217A1 Multi-laser calibration method based on distance and angle measurement
WO2021115302A1 3D intelligent vision device
CN111028341B 3D model generation method
WO2021185216A1 Calibration method based on multiple laser rangefinders
CN112304222B 3D information acquisition device with synchronously rotating background board
CN111160136B Standardized 3D information acquisition and measurement method and system
WO2021185215A1 Method for co-calibration of multiple cameras in 3D modeling
CN111208138B Intelligent wood identification device
CN211373522U Close-range 3D information acquisition device and 3D synthesis, microscopy, and accessory production equipment
CN110973763B Intelligent foot 3D information acquisition and measurement device
CN111340959A Seamless texture mapping method for 3D models based on histogram matching
WO2021115297A1 3D information collection apparatus and method
WO2021115296A1 Ultra-thin three-dimensional capture module for a mobile terminal
CN111325780B Fast 3D model construction method based on image screening
WO2021115298A1 Eyeglass fitting design device
CN111207690B Adjustable iris 3D information acquisition and measurement device
CN211375622U High-precision iris 3D information acquisition device and iris recognition device
CN112254674B Close-range intelligent visual 3D information acquisition device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 20899984; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 20899984; Country of ref document: EP; Kind code of ref document: A1