CN101206116A - Goal spot global automatic positioning method - Google Patents
- Publication number
- CN101206116A (application CN200710195582A)
- Authority
- CN
- China
- Prior art keywords
- target point
- area
- point
- connected region
- zone
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method for the global automatic positioning of target points in photogrammetric images, comprising: performing coarse positioning of the target points to roughly determine the center position of each target point; and, according to the result of the coarse positioning, performing fine positioning to accurately determine the center position of each target point. The target points may be white or black. In the coarse positioning, an image binarization threshold is preset according to the highlighted appearance of the target points in the image. In the fine positioning, the smallest region containing each target point is determined dynamically from the result of the coarse positioning. By presetting only four parameters, the method obtains the image coordinates of all feature points in an image in a single pass, greatly improving the efficiency and degree of automation of vision measurement.
Description
Technical field
The present invention relates to a method for locating target points in photogrammetric images and, more particularly, to a method for the global automatic positioning of target points in photogrammetric images.
Background technology
Close-range photogrammetry is a measuring technique that obtains the shape, size, and motion state of a target from photographs through subsequent image processing and photogrammetric processing. In general, any target whose image can be captured may serve as an object of close-range photogrammetry. In practice, several images are taken from different stations as raw data, the image coordinates of a point group are determined on the different image planes, and the three-dimensional space coordinates of the target point group are finally obtained by a suitable algorithm. The point group is usually formed by arranging circular retro-reflective marker points on the target to be measured as target points or control points, chosen so that point-group images of high resolution and high contrast can be obtained from every viewing angle. Owing to the intrinsic properties of the perspective projection transformation, a circular retro-reflective marker point is generally imaged as an ellipse on the camera's imaging plane, and the image coordinates of the center of this ellipse represent the circular marker point (the elliptical marker image on the image plane is the target point to be determined; hereinafter, the elliptical marker images on the image plane are referred to as target points). Determining the coordinates of the ellipse centers on the image plane is therefore the primary task of photogrammetric processing.
The number and density of the circular retro-reflective markers arranged on an object are determined by the size and shape of the object under test and by the required measurement accuracy. When measuring the surface shape of a large object, more retro-reflective marker points should be arranged on the object; if the object also has complicated free-form surface portions, still more marker points are needed there, so that enough data are obtained for subsequent tasks such as surface-shape fitting. Traditional target point positioning methods, however, determine the coordinates of each target point in each image one by one, a very tedious job that greatly reduces measurement efficiency.
To address the low efficiency with which the prior art obtains target point center coordinates, the present invention proposes a global automatic target point positioning method that improves the efficiency and the degree of automation of vision measurement.
Summary of the invention
Further aspects and/or advantages of the present invention will be set forth in part in the description that follows, will become clearer from the description, or may be learned by practice of the invention.
According to an aspect of the present invention, there is provided a method for the global automatic positioning of target points in photogrammetric images, comprising: performing coarse positioning of a target point to roughly determine the center of the target point; and, according to the result of the coarse positioning, performing fine positioning of the target point to accurately determine its center, wherein the target point is white or black.
Description of drawings
These and/or other aspects and advantages of the general inventive concept will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of the target point global automatic positioning method according to the present invention.
Fig. 2 is a detailed flowchart of the coarse positioning shown in Fig. 1.
Fig. 3 is a detailed flowchart of the fine positioning shown in Fig. 1.
Fig. 4 is a detailed flowchart of step S310 shown in Fig. 3.
Embodiment
The present invention will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown.
Fig. 1 is a flowchart of the target point global automatic positioning method according to the present invention.
Referring to Fig. 1, in step S100 coarse positioning of the target points is performed; that is, the center of each target point is determined roughly.
In step S110, fine positioning of the target points is performed according to the result of the coarse positioning; that is, the center of each target point is determined accurately.
After step S110 has been executed, the final point data obtained by the fine positioning can be stored.
Fig. 2 is a detailed flowchart of the coarse positioning shown in Fig. 1.
Referring to Fig. 2, in step S200 a gray-level image (obtained by converting a captured color image, or captured directly as a gray-level image) is binarized.
Because of the retro-reflective property of the artificial markers, during actual shooting a marker point illuminated by a specific light source (such as a flashlamp) is hundreds of times brighter than the diffusely reflecting surface around it. Consequently, in images acquired from different angles the control points or target points stand out clearly in the whole image and, on a 0-255 gray scale, generally reach gray values above 200. The coarse positioning of the target points is based on this fact: the captured image is first binarized, and the binarization threshold can be set according to the brightness of most target points in each image. For example, if most target points are brighter than 180, the binarization threshold is set to 180. In the binarized image the target points then stand out.
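As an illustrative sketch (not part of the original disclosure), the fixed-threshold binarization of step S200 can be expressed as follows; the function name and the tiny sample image are hypothetical, and a real implementation would operate on the full camera image:

```python
# Hypothetical sketch of step S200: fixed-threshold binarization.
# Pixels at or above the preset threshold (e.g. 180 on a 0-255 scale)
# become foreground, i.e. candidate target-point pixels.
def binarize(gray, threshold=180):
    """Return a binary image: 1 where gray >= threshold, else 0."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

gray = [
    [10,  20, 200, 210],
    [15, 190, 220,  30],
    [ 5,  12,  18,  25],
]
binary = binarize(gray)  # only the bright marker pixels survive
```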
In step S210, region segmentation is performed according to the 8-connectivity principle, and each segmented connected region is labelled so that each region (that is, each candidate target point) carries a distinct label value distinguishing it from the other connected regions.
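The 8-connected labelling of step S210 can be sketched as follows (an illustrative flood-fill implementation, not the patent's own code; the function name is hypothetical):

```python
def label_regions(binary):
    """Label 8-connected foreground regions with distinct integer ids
    (flood fill with an explicit stack)."""
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    n_labels = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not labels[r][c]:
                n_labels += 1
                labels[r][c] = n_labels
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    for dy in (-1, 0, 1):       # visit all 8 neighbours,
                        for dx in (-1, 0, 1):   # including diagonals
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and binary[ny][nx] and not labels[ny][nx]):
                                labels[ny][nx] = n_labels
                                stack.append((ny, nx))
    return labels, n_labels

binary = [
    [1, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 1],
]
labels, n = label_regions(binary)  # two regions: diagonals join pixels
```

Under 8-connectivity the diagonal pixels in the lower-right corner form a single region, which is the behaviour the patent relies on for compact elliptical blobs.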
In step S220, the area S of each region, i.e. its number of pixels, is computed, and regions with S > Smax or S < Smin are excluded, where Smax is a maximum area threshold and Smin is a minimum area threshold. Smax and Smin can be set according to the size of the target points in the particular image, by estimating the pixel areas (numbers of pixels) occupied by the largest and the smallest target point.
Two situations frequently arise in actual measurements. In one, parts of the surface of the object under test are rather smooth and highly reflective, so that large bright patches, sometimes even brighter than the retro-reflective markers, appear when the image is shot from certain angles. In the other, isolated non-target bright spots appear; these occupy only a few pixels, typically 3 to 5, but have high gray values. By setting the pixel-area thresholds Smax and Smin according to the target point sizes in the image, as in step S220, both the large bright patches and the isolated bright spots can be rejected, thus excluding non-target points.
In step S230, the center of gravity of each remaining connected region (i.e. the rough center of each remaining region) is computed and stored. A simple centroid method can be used for this computation to improve efficiency.
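Steps S220 and S230 together, area filtering followed by the simple centroid method, can be sketched as below (illustrative only; `region_centroids` is a hypothetical name, and `labels` is the labelled image from step S210):

```python
def region_centroids(labels, s_min, s_max):
    """Area-filter labelled regions (step S220) and return the centroid
    of each surviving region (step S230, simple centroid method)."""
    stats = {}  # label -> (area, sum_y, sum_x)
    for y, row in enumerate(labels):
        for x, lab in enumerate(row):
            if lab:
                area, sy, sx = stats.get(lab, (0, 0, 0))
                stats[lab] = (area + 1, sy + y, sx + x)
    # Keep only regions with s_min <= area <= s_max; centroid = mean pixel.
    return {lab: (sy / area, sx / area)
            for lab, (area, sy, sx) in stats.items()
            if s_min <= area <= s_max}

labels = [
    [1, 1, 0],
    [1, 1, 0],
    [0, 0, 2],
]
centres = region_centroids(labels, s_min=2, s_max=10)  # region 2 rejected
```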
After the large bright patches and the isolated bright spots have been weeded out, each remaining connected region is a target point region. Typically, for an image of 3000 x 2000 pixels containing about 200 target points, step S100 (i.e. the computations of steps S200 to S230) requires only 4 to 8 seconds of computing time.
Fig. 3 is a detailed flowchart of the fine positioning shown in Fig. 1.
Referring to Fig. 3, in step S300 the center coordinates of the target points determined by the coarse positioning are read.
In step S310, a square region containing the target point is determined dynamically. Step S310 is still carried out on the binary image obtained in step S100, in order to determine a small region that contains only a single target point.
In step S320, the region determined in step S310 is mapped into the gray-level image, a binarization threshold for this square region is obtained with an optimal-threshold method, the square region is binarized with it, and the target point region within the square region is thereby obtained.
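The patent does not name a specific optimal-threshold method; Otsu's method is a common choice for this kind of local bimodal histogram and is sketched below purely for illustration:

```python
def otsu_threshold(pixels):
    """Otsu's method: choose the gray level that maximises the
    between-class variance of the two classes it induces."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0  # weight and gray-sum of the background class
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal sample: dark background plus a bright target point.
pixels = [10] * 60 + [200] * 40
t = otsu_threshold(pixels)  # lands between the two modes
```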
In step S330, edge detection of the target point is performed to obtain its edge. Mathematical morphology can be used for this edge detection. Preferably, the area and the perimeter of the target point are also obtained during the edge detection (these too can be obtained by mathematical morphology); from the area and perimeter the circularity of the target point can be judged, and non-elliptical target points rejected accordingly.
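The circularity test mentioned in step S330 is commonly computed as 4*pi*A/P^2, which equals 1 for a perfect disc and decreases for elongated or ragged shapes; the patent does not state its exact formula, so the following is only a plausible sketch:

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: 1.0 for a perfect disc, smaller for elongated or
    ragged shapes; a lower bound on it can reject non-elliptical blobs."""
    return 4 * math.pi * area / perimeter ** 2

disc = circularity(math.pi * 10 ** 2, 2 * math.pi * 10)  # perfect circle
bar = circularity(20 * 2, 2 * (20 + 2))                  # 20x2 rectangle
```

A threshold on this measure (e.g. rejecting regions well below 1) is one way to realize the rejection of non-elliptical target points.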
In step S340, an ellipse is fitted to the edge, edge points with larger errors are rejected, and the final accurate target point center is obtained.
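A least-squares conic fit is one way to realize the ellipse fitting of step S340; the patent does not specify its fitting algorithm, and the outlier-rejection step is omitted here, so this is only a minimal sketch with synthetic edge points:

```python
import math

def fit_ellipse_center(points):
    """Least-squares conic fit  A x^2 + B xy + C y^2 + D x + E y = 1,
    then the ellipse centre from the vanishing gradient of the conic."""
    # Normal equations M a = v for the 5 conic coefficients.
    M = [[0.0] * 5 for _ in range(5)]
    v = [0.0] * 5
    for x, y in points:
        row = [x * x, x * y, y * y, x, y]
        for i in range(5):
            v[i] += row[i]
            for j in range(5):
                M[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting.
    for col in range(5):
        piv = max(range(col, 5), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 5):
            f = M[r][col] / M[col][col]
            for c in range(col, 5):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    a = [0.0] * 5
    for r in range(4, -1, -1):
        a[r] = (v[r] - sum(M[r][c] * a[c] for c in range(r + 1, 5))) / M[r][r]
    A, B, C, D, E = a
    # Centre: 2Ax + By + D = 0 and Bx + 2Cy + E = 0.
    den = 4 * A * C - B * B
    return ((B * E - 2 * C * D) / den, (B * D - 2 * A * E) / den)

# Synthetic edge points on an ellipse centred at (3, 5), semi-axes 2 and 1.
edge = [(3 + 2 * math.cos(t), 5 + math.sin(t))
        for t in (0.0, 0.7, 1.3, 2.1, 2.9, 3.7, 4.4, 5.2)]
cx, cy = fit_ellipse_center(edge)
```

In a fuller implementation, the residual of each edge point against the fitted conic would be evaluated, the worst points rejected, and the fit repeated, matching the "reject edge points with larger errors" clause of step S340.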
Fig. 4 is a detailed flowchart of step S310 shown in Fig. 3.
Referring to Fig. 4, in step S400 a square region around each target point is determined preliminarily, centered on the point coordinates obtained by the coarse positioning and with side length equal, as an initial value, to the diameter of the largest target point on the whole image plane. The target point is thus located at the center of the region. The diameter of the largest target point can be preset according to the size of the largest target point in the particular image.
Since the side length of the square region is determined by the diameter of the largest target point, a small target point in a densely populated area, although itself centered in its square region, may have one or more neighboring target points appearing partly or wholly inside that region. To guarantee that each square region contains only one target point, region segmentation is therefore performed within the square region in step S410, the area of each connected region is computed (by the same method as steps S210 and S220 of Fig. 2), only the connected region of largest area is kept as the target point, and the pixel values of the other regions are set to 0.
In step S420, all the pixel coordinates of the single target point contained in the square region are collected, and the radius R and center coordinates of the smallest circle enclosing all these pixels are determined by the minimum enclosing circle method.
In step S430, a new square region is determined with the center of this minimum enclosing circle as its center and the diameter D of the circle (i.e. 2R) as the new side length. To keep the edge of the target point from coinciding with the boundary of the square region, the side length can suitably be set to 2R + nR, where n is, for example, between 0 and 0.5, while the target point center remains unchanged.
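The minimum enclosing circle of step S420 and the resized square of step S430 can be sketched as follows. This is a naive exact construction over point pairs and triples, given here only for illustration; production code would use a linear-time algorithm such as Welzl's, and the pixel list is a hypothetical toy region:

```python
import itertools
import math

def min_enclosing_circle(points):
    """Naive exact minimum enclosing circle (assumes >= 2 points).

    Tries every circle defined by a pair (as diameter) or a triple
    (circumcircle) and keeps the smallest one covering all points.
    """
    def covers(c, r, eps=1e-9):
        return all(math.dist(c, p) <= r + eps for p in points)

    best = None  # ((cx, cy), R)
    for p, q in itertools.combinations(points, 2):
        c = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
        r = math.dist(p, q) / 2
        if covers(c, r) and (best is None or r < best[1]):
            best = (c, r)
    for p, q, s in itertools.combinations(points, 3):
        d = 2 * (p[0]*(q[1]-s[1]) + q[0]*(s[1]-p[1]) + s[0]*(p[1]-q[1]))
        if abs(d) < 1e-12:  # collinear triple: no circumcircle
            continue
        pp, qq, ss = p[0]**2 + p[1]**2, q[0]**2 + q[1]**2, s[0]**2 + s[1]**2
        c = ((pp*(q[1]-s[1]) + qq*(s[1]-p[1]) + ss*(p[1]-q[1])) / d,
             (pp*(s[0]-q[0]) + qq*(p[0]-s[0]) + ss*(q[0]-p[0])) / d)
        r = math.dist(c, p)
        if covers(c, r) and (best is None or r < best[1]):
            best = (c, r)
    return best

# Pixel coordinates of one (tiny, hypothetical) target point region.
pixels = [(0, 0), (4, 0), (2, 2)]
(cx, cy), R = min_enclosing_circle(pixels)
n = 0.3               # margin factor, n in [0, 0.5] per step S430
side = 2 * R + n * R  # side length of the new square region
```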
Because the size of the initial square region is determined from the diameter of the largest target point (step S400), after this dynamic adjustment each region is sized to fit its own target. This not only avoids the influence of neighboring target points and improves the precision of the target point center positioning, but also reduces the amount of computation spent on edge detection in each target point region.
In the description of Fig. 4 above, the square region containing the target point is finally determined by executing steps S400 to S430. However, the dynamically determined region containing the target point can also be circular. That is, after the minimum circle enclosing all the pixels has been determined in step S420, a circular region centered on the center of this smallest circle and with radius R + nR (n between 0 and 0.2) can be taken as the dynamically determined region containing the target point, and step S430 is no longer executed.
In close-range photogrammetry the retro-reflective markers are generally white reflective points, which appear highlighted on the image plane under illumination. In some applications, however, the feature points are black; for example, targets used for camera calibration usually consist of black circular target spots on a white background. According to the present invention, black feature points can also be handled. The processing is essentially the same as for white points, except that the gray-level image must first be gray-inverted before step S200 is executed: if a pixel of the gray-level image has gray value x, its value in the inverted image becomes 255 - x.
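The gray inversion for black feature points is a one-liner; a minimal sketch (the function name is hypothetical):

```python
def invert(gray):
    """Gray inversion for black feature points: x -> 255 - x."""
    return [[255 - px for px in row] for row in gray]

inverted = invert([[0, 255, 100]])  # black becomes white and vice versa
```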
According to the present invention, threshold parameters must be set before the actual computation. Four parameters are needed: the binarization threshold used in step S200, the maximum and minimum area thresholds Smax and Smin used in step S220, and the diameter of the largest target point used in step S400. The choice of suitable parameter values has some influence on computation speed and efficiency. For example, if the maximum diameter is set too large, then, when the square region of a small target point is determined, part or even all of a nearby target point may fall inside the same square region, and extra computation is needed to exclude the other target so that only one target point remains in the region. An excessive maximum diameter therefore reduces computational efficiency. In practice, when the above parameters are uncertain, one to three rounds of parameter adjustment, guided by experience and visual inspection, suffice to determine suitable values.
In some cases the target point sizes in an image differ greatly, and it is then almost impossible to choose parameters that suit most target points. The computation can instead be performed twice: one set of threshold parameters is first chosen for the large target points (such as larger circular coded points) and their center coordinates are extracted; then another set of threshold parameters is chosen for the small target points. In two passes, all target points on the whole image plane are obtained. This avoids the loss of precision that occurs when all feature points are extracted with a single set of thresholds.
Addressing the low efficiency with which target point center coordinates are obtained in large-scale vision measurement, the present invention proposes a global automatic target point positioning method which first uses the strong reflection characteristic of retro-reflective marker points to coarsely locate the regions of the target points, then dynamically adjusts the region of each target point and performs accurate center positioning by ellipse fitting. Only four parameters need to be preset before the actual computation, after which the image coordinates of all feature points in an image are obtained in a single pass, greatly improving the efficiency and degree of automation of vision measurement while the point-center precision still reaches the 0.1-pixel level.
Although the present invention has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the claims.
Claims (10)
1. A method for the global automatic positioning of target points in photogrammetric images, comprising:
performing coarse positioning of a target point to roughly determine the center of the target point; and
according to the result of the coarse positioning, performing fine positioning of the target point to accurately determine the center of the target point,
wherein the target point is white or black.
2. The method of claim 1, wherein, if the target point is white, the coarse positioning of the target point comprises:
binarizing a gray-level image, wherein the binarization threshold is preset;
performing region segmentation, and labelling the binarized image to distinguish each connected region;
computing the area of each connected region, and excluding non-target connected regions according to preset area thresholds; and
computing the centers of the remaining connected regions.
3. The method of claim 1, wherein, if the target point is black, the coarse positioning of the target point comprises:
gray-inverting the gray-level image;
binarizing the inverted gray-level image, wherein the binarization threshold is preset;
performing region segmentation, and labelling the binarized image to distinguish each connected region;
computing the area of each connected region, and excluding non-target connected regions according to preset area thresholds; and
computing the centers of the remaining connected regions.
4. The method of claim 2 or 3, wherein the binarization threshold is preset according to the highlighted appearance of the target points in the gray-level image or the inverted gray-level image.
5. The method of claim 2 or 3, wherein the preset area thresholds comprise a maximum area threshold and a minimum area threshold, and excluding non-target connected regions according to the preset area thresholds comprises excluding connected regions whose area is greater than the maximum area threshold and connected regions whose area is less than the minimum area threshold.
6. The method of claim 2 or 3, wherein a centroid method is used when computing the centers of the remaining connected regions.
7. The method of claim 1, wherein the fine positioning of the target point comprises:
dynamically determining a region containing the target point according to the result of the coarse positioning;
if the target point is white, mapping the determined region into the gray-level image, obtaining a segmentation threshold with an optimal-threshold method, and obtaining the target point region; if the target point is black, mapping the determined region into the inverted gray-level image, obtaining a segmentation threshold with an optimal-threshold method, and further determining the target point region;
performing edge detection of the target point by mathematical morphology to obtain the edge of the target point, also obtaining the area and the perimeter of the target point during the edge detection, and judging the circularity of the target point from its area and perimeter so as to reject non-elliptical target points; and
fitting an ellipse to the edge, and rejecting edge points with larger errors, to obtain the accurate target point center.
8. The method of claim 7, wherein the dynamically determined region containing the target point is a square region or a circular region.
9. The method of claim 8, wherein, if the dynamically determined region containing the target point is a square region, the step of dynamically determining the region containing the target point comprises:
for each target point, preliminarily determining a square region around the target point, centered on the point coordinates obtained by the coarse positioning and with side length equal, as a preset initial value, to the diameter of the largest target point on the whole image plane;
performing region segmentation within the determined square region, computing the area of each connected region, taking the connected region of largest area as the target point, and rejecting the regions of non-maximum area, so as to guarantee that each region contains only one target point;
determining, by the minimum enclosing circle method, the radius R and the center coordinates of the smallest circle enclosing all the pixels of the target point determined above; and
determining a new square region with 2R + nR as the new side length and the center of the circle as the center of the square,
wherein n is between 0 and 0.5.
10. The method of claim 8, wherein, if the dynamically determined region containing the target point is a circular region, the step of dynamically determining the region containing the target point comprises:
for each target point, preliminarily determining a square region around the target point, centered on the point coordinates obtained by the coarse positioning and with side length equal, as a preset initial value, to the diameter of the largest target point on the whole image plane;
performing region segmentation within the determined square region, computing the area of each connected region, taking the connected region of largest area as the target point, and rejecting the regions of non-maximum area, so as to guarantee that each region contains only one target point;
determining, by the minimum enclosing circle method, the radius R and the center coordinates of the smallest circle enclosing all the pixels of the target point determined above; and
determining a circular region with R + nR as the radius and the center of the circle as its center,
wherein n is between 0 and 0.2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2007101955823A CN101206116B (en) | 2007-12-07 | 2007-12-07 | Goal spot global automatic positioning method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101206116A true CN101206116A (en) | 2008-06-25 |
CN101206116B CN101206116B (en) | 2010-08-18 |
Family
ID=39566491
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2007101955823A Expired - Fee Related CN101206116B (en) | 2007-12-07 | 2007-12-07 | Goal spot global automatic positioning method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101206116B (en) |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102023768A (en) * | 2009-09-09 | 2011-04-20 | 比亚迪股份有限公司 | A touch contact positioning method, system and video display terminal |
CN102129679A (en) * | 2010-12-02 | 2011-07-20 | 湖南农业大学 | Local positioning system and method |
CN102261910A (en) * | 2011-04-28 | 2011-11-30 | 上海交通大学 | Vision detection system and method capable of resisting sunlight interference |
CN103035004A (en) * | 2012-12-10 | 2013-04-10 | 浙江大学 | Circular target centralized positioning method under large visual field |
CN103054548A (en) * | 2012-07-05 | 2013-04-24 | 东北电力大学 | Fixation point measurement device and pupil recognition method and Purkinje image recognition method |
CN103577808A (en) * | 2013-11-11 | 2014-02-12 | 哈尔滨工程大学 | Frogman recognition method |
CN103810488A (en) * | 2012-11-09 | 2014-05-21 | 阿里巴巴集团控股有限公司 | Image feature extraction method, image searching method, server, terminal and system |
CN104123119A (en) * | 2014-07-07 | 2014-10-29 | 北京信息科技大学 | Dynamic vision measurement feature point center quick positioning method based on GPU |
CN104680586A (en) * | 2014-11-05 | 2015-06-03 | 河南科技大学 | Method for fitting on ellipsoidal surface in spatial arbitrary position based on minimum area |
CN105371935A (en) * | 2015-11-13 | 2016-03-02 | 广州市中崎商业机器股份有限公司 | Automatic logistics freight calculation device with printer and calculation method thereof |
CN105466534A (en) * | 2015-11-13 | 2016-04-06 | 广州市中崎商业机器股份有限公司 | Logistic freight automatic calculating device and calculating method thereof |
CN105678709A (en) * | 2016-01-12 | 2016-06-15 | 西安交通大学 | LED handheld target optical center offset correction algorithm |
CN106780615A (en) * | 2016-11-23 | 2017-05-31 | 安徽慧视金瞳科技有限公司 | A kind of Projection surveying method based on intensive sampling |
CN107301636A (en) * | 2017-05-17 | 2017-10-27 | 华南理工大学 | A kind of high density circuit board circular hole sub-pixel detection method based on Gauss curve fitting |
CN107588723A (en) * | 2017-09-22 | 2018-01-16 | 南昌航空大学 | Circular mark leak source detection method on a kind of High-speed target based on two-step method |
CN108062770A (en) * | 2017-10-25 | 2018-05-22 | 华南农业大学 | The accurate positioning method at micropore center in a kind of microwell plate picture taken pictures naturally |
CN108637791A (en) * | 2018-03-29 | 2018-10-12 | 北京精雕科技集团有限公司 | A kind of automatic capturing method at rotating machined workpiece center |
CN109916300A (en) * | 2019-03-20 | 2019-06-21 | 天远三维(天津)科技有限公司 | The index point towards 3-D scanning based on online image procossing pastes indicating means |
CN110736427A (en) * | 2019-10-25 | 2020-01-31 | 中国核动力研究设计院 | Machine vision positioning system and positioning method for reactor detector assembly dismantling device |
WO2020037573A1 (en) * | 2018-08-22 | 2020-02-27 | 深圳市真迈生物科技有限公司 | Method and device for detecting bright spots on image, and computer program product |
US11170506B2 (en) | 2018-08-22 | 2021-11-09 | Genemind Biosciences Company Limited | Method for constructing sequencing template based on image, and base recognition method and device |
CN113643371A (en) * | 2021-10-13 | 2021-11-12 | 中国空气动力研究与发展中心低速空气动力研究所 | Method for positioning aircraft model surface mark points |
CN116086340A (en) * | 2023-04-07 | 2023-05-09 | 成都飞机工业(集团)有限责任公司 | Method, device, equipment and medium for measuring level of whole aircraft |
US12008775B2 (en) | 2018-08-22 | 2024-06-11 | Genemind Biosciences Company Limited | Method and device for image registration, and computer program product |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3793100B2 (en) * | 2002-02-14 | 2006-07-05 | キヤノン株式会社 | Information processing method, apparatus, and recording medium |
CN1188660C (en) * | 2003-04-11 | 2005-02-09 | 天津大学 | Camera calibrating method and its implementing apparatus |
CN1276387C (en) * | 2004-06-10 | 2006-09-20 | 上海交通大学 | Synchronous self-adaptable watermark method based on image continuity |
CN100468004C (en) * | 2004-08-11 | 2009-03-11 | 北京大学 | Calibrating matter automatic extracting method in camera calibration |
CN1797429A (en) * | 2004-12-29 | 2006-07-05 | 鸿富锦精密工业(深圳)有限公司 | System and method of 2D analytical process for image |
CN100384220C (en) * | 2006-01-17 | 2008-04-23 | 东南大学 | Video camera rating data collecting method and its rating plate |
- 2007-12-07: CN CN2007101955823A patent/CN101206116B/en not_active Expired - Fee Related
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102023768B (en) * | 2009-09-09 | 2013-03-20 | 比亚迪股份有限公司 | A touch contact positioning method, system and video display terminal |
CN102023768A (en) * | 2009-09-09 | 2011-04-20 | 比亚迪股份有限公司 | A touch contact positioning method, system and video display terminal |
CN102129679A (en) * | 2010-12-02 | 2011-07-20 | 湖南农业大学 | Local positioning system and method |
CN102129679B (en) * | 2010-12-02 | 2013-06-19 | 湖南农业大学 | Local positioning system and method |
CN102261910A (en) * | 2011-04-28 | 2011-11-30 | 上海交通大学 | Vision detection system and method capable of resisting sunlight interference |
CN102261910B (en) * | 2011-04-28 | 2013-05-29 | 上海交通大学 | Vision detection system and method capable of resisting sunlight interference |
CN103054548A (en) * | 2012-07-05 | 2013-04-24 | 东北电力大学 | Fixation point measurement device and pupil recognition method and Purkinje image recognition method |
CN103810488A (en) * | 2012-11-09 | 2014-05-21 | 阿里巴巴集团控股有限公司 | Image feature extraction method, image searching method, server, terminal and system |
CN103035004B (en) * | 2012-12-10 | 2015-08-12 | 浙江大学 | The method of circular target centralized positioning under a kind of Large visual angle |
CN103035004A (en) * | 2012-12-10 | 2013-04-10 | 浙江大学 | Circular target centralized positioning method under large visual field |
CN103577808A (en) * | 2013-11-11 | 2014-02-12 | 哈尔滨工程大学 | Frogman recognition method |
CN104123119A (en) * | 2014-07-07 | 2014-10-29 | 北京信息科技大学 | Dynamic vision measurement feature point center quick positioning method based on GPU |
CN104123119B (en) * | 2014-07-07 | 2017-05-10 | 北京信息科技大学 | Dynamic vision measurement feature point center quick positioning method based on GPU |
CN104680586A (en) * | 2014-11-05 | 2015-06-03 | 河南科技大学 | Method for fitting on ellipsoidal surface in spatial arbitrary position based on minimum area |
CN105371935A (en) * | 2015-11-13 | 2016-03-02 | 广州市中崎商业机器股份有限公司 | Automatic logistics freight calculation device with printer and calculation method thereof |
CN105466534A (en) * | 2015-11-13 | 2016-04-06 | 广州市中崎商业机器股份有限公司 | Logistic freight automatic calculating device and calculating method thereof |
CN105466534B (en) * | 2015-11-13 | 2018-07-03 | 广州市中崎商业机器股份有限公司 | Automatic logistics freight calculation device and calculation method thereof |
CN105371935B (en) * | 2015-11-13 | 2018-07-03 | 广州市中崎商业机器股份有限公司 | Automatic logistics freight calculation device with printer and calculation method thereof |
CN105678709A (en) * | 2016-01-12 | 2016-06-15 | 西安交通大学 | LED handheld target optical center offset correction algorithm |
CN105678709B (en) * | 2016-01-12 | 2018-06-26 | 西安交通大学 | LED handheld target optical center offset correction algorithm |
CN106780615A (en) * | 2016-11-23 | 2017-05-31 | 安徽慧视金瞳科技有限公司 | Projection measurement method based on dense sampling |
CN106780615B (en) * | 2016-11-23 | 2019-09-27 | 安徽慧视金瞳科技有限公司 | Projection measurement method based on dense sampling |
CN107301636A (en) * | 2017-05-17 | 2017-10-27 | 华南理工大学 | Sub-pixel detection method for circular holes in high-density circuit boards based on Gaussian fitting |
CN107588723A (en) * | 2017-09-22 | 2018-01-16 | 南昌航空大学 | Two-step method for detecting missing points of circular marks on high-speed targets |
CN108062770A (en) * | 2017-10-25 | 2018-05-22 | 华南农业大学 | Accurate positioning method for micropore centers in naturally photographed microwell plate images |
CN108637791A (en) * | 2018-03-29 | 2018-10-12 | 北京精雕科技集团有限公司 | Automatic center-finding method for rotary machined workpieces |
US11847766B2 (en) | 2018-08-22 | 2023-12-19 | Genemind Biosciences Company Limited | Method and device for detecting bright spots on image, and computer program product |
US12008775B2 (en) | 2018-08-22 | 2024-06-11 | Genemind Biosciences Company Limited | Method and device for image registration, and computer program product |
WO2020037573A1 (en) * | 2018-08-22 | 2020-02-27 | 深圳市真迈生物科技有限公司 | Method and device for detecting bright spots on image, and computer program product |
US11170506B2 (en) | 2018-08-22 | 2021-11-09 | Genemind Biosciences Company Limited | Method for constructing sequencing template based on image, and base recognition method and device |
CN109916300A (en) * | 2019-03-20 | 2019-06-21 | 天远三维(天津)科技有限公司 | Marker point pasting indication method for 3D scanning based on online image processing |
CN110736427B (en) * | 2019-10-25 | 2021-05-18 | 中国核动力研究设计院 | Machine vision positioning system and positioning method for reactor detector assembly dismantling device |
CN110736427A (en) * | 2019-10-25 | 2020-01-31 | 中国核动力研究设计院 | Machine vision positioning system and positioning method for reactor detector assembly dismantling device |
CN113643371A (en) * | 2021-10-13 | 2021-11-12 | 中国空气动力研究与发展中心低速空气动力研究所 | Method for positioning aircraft model surface mark points |
CN116086340A (en) * | 2023-04-07 | 2023-05-09 | 成都飞机工业(集团)有限责任公司 | Method, device, equipment and medium for measuring level of whole aircraft |
CN116086340B (en) * | 2023-04-07 | 2023-07-21 | 成都飞机工业(集团)有限责任公司 | Method, device, equipment and medium for measuring level of whole aircraft |
Also Published As
Publication number | Publication date |
---|---|
CN101206116B (en) | 2010-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101206116B (en) | Goal spot global automatic positioning method | |
CN109212510B (en) | Method and device for measuring the angular resolution of a multiline lidar | |
US9007602B2 (en) | Three-dimensional measurement apparatus, three-dimensional measurement method, and computer-readable medium storing control program | |
EP2588836B1 (en) | Three-dimensional measurement apparatus, three-dimensional measurement method, and storage medium | |
JP4885584B2 (en) | Rangefinder calibration method and apparatus | |
US7495776B2 (en) | Three-dimensional measuring system | |
CN106999256A (en) | Optical tracking method and system based on passive marker | |
JP2013101045A (en) | Recognition device and recognition method of three-dimensional position posture of article | |
JP2001524228A (en) | Machine vision calibration target and method for determining position and orientation of target in image | |
JP2007206797A (en) | Image processing method and image processor | |
EP3358526A1 (en) | System and method for scoring color candidate poses against a color image in a vision system | |
JP2012215394A (en) | Three-dimensional measuring apparatus and three-dimensional measuring method | |
CN106524901A (en) | Method for calculating the imaging light spot using a CCD photosensitive device |
JP2007072628A (en) | Face direction discriminating device | |
JP7378219B2 (en) | Imaging device, image processing device, control method, and program | |
CN107991665A (en) | Continuous measurement method for target three-dimensional coordinates based on a fixed-focus camera |
CN115376000A (en) | Underwater measurement method, device and computer readable storage medium | |
JP7163025B2 (en) | Image measuring device, image measuring method, imaging device, program | |
JP3236362B2 (en) | Skin surface shape feature extraction device based on reconstruction of three-dimensional shape from skin surface image | |
EP3679402A1 (en) | Method for operating a laser distance measuring device | |
JP3505372B2 (en) | Image processing device and image processing method for soft landing of moon and planet | |
US20200074685A1 (en) | System and method for representing and displaying color accuracy in pattern matching by a vision system | |
JP3343583B2 (en) | 3D surface shape estimation method using stereo camera with focal light source | |
JP6867766B2 (en) | Information processing device and its control method, program | |
CN108833789A (en) | Real-time autofocus apparatus and autofocusing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2010-08-18; Termination date: 2011-12-07 |