CN106403951A - Computer vision based positioning system and positioning method thereof - Google Patents

Computer vision based positioning system and positioning method thereof

Info

Publication number
CN106403951A
CN106403951A
Authority
CN
China
Prior art keywords
image
alignment system
target image
sample image
compression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610741005.9A
Other languages
Chinese (zh)
Inventor
谭红晖
霍金平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
View Energy Technology (shanghai) Co Ltd
Original Assignee
View Energy Technology (shanghai) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by View Energy Technology (shanghai) Co Ltd filed Critical View Energy Technology (shanghai) Co Ltd
Priority to CN201610741005.9A priority Critical patent/CN106403951A/en
Publication of CN106403951A publication Critical patent/CN106403951A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • G01C21/206Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of positioning and provides a computer vision based positioning system and positioning method. The positioning system comprises an image acquisition module and a calculation module. The image acquisition module collects a sample image corresponding to a static scene and a target image captured after a person and/or object appears in the scene. The calculation module receives the sample image and the target image and performs compression, feature extraction and comparison on them to obtain the actual location in the static scene of the person and/or object corresponding to the target image. The positioning process of the system is computationally simple, real-time, precise and stable.

Description

Computer vision based positioning system and positioning method thereof
Technical field
The present invention relates to the field of positioning technology, and more particularly to a computer vision based positioning system and its positioning method.
Background technology
The advantages of computer vision positioning are high positioning precision and good stability. At present, indoor positioning systems based on computer vision typically photograph an object or person with a camera and then compute the position of the object or person relative to the camera from its location in the photograph.
Indoor positioning technologies based on computer vision fall broadly into monocular and binocular approaches. Monocular vision positioning completes the positioning task with a single camera. Because it avoids the problems of finding the optimal camera separation and matching feature points that arise in binocular positioning, it is simple and widely applicable. However, currently popular monocular positioning techniques generally require complex camera calibration and rely on feature points or feature lines for their computation, which makes the calculation complicated. In addition, current monocular positioning techniques also require a high-speed network for transmitting video data.
Accordingly, a new computer vision based positioning system and positioning method are needed.
Summary of the invention
One object of the present invention is to provide high-precision real-time positioning of walking people or vehicles in an indoor environment using only a medium/low-speed wireless network.
To achieve this goal, according to one aspect of the invention, a computer vision based positioning system is provided. The positioning system includes an image acquisition module and a computing module. The image acquisition module collects a sample image corresponding to a static scene, as well as a target image captured after a moving person and/or object appears in the scene. The computing module receives the sample image and the target image and performs compression, feature extraction and comparison on them to obtain the actual position information in the static scene of the person and/or object corresponding to the target image.
Preferably, the computing module includes an image processing unit that uses a perceptual hash algorithm to compress images to 2^n × 2^n pixels and computes the hash feature value of the compressed image as a unique fingerprint identifying the image content.
Preferably, the image processing unit compresses images to 2^n × 2^n pixels using the perceptual hash algorithm, where the value of n ranges from 5 to 7.
Preferably, the computing module also includes a judging unit that compares the fingerprint of each image collected in real time and processed by the image processing unit with the fingerprint of the sample image, to judge whether the real-time image is a target image.
Preferably, the computing module also includes a position calculation unit that, when the judging unit's fingerprint comparison finds that the fingerprints differ, calculates the actual position in the static scene of the person and/or object corresponding to the target image.
Preferably, the position calculation unit converts the 2^n × 2^n pixel images corresponding to the fingerprints of the target image and the sample image into grayscale pictures, subtracts the two grayscale pictures to obtain a difference image, and finds, among the cells (pixels) occupied by the target, the cell nearest the image centre; this cell gives the relative position of the target.
Preferably, the relative position (x, y) is computed as:

x = iX/N; y = jY/N;

where N = 2^n is the number of pixels along one side of the compressed picture, (i, j) is the index of the cell (pixel) corresponding to the target, and X, Y are the two-dimensional extent of the real-world region corresponding to the image, as calibrated for the image acquisition module.
Preferably, the positioning system also includes a conversion unit that converts the relative position into an absolute position according to the position of the image acquisition module.
Preferably, the image acquisition module includes a visible-light or infrared camera, and the sample image and target image are photos taken by the camera.
Preferably, the positioning system also includes a connection unit that connects the image acquisition module and the computing module over a medium/low-speed wireless Internet of Things network.
Preferably, the medium/low-speed wireless Internet of Things network includes Zigbee or Bluetooth.
According to another aspect of the present invention, a positioning method employing the above positioning system is provided. The positioning method comprises the following steps:
(1) collect a sample image and compute its fingerprint information;
(2) collect a target image and compute its fingerprint information using the same algorithm as for the sample image;
(3) compare the fingerprint information of the sample image with that of the target image and compute the relative position (x, y) of the target.
Preferably, the fingerprint information of an image is computed by compressing the image to 2^n × 2^n pixels with a perceptual hash algorithm and then computing the hash feature value of the compressed image.
Preferably, the positioning method also includes step (4): converting the relative position into an absolute position according to the calibration data of the image acquisition module. The relative position (x, y) is converted into the absolute position (x1 + x, y1 + y), where (x1, y1) is the calibrated coordinate of the image acquisition module's sample image.
The advantages of the present invention are: (1) because the image is compressed, the positioning computation is simple and can be performed by a local picture-processing chip connected to the camera, without a centralized image processing device, so positioning is real-time, precise and stable; (2) the captured pictures need not be uploaded, so neither a wired network nor a Wi-Fi scheme is required, and a lower-speed wireless network suffices for data transfer; (3) the positions of multiple targets can be computed quickly.
Brief description of the drawings
The invention is described in more detail below on the basis of embodiments and with reference to the accompanying drawings, in which:
Fig. 1 is a schematic diagram of the computer vision based positioning system in an embodiment of the present invention;
Fig. 2 is a schematic diagram of the positioning system of another embodiment of the present invention;
Fig. 3 is a flow chart of the positioning method of the positioning system in an embodiment of the present invention;
Fig. 4 is a hardware schematic of the positioning system in an embodiment of the present invention;
Fig. 5 is a position calculation flow chart of the positioning system in one embodiment of the present invention;
Fig. 6 is an imaging geometry schematic of the positioning system in an embodiment of the present invention;
Fig. 7 is a schematic of the difference image in the image processing of the positioning method in an embodiment of the present invention;
Fig. 8 is a schematic of the working process of the positioning system in an embodiment of the present invention.
In the accompanying drawings, identical parts use identical reference numerals. The drawings are not to scale.
Specific embodiment
The invention will be further described below in conjunction with the accompanying drawings.
The invention provides a computer vision based positioning system, including an image acquisition module 10 and a computing module 20. As shown in Fig. 1, the image acquisition module 10 collects a sample image corresponding to a static scene, as well as a target image after a moving person and/or object appears in the scene. The computing module 20 receives the sample image and the target image and performs compression, feature extraction and comparison on them to obtain the actual position information in the static scene of the person and/or object corresponding to the target image. Because the computing module 20 compresses the images, the amount of computation in image processing is reduced. For example, when the image acquisition module 10 is a camera, the computation can be carried out on a local image processing chip connected to the camera, without a central image processing device. The real-time performance, precision and stability of positioning are thereby improved.
In a specific embodiment of the present invention, as shown in Fig. 2, the computing module 20 includes an image processing unit 21 that uses a perceptual hash algorithm to compress images to 2^n × 2^n pixels and computes the hash feature value of the compressed image as a unique fingerprint identifying the image content. Compressing images to 2^n × 2^n pixels makes the DCT (discrete cosine transform) processing and hash-value computation convenient; the hash value is a 64-bit number. After compression, the computing-power requirement of image processing drops, so high-speed images need not be transmitted to a central processor and the computation can be completed on a local chip. Preferably, n is between 5 and 7 (corresponding to image sizes of 32×32, 64×64 and 128×128 pixels), which balances the amount of computation against positioning precision.
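The DCT-based fingerprint described above can be sketched as follows. This is an illustrative reconstruction, not the patent's actual implementation: the function names (`dct_matrix`, `perceptual_hash`), the nearest-neighbour shrink, and the 8×8 low-frequency block thresholded against its median are assumptions drawn from the common pHash recipe; the patent only specifies compression to 2^n × 2^n pixels, a DCT, and a 64-bit hash value.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def perceptual_hash(gray: np.ndarray, n: int = 5) -> int:
    """64-bit perceptual-hash fingerprint of a grayscale image."""
    size = 2 ** n                        # 32 pixels per side for n = 5
    # Nearest-neighbour shrink to size x size (a stand-in for real resampling).
    ys = np.linspace(0, gray.shape[0] - 1, size).astype(int)
    xs = np.linspace(0, gray.shape[1] - 1, size).astype(int)
    small = gray[np.ix_(ys, xs)].astype(float)
    d = dct_matrix(size)
    freq = d @ small @ d.T               # 2-D DCT
    low = freq[:8, :8].flatten()         # 64 low-frequency coefficients
    bits = low > np.median(low)          # threshold -> 64-bit fingerprint
    return int("".join("1" if b else "0" for b in bits), 2)
```

Because only the 64-bit fingerprint leaves the chip, the full photo stream never has to cross the network, which is what allows the low-speed Zigbee/Bluetooth link described later.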
In a specific embodiment of the present invention, as shown in Fig. 2, the computing module 20 also includes a judging unit 22 that compares the fingerprint of each image collected in real time and processed by the image processing unit 21 with the fingerprint of the sample image, to judge whether the real-time image is a target image. Further, the computing module 20 also includes a position calculation unit 23 that, when the judging unit 22 finds the fingerprints differ, calculates the actual position in the static scene of the person and/or object corresponding to the target image. In this way, the judging unit 22 decides whether a person and/or object is in the camera's pickup area by comparing the hash feature values of the photos taken by the camera. For example, when the fingerprint of the real-time photo matches that of the sample photo, no target has appeared in the scene collected by the camera and the position calculation unit 23 is not started; when the fingerprints differ, a target has appeared in the scene, the position calculation unit 23 is started, and position calculation is carried out to provide the position information of the person and/or object.
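The judging unit's decision might then look like the sketch below. The Hamming-distance comparison and the `threshold` parameter are assumptions (the patent only says the fingerprints are "compared" and the position calculation is started when they differ):

```python
def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits between two 64-bit fingerprints."""
    return bin(h1 ^ h2).count("1")

def is_target_image(sample_hash: int, live_hash: int, threshold: int = 0) -> bool:
    """True if the live image differs from the static sample fingerprint,
    i.e. a person/object has appeared and position calculation should start."""
    return hamming_distance(sample_hash, live_hash) > threshold
```

A non-zero `threshold` would add tolerance to sensor noise at the cost of sensitivity; with the patent's strict identical/different test it stays at 0.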
In addition, the position calculation unit 23 converts the 2^n × 2^n pixel images corresponding to the fingerprints of the target image and the sample image into grayscale pictures, subtracts the two grayscale pictures to obtain a difference image, and finds, among the cells occupied by the target, the cell nearest the image centre, which gives the relative position of the target. Specifically, the relative position (x, y) is computed as:

x = iX/N; y = jY/N;

where N = 2^n is the number of pixels along one side of the compressed picture, (i, j) is the index of the cell corresponding to the target, and X, Y are the two-dimensional extent of the real-world region corresponding to the image, as calibrated for the image acquisition device. It should be noted that, as shown in Fig. 7, each cell corresponds to one pixel, and the difference image is obtained by subtracting the gray values of corresponding pixels of the two images. To illustrate the calculation: assume n = 5, so N = 2^5 = 32; i = 5, j = 7; X = 5 m, Y = 4 m. Then the formula gives the relative position x = 5 × 5/32 ≈ 0.78 m, y = 7 × 4/32 = 0.875 m.
To convert the relative position into an absolute position, the computing module 20 also includes a conversion unit 24.
The image acquisition module 10 includes a visible-light or infrared camera, and the sample image and target image are photos taken by the camera. The positioning system also includes a connection unit 30 for connecting the image acquisition module 10 and the computing module 20. In a specific embodiment, as shown in Fig. 4 and Fig. 8, the local side of the positioning system mainly consists of a camera (image acquisition module 10) and an image-processing chip computing unit (computing module 20), connected by a connecting line (connection unit 30). The target is the person or vehicle whose position is to be calculated. The camera takes pictures continuously and sends the photo stream (images or video) through the connecting line (connection unit 30) to the chip computing unit for image processing and position calculation.
Further, as shown in Fig. 8, when a person is at the boundary between two cameras, the matrix grids of the two cameras overlap, so each camera uploads its own matrix coordinates and the data are merged at the server end.
The invention also provides a positioning method based on the above positioning system which, as shown in Fig. 3, comprises the following steps:
101: collect a sample image and compute its fingerprint information;
102: collect a target image and compute its fingerprint information using the same algorithm as for the sample image;
103: compare the fingerprint information of the sample image with that of the target image and compute the relative position of the target.
Preferably, the fingerprint information of an image is computed by compressing the image to 2^n × 2^n pixels with a perceptual hash algorithm and then computing the hash feature value of the compressed image.
Further, the positioning method also includes step 104: converting the relative position into an absolute position according to the calibration data of the image acquisition module 10. The relative position (x, y) is converted into the absolute position (x1 + x, y1 + y), where (x1, y1) is the calibrated coordinate of the image acquisition module's sample image. The absolute position is the coordinate used to locate the person on the actual map.
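Step 104 amounts to a coordinate shift by the camera's calibrated origin; a trivial sketch (the function name `to_absolute` is illustrative, not from the patent):

```python
def to_absolute(relative, origin):
    """Map a camera-relative position (x, y) to a map coordinate
    (x1 + x, y1 + y), where origin = (x1, y1) is the calibrated
    map coordinate of the camera's sample-image region."""
    x, y = relative
    x1, y1 = origin
    return (x1 + x, y1 + y)
```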
In a specific embodiment of the present invention, the flow of the positioning method is as shown in Fig. 5, with the following detailed steps:
Step 1, camera calibration: after the camera is fixedly installed, take a photo and measure the size and position of the region corresponding to the photo.
Step 2, fingerprint computation for the static image (sample image): take a photo while it is guaranteed that no person or vehicle is present, compress the image to 2^n × 2^n pixels, and compute the hash feature value of the compressed image using the perceptual hash algorithm.
Step 3, start positioning and read pictures from the camera: the camera takes pictures continuously, and the photo stream is sent to the computing chip (computing module 20).
Step 4, fingerprint comparison: after the computing chip receives an image, it compresses it to the same 2^n × 2^n pixels using the same perceptual hash algorithm as for the static image and computes its hash value. This hash value is compared with the hash value of the static image's fingerprint: if they are identical, it is concluded that no person or vehicle is in the image and the computation stops; if they differ, it is concluded that a person or vehicle is in the image, and the next step, position calculation, is carried out.
Step 5, position calculation. First, the camera imaging geometry: as shown in Fig. 6, assume the camera is at point A and the intersection of the camera's main axis with the ground is point B. Person 1 (C-E) is imaged as region C-D, and person 2 (F-H) is imaged as region F-G. Owing to the camera's imaging geometry, the position on the ground where a person's feet stand images close to the photo centre, while the head images farther out.
Then, the grayscale image produced in the fingerprint computation of step 4 is subtracted pixel by pixel from the grayscale sample image of step 2 to obtain a difference image, which is converted to a black-and-white image. Only the cell nearest the image centre among the cells occupied by the portrait needs to be computed; this cell corresponds to the position where the person's feet stand. The physical position can be calculated from the cell division and the length calibration of the camera image from step 1. The physical position (x, y) is computed as:

x = iX/N; y = jY/N;

where N = 2^n is the number of pixels along one side of the compressed picture, (i, j) is the index of the cell (pixel) corresponding to the person's feet, and X, Y are the two-dimensional extent of the real-world region corresponding to the photo from the camera calibration.
Step 6: after the calculation finishes, the relative position information is output. The system converts the relative position into an absolute position according to the position of the camera.
In summary, because a traditional camera does not participate in data processing and only collects images, all data processing is carried out at the server end and the positioning response is slow. The positioning system of the present invention, by adopting simple picture compression and judgment algorithms, allows the data processing work to be carried out in a processing chip integrated with the camera. The processing chip takes on part of the data processing, which lightens and simplifies the server's load and ultimately speeds up the real-time positioning response.
Although the invention has been described with reference to preferred embodiments, various improvements can be made to it, and components therein can be replaced with equivalents, without departing from the scope of the invention. In particular, as long as there is no structural conflict, the technical features mentioned in the embodiments can be combined in any way. The invention is not limited to the specific embodiments disclosed herein, but includes all technical solutions falling within the scope of the appended claims.

Claims (14)

1. A computer vision based positioning system, characterised by comprising:
an image acquisition module (10) for collecting a sample image corresponding to a static scene, as well as a target image after a moving person and/or object appears in the scene; and
a computing module (20), connected or integrated to the image acquisition module (10), for receiving the sample image and the target image and performing compression, feature extraction and comparison on them to obtain the actual position information in the static scene of the person and/or object corresponding to the target image.
2. The positioning system according to claim 1, characterised in that the computing module (20) includes:
an image processing unit (21) for compressing images to 2^n × 2^n pixels using a perceptual hash algorithm and computing the hash feature value of the compressed image as a unique fingerprint identifying the image content.
3. The positioning system according to claim 2, characterised in that the image processing unit (21) compresses images to 2^n × 2^n pixels using the perceptual hash algorithm, where the value of n ranges from 5 to 7.
4. The positioning system according to claim 2, characterised in that the computing module (20) also includes:
a judging unit (22) for comparing the fingerprint of each image collected in real time and processed by the image processing unit with the fingerprint of the sample image, to judge whether the real-time image is a target image.
5. The positioning system according to claim 4, characterised in that the computing module (20) also includes:
a position calculation unit (23) for calculating, when the judging unit's fingerprint comparison finds that the fingerprints differ, the actual position in the static scene of the person and/or object corresponding to the target image.
6. The positioning system according to claim 5, characterised in that the position calculation unit (23) converts the 2^n × 2^n pixel images corresponding to the fingerprints of the target image and the sample image into grayscale pictures, subtracts the two grayscale pictures to obtain a difference image, and finds, among the cells occupied by the target, the cell nearest the image centre as the relative position of the target.
7. The positioning system according to claim 6, characterised in that the relative position (x, y) is computed as:

x = iX/N; y = jY/N;

where N = 2^n is the number of pixels along one side of the compressed picture, (i, j) is the index of the cell corresponding to the target, and X, Y are the two-dimensional extent of the real-world region corresponding to the image from the calibration of the image acquisition device.
8. The positioning system according to claim 6 or 7, characterised in that the computing module (20) also includes:
a conversion unit (24) for converting the relative position into an absolute position according to the position of the image acquisition module.
9. The positioning system according to claim 1, characterised in that the image acquisition module (10) includes a visible-light or infrared camera, and the sample image and target image are photos taken by the camera.
10. The positioning system according to claim 1, characterised in that the positioning system also includes:
a connection unit (30) for connecting the image acquisition module and the computing module over a medium/low-speed wireless Internet of Things network.
11. The positioning system according to claim 10, characterised in that the medium/low-speed wireless Internet of Things network includes Zigbee or Bluetooth.
12. A positioning method employing the positioning system according to any one of the above claims, characterised by comprising the following steps:
(1) collecting a sample image and computing its fingerprint information;
(2) collecting a target image and computing its fingerprint information using the same algorithm as for the sample image;
(3) comparing the fingerprint information of the sample image with that of the target image and computing the relative position of the target.
13. The positioning method of the positioning system according to claim 12, characterised in that the fingerprint information of an image is computed by compressing the image to 2^n × 2^n pixels with a perceptual hash algorithm and then computing the hash feature value of the compressed image.
14. The positioning method of the positioning system according to claim 12, characterised in that the positioning method also includes step (4): converting the relative position into an absolute position according to the calibration data of the image acquisition module in the sample image.
CN201610741005.9A 2016-08-26 2016-08-26 Computer vision based positioning system and positioning method thereof Pending CN106403951A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610741005.9A CN106403951A (en) 2016-08-26 2016-08-26 Computer vision based positioning system and positioning method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610741005.9A CN106403951A (en) 2016-08-26 2016-08-26 Computer vision based positioning system and positioning method thereof

Publications (1)

Publication Number Publication Date
CN106403951A (en) 2017-02-15

Family

ID=58003855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610741005.9A Pending CN106403951A (en) 2016-08-26 2016-08-26 Computer vision based positioning system and positioning method thereof

Country Status (1)

Country Link
CN (1) CN106403951A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110095119A (en) * 2018-01-29 2019-08-06 光禾感知科技股份有限公司 Distributing indoor locating system and distributing indoor orientation method
CN111148033A (en) * 2019-12-19 2020-05-12 广州赛特智能科技有限公司 Auxiliary navigation method of self-moving equipment
CN113115216A (en) * 2021-02-22 2021-07-13 浙江大华技术股份有限公司 Indoor positioning method, service management server and computer storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103631698A (en) * 2013-12-20 2014-03-12 中安消技术有限公司 Camera PTZ (pan/tilt/zoom) control method and device for target tracking
CN104866616A (en) * 2015-06-07 2015-08-26 中科院成都信息技术股份有限公司 Method for searching monitor video target
CN105139011A (en) * 2015-09-07 2015-12-09 浙江宇视科技有限公司 Method and apparatus for identifying vehicle based on identification marker image
CN105200938A (en) * 2015-08-27 2015-12-30 广西交通科学研究院 Vision-based anti-collision system for gate rail
CN105225281A (en) * 2015-08-27 2016-01-06 广西交通科学研究院 A kind of vehicle checking method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20170215