CN106949881B - Fast visual localization method for mobile robots - Google Patents

Fast visual localization method for mobile robots

Info

Publication number
CN106949881B
CN106949881B (application CN201710106104.4A)
Authority
CN
China
Prior art keywords
image
sampling
size
module
location information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710106104.4A
Other languages
Chinese (zh)
Other versions
CN106949881A (en)
Inventor
车自远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201710106104.4A priority Critical patent/CN106949881B/en
Publication of CN106949881A publication Critical patent/CN106949881A/en
Application granted granted Critical
Publication of CN106949881B publication Critical patent/CN106949881B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • G01C11/06Interpretation of pictures by comparison of two or more pictures of the same area
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/36Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a fast visual localization method for mobile robots. The method mainly uses three modules: a CCD sensing module, an image processing module, and a location information solving module. The basic procedure of the algorithm is as follows: determine the down-sampling rate from the object's size feature and its minimum resolvable size; down-sample the image; process the down-sampled image to obtain the object's coordinates; determine, from the object coordinates and the down-sampling rate, the region of the source image containing the object; then perform image segmentation and feature extraction on that region of the source image. The processing of one high-resolution image is thereby converted into the processing of two low-resolution images, which greatly reduces the workload and time of image processing while preserving visual localization accuracy. The invention satisfies the real-time requirement of visual image processing well.

Description

Fast visual localization method for mobile robots
Technical field
The present invention relates to the field of intelligent robotics, and in particular to a fast visual localization method for mobile robots.
Background technique
Mobile robots are carriers of artificial-intelligence technology. They have broad application prospects in civil fields such as materials handling, volcano and seabed exploration, and intelligent cleaning, and in military fields such as reconnaissance, bomb and hazard disposal, and protection against nuclear contamination. Self-localization is a basic requirement of mobile robots, and visual localization is a main development direction of robot self-localization technology. However, with the development of CCD sensors and rising self-localization requirements, the resolution of the images to be processed grows higher and higher; the time required by traditional image-processing methods increases greatly, seriously affecting the real-time performance of mobile-robot visual localization.
Summary of the invention
The present invention provides a fast visual localization method for mobile robots that overcomes the shortcomings of the prior art: it greatly shortens image-processing time, effectively improves image-processing efficiency, and satisfies the real-time requirement of visual localization.
A fast visual localization method for mobile robots mainly uses three modules:
A. a CCD sensing module, for acquiring images of the environment and transmitting the acquired images to the image processing module;
B. an image processing module, for receiving the images transmitted by the CCD sensing module and performing image processing: it extracts the image feature information used to solve the robot's location, and also extracts the object's size feature for processing the next image;
C. a location information solving module, for receiving the image feature information from the image processing module and computing the robot's location. The method comprises the following steps:
1) continuously acquire environment images with the CCD sensing module and transmit them to the image processing module; go to step 2);
2) if the current image is the first frame of the video, go to step 3); otherwise go to step 4);
3) the image processing module processes the received image, extracts the image feature information, and sends it to the location information solving module for solving the robot's location; it also extracts the object's size feature for processing the next image; go to step 5);
4) determine the down-sampling rate from the object's size feature and the minimum resolvable size, and down-sample the source image; process the down-sampled image to obtain the object's coordinates; determine, from the object coordinates and the down-sampling rate, the region of the source image containing the object; process that region of the source image, extract the image feature information, and send it to the location information solving module for solving the robot's location; also extract the object's size feature for processing the next image; go to step 5);
5) the location information solving module receives the feature information sent by the image processing module and computes the mobile robot's location; go to step 2).
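The step sequence above can be arranged as a processing loop. The sketch below is illustrative only; the helper functions are hypothetical stubs standing in for the patent's segmentation, feature-extraction, and location-solving stages, not the actual implementation.

```python
# Hypothetical stubs for the processing stages; a real system would perform
# segmentation, feature extraction, and location solving here.
def process_full(frame):
    # step 3): full-resolution processing of the first frame
    return {"coords": (0, 0, 1, 1)}, 100.0   # (features, size feature c)

def process_downsampled(frame, size_feature, min_resolvable):
    # step 4): down-sample, locate the object, re-process its source region
    return {"coords": (0, 0, 1, 1)}, size_feature

def solve_location(features):
    # step 5): solve the robot location from the extracted features
    return features["coords"]

def locate_loop(frames, min_resolvable=20.0):
    """Steps 1)-5): full processing on the first frame, two-stage
    down-sampled processing on every later frame."""
    size_feature = None
    locations = []
    for i, frame in enumerate(frames):
        if i == 0:
            features, size_feature = process_full(frame)
        else:
            features, size_feature = process_downsampled(
                frame, size_feature, min_resolvable)
        locations.append(solve_location(features))
    return locations
```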
The down-sampling algorithm of step 4) is as follows:
Let the image resolution be a·b, the object size feature obtained from the previous image be c, and the minimum resolvable size be d. The down-sampling rate is then d²/c², and the image resolution after down-sampling is a·b·(d/c)², i.e., one pixel is sampled out of every (c/d)² pixels to form the new image.
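Concretely, with resolution a×b, size feature c carried over from the previous frame, and minimum resolvable size d, down-sampling can be sketched with NumPy strided slicing; the per-axis step round(c/d) is an assumption consistent with keeping one pixel out of every (c/d)².

```python
import numpy as np

def downsample(image, c, d):
    """Keep one pixel per round(c/d) pixels along each axis, i.e. roughly
    one pixel per (c/d)**2 pixels overall (down-sampling rate d**2/c**2)."""
    step = max(1, int(round(c / d)))
    return image[::step, ::step], step

# 1800x1200 source image, size feature c = 200, minimum resolvable size d = 20
src = np.zeros((1200, 1800), dtype=np.uint8)
small, step = downsample(src, c=200.0, d=20.0)
# after down-sampling the resolution is roughly a*b*(d/c)**2: 180 x 120
```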
The object coordinates of step 4) refer specifically to the coordinates of the object's bounding square, whose sides are parallel to the image boundaries; two opposite vertices then suffice to represent the square. If their coordinates are (x1, y1) and (x2, y2), the object coordinates are (x1, y1, x2, y2).
The algorithm of step 4) for determining the object's region in the source image is as follows:
Let the object coordinates in the down-sampled image be (x1, y1, x2, y2) with x1 < x2 and y1 < y2, and let the down-sampling rate of the source image be k. Considering that the object may grow larger while the mobile robot moves, the object's region in the source image is taken to be (k·x1/t, k·y1/t, k·x2·t, k·y2·t), where t is an amplification coefficient with value 1.2.
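A sketch of this region computation, assuming (as in the rest of the text) that multiplying a down-sampled coordinate by k maps it back to a source-image coordinate:

```python
def source_region(x1, y1, x2, y2, k, t=1.2):
    """Map the object's bounding-square corners (x1, y1), (x2, y2) found in
    the down-sampled image back to the source image, scaled by k and
    enlarged by the amplification coefficient t (the object may have
    grown between frames)."""
    assert x1 < x2 and y1 < y2
    return (k * x1 / t, k * y1 / t, k * x2 * t, k * y2 * t)

region = source_region(10, 10, 20, 20, k=10)
```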
Regarding the down-sampling algorithm:
When the object is square, the size feature is its side length; otherwise the size feature is the side length of the object's bounding square.
Regarding the down-sampling algorithm:
The minimum resolvable size is the smallest pixel size at which the object can still be identified at the given image resolution; depending on image-recognition demands and application requirements, the minimum pixel size used in practice can be larger than the theoretical value.
Regarding the down-sampling algorithm:
For a pixel of the new image with coordinates (x′, y′), the correspondence with the coordinates (x, y) of the original-image pixel is x = (c/d)·x′, y = (c/d)·y′.
Through adaptive down-sampling, the present invention converts the processing of one high-resolution image into the processing of two low-resolution images, greatly reducing the workload and time of image processing while preserving visual localization accuracy, improving image-processing efficiency, and meeting the real-time requirement of visual localization.
Detailed description of the invention
Fig. 1 is the module structure diagram of the hardware system;
Fig. 2 is the flow chart of the processing procedure;
Fig. 3 is the image to be processed.
Specific embodiment
A fast visual localization method for mobile robots: the hardware system, shown in Fig. 1, mainly comprises the following three modules:
A. a CCD sensing module, for acquiring images of the environment and transmitting the acquired images to the image processing module;
B. an image processing module, for receiving the images transmitted by the CCD sensing module and performing image processing: it extracts the image feature information used to solve the robot's location, and also extracts the object's size feature for processing the next image;
C. a location information solving module, for receiving the image feature information from the image processing module and computing the robot's location.
Referring to Fig. 2, the main processing steps of the method are as follows:
1) continuously acquire environment images with the CCD sensing module and transmit them to the image processing module; go to step 2);
2) if the current image is the first frame of the video, go to step 3); otherwise go to step 4);
3) the image processing module processes the received image, extracts the image feature information, and sends it to the location information solving module for solving the robot's location; it also extracts the object's size feature for processing the next image; go to step 5);
4) determine the down-sampling rate from the object's size feature and the minimum resolvable size, and down-sample the source image; process the down-sampled image to obtain the object's coordinates; determine, from the object coordinates and the down-sampling rate, the region of the source image containing the object; process that region of the source image, extract the image feature information, and send it to the location information solving module for solving the robot's location; also extract the object's size feature for processing the next image; go to step 5);
5) the location information solving module receives the feature information sent by the image processing module and computes the mobile robot's location; go to step 2).
Image processing mainly comprises image segmentation and feature extraction. Image segmentation uses the gray-level thresholding method, a point-based segmentation method: choose an appropriate gray-level threshold, compare each pixel's gray level with it, reassign pixels at or above the threshold the maximum gray value (e.g., 1), and reassign pixels below the threshold the minimum gray value (e.g., 0). This forms a new binary image that separates the object from the background. The gray-level thresholding method can be described by equation (1):
g(x, y) = 1 if f(x, y) ≥ T, and g(x, y) = 0 if f(x, y) < T (1),
where f(x, y) and g(x, y) are the source image and the reconstructed image respectively, and T is the gray-level threshold.
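Equation (1) is a plain global threshold and is one line in NumPy (the threshold value here is arbitrary, for illustration only):

```python
import numpy as np

def gray_threshold(f, T):
    """Equation (1): g(x, y) = 1 where f(x, y) >= T, else 0."""
    return (f >= T).astype(np.uint8)

img = np.array([[ 30, 120],
                [200,  99]], dtype=np.uint8)
binary = gray_threshold(img, T=100)   # object pixels become 1, background 0
```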
The idea of location solving is to compute the robot's distance to the object from the object's size, and from that distance to solve the robot's location. Hence only the object's size information needs to be extracted, and it can be determined from the object's bounding square.
The object coordinates refer specifically to the coordinates of the object's bounding square, whose sides are parallel to the image boundaries, so that two opposite vertices represent the square; suppose their coordinates are (x1, y1) and (x2, y2), so that the object coordinates are (x1, y1, x2, y2).
Suppose the object coordinates in the down-sampled image are (x1, y1, x2, y2) with x1 < x2 and y1 < y2, and the down-sampling rate of the source image is k. Considering that the object may grow larger while the mobile robot moves, the object's region in the source image is taken to be (k·x1/t, k·y1/t, k·x2·t, k·y2·t), where t is an amplification coefficient with value 1.2.
The adaptive down-sampling algorithm of the fast visual localization method is as follows:
Let the image resolution be a·b, the object size feature obtained from the previous image be c, and the minimum resolvable size be d. The down-sampling rate is then d²/c², and the image resolution after down-sampling is a·b·(d/c)², i.e., one pixel is sampled out of every (c/d)² pixels to form the new image. For a pixel of the new image with coordinates (x′, y′), the correspondence with the coordinates (x, y) of the original-image pixel is x = (c/d)·x′, y = (c/d)·y′.
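The coordinate correspondence above can be sketched with the sampling step rounded to an integer (an assumption made here for a concrete sketch):

```python
def to_source_coords(x_new, y_new, c, d):
    """Map a new-image pixel (x', y') to the source-image pixel (x, y) it
    was sampled from, assuming a per-axis sampling step of round(c/d)."""
    step = max(1, int(round(c / d)))
    return x_new * step, y_new * step

xy = to_source_coords(3, 5, c=200.0, d=20.0)   # step c/d = 10
```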
When the object is square, the size feature is its side length; otherwise the size feature is the side length of the object's bounding square.
The minimum resolvable size is the smallest pixel size at which the object can still be identified at the given image resolution; depending on image-recognition demands and application requirements, the minimum pixel size used in practice can be larger than the theoretical value. For example, the theoretical minimum resolvable size of a square is 3 pixels; considering image noise, the minimum resolvable size in practical applications should be set greater than 3 pixels.
In general, the higher the image resolution, the greater the image-processing workload and the longer the image-processing time. Here the image-processing workload is characterized by the image resolution. Suppose the image resolution obtained by the image-acquisition device is a·b; the image-processing workload of traditional visual localization is then
w_o = a·b (3).
Let the minimum resolvable size of the feature object be c and its size feature be d; the down-sampling rate is then c²/d², and the image resolution after down-sampling is a·b·(c/d)², i.e., one pixel is extracted out of every (d/c)² pixels to form the new image, so the image-processing workload after down-sampling is
w_d = a·b·(c/d)² (4).
The workload of processing the object's region of the source image is related to the size of the feature object, generally in proportion to the square of the size feature; it may be taken as t²·d². The image-processing workload of the present technique is then
w_t = a·b·(c/d)² + t²·d² (5),
where t = 1.2 is the amplification coefficient.
Let r be the efficiency improvement of the present technique over the traditional technique; r can be expressed as
r = w_o/w_t = a·b/(a·b·(c/d)² + t²·d²) (6).
It follows from equations (5) and (6) that, once the image resolution and the object are fixed, the efficiency improvement of the present technique over the traditional technique is determined by the object's size feature d. The efficiency improvement is maximal, and the image-processing time minimal, only when
d = (a·b·c²/t²)^(1/4).
For common application scenarios, d is much smaller than a and b, and c is much smaller than d, so a·b·(c/d)² + t²·d² is much smaller than a·b; that is, the technique significantly reduces the image-processing workload and improves the efficiency of image processing.
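Equations (3)–(6) can be checked numerically. The optimum d = (a·b·c²/t²)^(1/4) follows from setting the derivative of equation (5) with respect to d to zero; at that optimum the two terms of equation (5) are equal. A small verification sketch:

```python
def workloads(a, b, c, d, t=1.2):
    """Workloads of eqs (3)-(5); c is the minimum resolvable size and
    d the size feature, following that passage's notation."""
    w_o = a * b                        # eq (3): conventional workload
    w_d = a * b * (c / d) ** 2         # eq (4): down-sampled workload
    w_t = w_d + (t * d) ** 2           # eq (5): workload of this technique
    return w_o, w_d, w_t

def optimal_d(a, b, c, t=1.2):
    """Size feature minimizing eq (5): d = (a*b*c**2 / t**2) ** 0.25."""
    return (a * b * c ** 2 / t ** 2) ** 0.25

a, b, c = 1800, 1200, 20
d_star = optimal_d(a, b, c)
w_o, w_d, w_t = workloads(a, b, c, d_star)
# at d_star the two terms of eq (5) coincide and w_t is far below w_o
```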
Through adaptive down-sampling, the present invention converts the processing of one high-resolution image into the processing of two low-resolution images, greatly reducing the workload and time of image processing while preserving visual localization accuracy, improving image-processing efficiency, and meeting the real-time requirement of visual localization.
Embodiment
Suppose the image to be processed is as shown in Fig. 3, where the circle is the object to be extracted and the image resolution is 1800*1200. The size feature is the circle's diameter. Image segmentation is performed by the gray-level thresholding method with the threshold set to 100, the circle's parameters are extracted with the Hough transform, and the circle's diameter is found to be about 194.7. Taking the circle's minimum resolvable size as 20, the down-sampling rate is then about 100. By equation (6), the image-processing workload of the method described herein is theoretically only about 3.0% of that of the conventional method. Performing image processing and feature extraction on Fig. 3 with the conventional method takes about 0.86 s; with the present method it takes about 0.06 s. The actual time is 7.0% of the conventional method's, roughly consistent with theory. Therefore, when Fig. 3 is one frame of an actual localization video, conventional processing supports a frame rate of only 1.16 frames/s, whereas video localization generally requires 10–30 frames/s; the present method supports video processing at frame rates up to 16.7 frames/s. It thus effectively increases video-processing capability and can meet the real-time requirement of video localization well.
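The embodiment's figures can be reproduced from the formulas above; only the measured times of 0.86 s and 0.06 s are taken from the text, and the 3.0% figure is treated as approximate.

```python
a, b = 1800, 1200        # resolution of the image in Fig. 3
size = 194.7             # circle diameter found by the Hough transform
d_min = 20.0             # chosen minimum resolvable size of the circle
t = 1.2                  # amplification coefficient

rate = (size / d_min) ** 2                      # "about 100" (~94.8)
w_t = a * b * (d_min / size) ** 2 + (t * size) ** 2
ratio = w_t / (a * b)                           # theoretical workload ratio (~3%)

fps_conventional = 1 / 0.86                     # ~1.16 frames/s
fps_method = 1 / 0.06                           # ~16.7 frames/s
```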

Claims (6)

1. A fast visual localization method for a mobile robot, characterized in that three modules are used:
A. a CCD sensing module, for acquiring images of the environment and transmitting the acquired images to an image processing module;
B. an image processing module, for receiving the images transmitted by the CCD sensing module and performing image processing: extracting the image feature information used to solve the robot's location, and also extracting the object's size feature for processing the next image;
C. a location information solving module, for receiving the image feature information from the image processing module and computing the robot's location;
the method comprising the steps of:
1) continuously acquiring environment images with the CCD sensing module and transmitting them to the image processing module, then executing step 2);
2) if the current image is the first frame of the video, executing step 3), otherwise executing step 4);
3) the image processing module processing the received image, extracting the image feature information and sending it to the location information solving module for solving the robot's location, also extracting the object's size feature for processing the next image, then executing step 5);
4) determining the down-sampling rate from the object's size feature and the minimum resolvable size and down-sampling the source image; processing the down-sampled image to obtain the object's coordinates; determining, from the object coordinates and the down-sampling rate, the region of the source image containing the object; processing that region of the source image, extracting the image feature information and sending it to the location information solving module for solving the robot's location, also extracting the object's size feature for processing the next image, then executing step 5);
5) the location information solving module receiving the feature information sent by the image processing module and computing the mobile robot's location, then executing step 2);
wherein the down-sampling algorithm of step 4) is as follows:
let the image resolution be a·b, the object size feature obtained from the previous image be c, and the minimum resolvable size be d; the down-sampling rate is then d²/c², and the image resolution after down-sampling is a·b·(d/c)², i.e., one pixel is sampled out of every (c/d)² pixels to form the new image.
2. The method according to claim 1, characterized in that the object coordinates of step 4) refer specifically to the coordinates of the object's bounding square, whose sides are parallel to the image boundaries, so that two opposite vertices represent the square: if their coordinates are (x1, y1) and (x2, y2), the object coordinates are (x1, y1, x2, y2).
3. The method according to claim 1, characterized in that the algorithm of step 4) for determining the object's region in the source image is as follows: let the object coordinates in the down-sampled image be (x1, y1, x2, y2) with x1 < x2 and y1 < y2, and let the down-sampling rate of the source image be k; considering that the object may grow larger while the mobile robot moves, the object's region in the source image is (k·x1/t, k·y1/t, k·x2·t, k·y2·t), where t is an amplification coefficient with value 1.2.
4. The method according to claim 1, characterized in that, in the down-sampling algorithm, when the object is square the size feature is its side length, and otherwise the size feature is the side length of the object's bounding square.
5. The method according to claim 1, characterized in that, in the down-sampling algorithm, the minimum resolvable size is the smallest pixel size at which the object can still be identified at the given image resolution; depending on image-recognition demands and application requirements, the minimum pixel size used in practice can be larger than the theoretical value.
6. The method according to claim 1, characterized in that, in the down-sampling algorithm, for a pixel of the new image with coordinates (x′, y′), the correspondence with the coordinates (x, y) of the original-image pixel is x = (c/d)·x′, y = (c/d)·y′.
CN201710106104.4A 2017-02-24 2017-02-24 Fast visual localization method for mobile robots Expired - Fee Related CN106949881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710106104.4A CN106949881B (en) 2017-02-24 2017-02-24 Fast visual localization method for mobile robots


Publications (2)

Publication Number Publication Date
CN106949881A CN106949881A (en) 2017-07-14
CN106949881B true CN106949881B (en) 2019-04-30

Family

ID=59466479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710106104.4A Expired - Fee Related CN106949881B (en) 2017-02-24 2017-02-24 Fast visual localization method for mobile robots

Country Status (1)

Country Link
CN (1) CN106949881B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111238450B (en) * 2020-02-27 2021-11-30 北京三快在线科技有限公司 Visual positioning method and device
CN111951258A (en) * 2020-08-21 2020-11-17 名创优品(横琴)企业管理有限公司 Goods shelf out-of-stock early warning analysis system and method based on edge calculation

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007205982A (en) * 2006-02-03 2007-08-16 Fanuc Ltd Three-dimensional visual sensor
CN101660912B (en) * 2009-09-25 2012-12-05 湖南农业大学 Automatic navigating and positioning device and method
CN102168973B (en) * 2011-01-12 2012-10-03 湖南农业大学 Automatic navigating Z-shaft positioning method for omni-directional vision sensor and positioning system thereof
CN102252681A (en) * 2011-04-18 2011-11-23 中国农业大学 Global positioning system (GPS) and machine vision-based integrated navigation and positioning system and method
CN103198477B (en) * 2013-03-25 2015-07-15 沈阳理工大学 Apple fruitlet bagging robot visual positioning method

Also Published As

Publication number Publication date
CN106949881A (en) 2017-07-14

Similar Documents

Publication Publication Date Title
CN106780576B (en) RGBD data stream-oriented camera pose estimation method
CN110428433B (en) Canny edge detection algorithm based on local threshold
CN106548182B (en) Pavement crack detection method and device based on deep learning and main cause analysis
CN104268519B (en) Image recognition terminal and its recognition methods based on pattern match
CN104331876B (en) Method for detecting straight line and processing image and related device
CN110443199B (en) Point cloud posture identification method based on two-dimensional geometric profile
CN106778739B (en) A kind of curving transmogrified text page-images antidote
CN111402330B (en) Laser line key point extraction method based on planar target
CN102568006B (en) Visual saliency algorithm based on motion characteristic of object in video
CN106228569A (en) A kind of fish speed of moving body detection method being applicable to water quality monitoring
CN113327298B (en) Grabbing gesture estimation method based on image instance segmentation and point cloud PCA algorithm
CN106949881B (en) A kind of mobile robot fast vision localization method
CN106530313A (en) Sea-sky line real-time detection method based on region segmentation
CN110599522A (en) Method for detecting and removing dynamic target in video sequence
CN109285183B (en) Multimode video image registration method based on motion region image definition
CN103077398A (en) Livestock group number monitoring method based on embedded natural environment
CN110222647B (en) Face in-vivo detection method based on convolutional neural network
CN103514587B (en) Ship-based image-stabilizing method based on sea-sky boundary detecting
CN110473255A (en) A kind of ship bollard localization method divided based on multi grid
CN110111368B (en) Human body posture recognition-based similar moving target detection and tracking method
CN109410272B (en) Transformer nut recognition and positioning device and method
CN106355576A (en) SAR image registration method based on MRF image segmentation algorithm
CN111783580B (en) Pedestrian identification method based on human leg detection
CN109544608A (en) A kind of unmanned plane Image Acquisition feature registration method
CN111914699B (en) Pedestrian positioning and track acquisition method based on video stream of camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190430

Termination date: 20210224