CN109544615A - Image-based relocalization method, apparatus, terminal and storage medium - Google Patents

Image-based relocalization method, apparatus, terminal and storage medium

Info

Publication number
CN109544615A
CN109544615A (application CN201811415796.1A; granted as CN109544615B)
Authority
CN
China
Prior art keywords
frame image
image
parameter
current frame
first feature
Prior art date
Legal status
Granted
Application number
CN201811415796.1A
Other languages
Chinese (zh)
Other versions
CN109544615B (en)
Inventor
郑远力
顾照鹏
肖泽东
陈宗豪
Current Assignee
Shenzhen Tencent Information Technology Co Ltd
Original Assignee
Shenzhen Tencent Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Tencent Information Technology Co Ltd filed Critical Shenzhen Tencent Information Technology Co Ltd
Priority to CN201811415796.1A priority Critical patent/CN109544615B/en
Publication of CN109544615A publication Critical patent/CN109544615A/en
Application granted granted Critical
Publication of CN109544615B publication Critical patent/CN109544615B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention disclose an image-based relocalization method, apparatus, terminal and medium. The method includes: obtaining a current frame image and a reference frame image from an image set; extracting first feature information from the current frame image and second feature information from the reference frame image, and obtaining a matching relationship between the first feature information and the second feature information; determining a position parameter of the current frame image according to the matching relationship, the position parameter including a rotation parameter and a displacement parameter; adjusting the position parameter of the current frame image according to a three-dimensional spatial position corresponding to the reference frame image and the matching relationship; and determining the three-dimensional spatial position of the terminal according to the adjusted position parameter. Embodiments of the present invention can improve the success rate of relocalization and the accuracy of the relocalization result.

Description

Image-based relocalization method, apparatus, terminal and storage medium
Technical field
The present invention relates to the field of Internet technologies, in particular to the field of image processing, and more particularly to an image-based relocalization method, an image-based relocalization apparatus, a terminal, and a computer storage medium.
Background art
When using terminals such as VR (Virtual Reality) glasses or a VR headset, the position of the terminal usually needs to be tracked based on images captured by the terminal. Image-based positioning can inevitably fail for various reasons, for example: the camera of the terminal is occluded, the captured image is blurred because the terminal moves too much, or the ambient light is too strong or too weak. In such cases the terminal needs to be relocalized. Existing image-based relocalization techniques work as follows: when positioning fails, the current frame image captured by the camera of the terminal is obtained, and a similar reference frame image is found among the previously captured frames; the relative position parameter of the terminal is then computed directly from the matching relationship between the two-dimensional feature points of the current frame image and the three-dimensional spatial points of the reference frame image, thereby determining the three-dimensional spatial position of the terminal. In practice it has been found that these existing relocalization methods require the reference frame image to have a sufficient number of three-dimensional spatial points; otherwise matching cannot succeed, or an accurate matching relationship cannot be obtained, so that relocalization fails or the relocalization result has a large error.
Summary of the invention
Embodiments of the present invention provide an image-based relocalization method, apparatus, terminal and computer storage medium, which can improve the success rate of relocalization and the accuracy of the relocalization result.
In one aspect, an embodiment of the present invention provides an image-based relocalization method, which includes:
obtaining a current frame image and a reference frame image from an image set, the image set including multiple frames captured by a terminal, the current frame image being the most recently captured frame in the image set, and the reference frame image being a frame in the image set that was captured before the current frame image and is similar to it;
extracting first feature information from the current frame image and second feature information from the reference frame image, and obtaining a matching relationship between the first feature information and the second feature information;
determining a position parameter of the current frame image according to the matching relationship, the position parameter including a rotation parameter and a displacement parameter;
adjusting the position parameter of the current frame image according to a three-dimensional spatial position corresponding to the reference frame image and the matching relationship; and
determining the three-dimensional spatial position of the terminal according to the adjusted position parameter.
In another aspect, an embodiment of the present invention provides an image-based relocalization apparatus, which includes:
an obtaining unit, configured to obtain a current frame image and a reference frame image from an image set, the image set including multiple frames captured by a terminal, the current frame image being the most recently captured frame in the image set, and the reference frame image being a frame in the image set that was captured before the current frame image and is similar to it;
an extraction unit, configured to extract first feature information from the current frame image and second feature information from the reference frame image, and to obtain a matching relationship between the first feature information and the second feature information;
a determination unit, configured to determine a position parameter of the current frame image according to the matching relationship, the position parameter including a rotation parameter and a displacement parameter; and
an adjustment unit, configured to adjust the position parameter of the current frame image according to a three-dimensional spatial position corresponding to the reference frame image and the matching relationship;
the determination unit being further configured to determine the three-dimensional spatial position of the terminal according to the adjusted position parameter.
In yet another aspect, an embodiment of the present invention provides a terminal. The terminal includes an input device and an output device, and further includes:
a processor adapted to implement one or more instructions; and
a computer storage medium storing one or more instructions, the one or more instructions being adapted to be loaded by the processor to perform the following steps:
obtaining a current frame image and a reference frame image from an image set, the image set including multiple frames captured by the terminal, the current frame image being the most recently captured frame in the image set, and the reference frame image being a frame in the image set that was captured before the current frame image and is similar to it;
extracting first feature information from the current frame image and second feature information from the reference frame image, and obtaining a matching relationship between the first feature information and the second feature information;
determining a position parameter of the current frame image according to the matching relationship, the position parameter including a rotation parameter and a displacement parameter;
adjusting the position parameter of the current frame image according to a three-dimensional spatial position corresponding to the reference frame image and the matching relationship; and
determining the three-dimensional spatial position of the terminal according to the adjusted position parameter.
In still another aspect, an embodiment of the present invention provides a computer storage medium storing one or more instructions, the one or more instructions being adapted to be loaded by a processor to perform the following steps:
obtaining a current frame image and a reference frame image from an image set, the image set including multiple frames captured by a terminal, the current frame image being the most recently captured frame in the image set, and the reference frame image being a frame in the image set that was captured before the current frame image and is similar to it;
extracting first feature information from the current frame image and second feature information from the reference frame image, and obtaining a matching relationship between the first feature information and the second feature information;
determining a position parameter of the current frame image according to the matching relationship, the position parameter including a rotation parameter and a displacement parameter;
adjusting the position parameter of the current frame image according to a three-dimensional spatial position corresponding to the reference frame image and the matching relationship; and
determining the three-dimensional spatial position of the terminal according to the adjusted position parameter.
When relocalizing a terminal based on images, embodiments of the present invention first match the feature information of the current frame image against the feature information of the reference frame image; since the feature information of a frame image generally contains many feature points, matching feature information yields an accurate matching relationship, which helps improve the accuracy of relocalization. The position parameter of the current frame is then determined from the matching relationship and adjusted according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship, and the three-dimensional spatial position of the terminal is determined from the adjusted position parameter. Because this relocalization process does not need to match against the three-dimensional spatial points of the reference frame image, it avoids relocalization failing or being inaccurate because the reference frame image has too few three-dimensional spatial points, thereby improving both the success rate of relocalization and the accuracy of the relocalization result.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1a is a diagram of an application scenario of an image-based relocalization method according to an embodiment of the present invention;
Fig. 1b is a diagram of another application scenario of an image-based relocalization method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of an image-based relocalization method according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of an image-based relocalization method according to another embodiment of the present invention;
Fig. 4 is a diagram of an application scenario of an image-based relocalization method according to another embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an image-based relocalization apparatus according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention.
An embodiment of the present invention provides an image-based relocalization scheme that can be applied in a terminal to relocalize the terminal. The terminal here may include, but is not limited to: virtual-reality head-mounted display devices such as VR glasses and VR headsets, and mobile platforms such as mobile robots, driverless cars and drones. Taking a VR headset as the terminal as an example, the implementation of the image-based relocalization scheme of this embodiment is described below with reference to Fig. 1a and Fig. 1b. It can be understood that when the terminal is a device other than a VR headset, the analysis for the VR headset applies analogously.
The working principle of a VR headset is roughly as follows: when a user wears and uses the headset, the headset calls its camera assembly to capture images of the surroundings and computes the three-dimensional spatial position of the headset from the captured images, so that the corresponding virtual image can subsequently be displayed according to that position. However, as shown in Fig. 1a, during operation the computed three-dimensional spatial position may become inaccurate because the headset has been running for a long time, or positioning may fail because the camera assembly is occluded or the captured images are too blurred. The three-dimensional spatial position of the headset then cannot be computed, so the virtual image to be displayed cannot be determined accurately, and the headset cannot correctly present the corresponding virtual image to the user.
In this case, the image-based relocalization scheme provided by the embodiments of the present invention can be used to recompute the three-dimensional spatial position of the VR headset. The computation generally includes: ① The headset obtains a current frame image and a reference frame image from an image set; the image set here is the set of frames of the surroundings captured by the camera assembly of the headset, the current frame image is the most recently captured frame in the set, and the reference frame image is a frame in the set that was captured before the current frame image and is similar to it. ② The headset obtains a matching relationship between first feature information in the current frame image and second feature information in the reference frame image. The first feature information includes multiple first feature points in the current frame image and the two-dimensional coordinates of each first feature point; the second feature information includes multiple second feature points in the reference frame image and the two-dimensional coordinates of each second feature point. A feature point is a point in an image that has a distinct characteristic, effectively reflects the essential features of the image, and can identify an object in the image. For example, in a face image the feature points may be the points that form the facial features and facial contour, such as the points representing the eyes; in a vehicle image they may be the points that form the vehicle and its contour, such as the points representing the tires. The two-dimensional coordinate of a feature point is its pixel coordinate in the two-dimensional image coordinate system of the image. ③ The headset determines the position parameter of the current frame image according to the matching relationship. The position parameter may include a rotation parameter and a displacement parameter: the rotation parameter indicates the change in rotation angle of the current frame image relative to the reference frame image, and the displacement parameter indicates the displacement of the current frame image relative to the reference frame image. For example, a rotation parameter of 4 degrees means the current frame image has rotated 4 degrees relative to the reference frame image, and a displacement parameter of 4 millimeters means the current frame image has shifted 4 millimeters relative to the reference frame image. ④ The headset adjusts the position parameter of the current frame image according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship. ⑤ The headset determines its three-dimensional spatial position according to the adjusted position parameter; the three-dimensional spatial position corresponding to the reference frame image here may include the three-dimensional coordinates of target second feature points in the reference frame image, where the three-dimensional coordinate of a feature point is its coordinate in the three-dimensional space in which the headset is located. After the three-dimensional spatial position of the headset is obtained, the virtual image to be displayed can be determined according to it and presented to the user, as shown in Fig. 1b.
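The five-step computation above can be sketched as a toy end-to-end routine. The sketch below is not from the patent: it assumes (my simplification) a pure translation, so the rotation parameter is zero, the up-to-scale displacement is just the mean pixel offset of the matched point pairs, and a single known depth from the reference frame fixes the scale.

```python
def relocalize(matched_pairs, ref_position, ref_depth):
    """Toy end-to-end relocalization sketch (simplified, hypothetical):
    matched_pairs is a list of ((x, y) in current frame, (x, y) in
    reference frame) pixel coordinates, ref_position is the known 2D
    position associated with the reference frame, and ref_depth converts
    pixel offsets into metric units."""
    n = len(matched_pairs)
    # Step 3: determine the (up-to-scale) displacement parameter as the
    # mean pixel offset of the matched point pairs.
    dx = sum(cur[0] - ref[0] for cur, ref in matched_pairs) / n
    dy = sum(cur[1] - ref[1] for cur, ref in matched_pairs) / n
    # Step 4: adjust the displacement using the reference frame's 3D
    # information (the known depth scales pixels to metric units).
    disp = (dx * ref_depth, dy * ref_depth)
    # Step 5: terminal position = reference position + adjusted
    # displacement (rotation is assumed zero in this toy).
    return (ref_position[0] + disp[0], ref_position[1] + disp[1])
```

The real method works in three dimensions with a full rotation matrix; this sketch only illustrates how the matching relationship, the adjustment, and the final position determination fit together.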
Of course, the VR headset may further perform other application processing based on the three-dimensional spatial position, for example motion control: the position information of obstacles around the user can be determined from the three-dimensional spatial position, a control instruction can be generated from that position information, and the control instruction can be output so that the user can avoid the obstacles and move safely while using the VR glasses.
It can be seen that, during relocalization, the VR headset matches the first feature information in the current frame image against the second feature information in the reference frame image. Since feature points are points with distinct characteristics that effectively reflect the essential features of an image and identify the objects in it, the feature information extracted from the current frame image and from the reference frame image may each contain many feature points, and matching the feature information yields an accurate matching relationship, so that the three-dimensional spatial position of the headset obtained through the subsequent computations is more accurate and the success rate of relocalization is improved. The relocalization process shown in Fig. 1a and Fig. 1b does not need to match against the three-dimensional spatial points of the reference frame image, which avoids relocalization failing or being inaccurate because the reference frame image has too few three-dimensional spatial points, improving both the success rate of relocalization and the accuracy of the relocalization result.
Based on the above description, an embodiment of the present invention proposes an image-based relocalization method that can be executed by the terminal mentioned above. Referring to Fig. 2, the method may include the following steps S201-S205:
S201: obtain a current frame image and a reference frame image from an image set.
During operation, the terminal may call its camera assembly to capture images of the surroundings; each time the camera assembly captures a frame, the processor may add that frame to the image set in memory. The camera assembly of the terminal is a camera that can produce frames containing a depth image region, such as a binocular camera, a multi-view camera, or an RGBD camera; a depth image region is an image region whose pixel values are the distances (depths) from the camera assembly to the points in the surroundings. When the terminal detects a relocalization trigger event, it can obtain the current frame image and the reference frame image from the image set. The trigger event here may include the event that a relocalization period has elapsed and/or the event that positioning of the terminal has failed; the image set may include multiple frames captured by the terminal, the current frame image is the most recently captured frame in the set, and the reference frame image is a frame in the set that was captured before the current frame image and is similar to it.
In one embodiment, in order to avoid the computed three-dimensional spatial position of the terminal becoming inaccurate because the terminal has been running for a long time, the terminal may perform relocalization periodically; the trigger event in this case may include the event that the relocalization period has elapsed, i.e. that the time interval between the moment of the last relocalization and the current moment equals a preset time interval. Specifically, the terminal may obtain the time interval between the moment of the last relocalization and the current moment; if this interval reaches the preset interval, the relocalization period is considered to have elapsed, a relocalization trigger event is considered detected, and the current frame image and the reference frame image can be obtained from the image set.
In another embodiment, the relocalization trigger event may include the event that positioning of the terminal has failed. Specifically, the terminal may compute its three-dimensional spatial position using a positioning algorithm, which may include, but is not limited to: the KF (Kalman Filter) algorithm, the UKF (Unscented Kalman Filter), the sparse extended information filter SLAM (Simultaneous Localization And Mapping) algorithm, the PF (Particle Filter) algorithm, and so on. If the terminal cannot compute its three-dimensional spatial position using the positioning algorithm, positioning is considered to have failed, a relocalization trigger event is considered detected, and the current frame image and the reference frame image can be obtained from the image set.
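The two trigger events described above can be combined into a single check. The function below is a minimal sketch; the parameter names are mine, not the patent's:

```python
def relocalization_triggered(last_reloc_time: float, now: float,
                             preset_interval: float,
                             positioning_failed: bool) -> bool:
    """A relocalization trigger event is detected either when the preset
    relocalization period has elapsed since the last relocalization, or
    when image-based positioning of the terminal has failed."""
    period_elapsed = (now - last_reloc_time) >= preset_interval
    return period_elapsed or positioning_failed
```

In practice the terminal would evaluate this check on each frame and, whenever it returns True, fetch the current frame image and reference frame image from the image set.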
S202: extract first feature information from the current frame image and second feature information from the reference frame image, and obtain a matching relationship between the first feature information and the second feature information.
Since feature points are points with distinct characteristics that effectively reflect the essential features of an image and identify the objects in it, each frame image contains a large number of feature points. Therefore, after the current frame image and the reference frame image are obtained, a feature point detection algorithm may be applied to each of them to obtain the first feature information of the current frame image and the second feature information of the reference frame image. The feature point detection algorithm here may include, but is not limited to: the SIFT (Scale-Invariant Feature Transform) algorithm, the FAST (Features from Accelerated Segment Test) algorithm, the SURF (Speeded-Up Robust Features) algorithm, and so on. The first feature information includes multiple first feature points in the current frame image and the two-dimensional coordinates of each first feature point; the second feature information includes multiple second feature points in the reference frame image and the two-dimensional coordinates of each second feature point.
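As a concrete illustration of feature point detection, here is a deliberately tiny detector in the spirit of FAST (not the actual FAST, SIFT or SURF algorithms): a pixel counts as a feature point when its intensity differs sharply from all four axis-aligned neighbours.

```python
def detect_features(img, threshold=50):
    """Toy corner detector: img is a 2D list of grey values; returns the
    (row, col) pixel coordinates, i.e. the two-dimensional coordinates,
    of the detected feature points. The neighbourhood and threshold are
    illustrative simplifications of a real detector."""
    pts = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            p = img[r][c]
            nbrs = (img[r - 1][c], img[r + 1][c], img[r][c - 1], img[r][c + 1])
            if all(abs(p - n) > threshold for n in nbrs):
                pts.append((r, c))
    return pts
```

A production system would use one of the algorithms named above, which also attach a descriptor to each detected point so that points can later be matched between frames.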
In one embodiment, after the first feature information and the second feature information are obtained, a feature point matching algorithm may be used to match them and obtain the matching relationship between the first feature information and the second feature information. The feature point matching algorithm here may include, but is not limited to: the FAST algorithm, the SIFT algorithm, the ORB (Oriented FAST and Rotated BRIEF) algorithm, and so on. The matching relationship includes multiple point pairs and the matching degree of each pair, where a point pair consists of one first feature point and one second feature point; the pairs among them whose matching degree exceeds a preset matching threshold are the matched point pairs.
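A minimal sketch of threshold-based matching follows. The descriptor format and the matching-degree formula, 1 / (1 + Euclidean distance), are illustrative assumptions of mine, not the patent's method:

```python
import math

def match_features(first_pts, second_pts, threshold=0.75):
    """Pair each first feature point with its best-matching second feature
    point and keep only pairs whose matching degree exceeds the preset
    threshold. first_pts / second_pts are lists of (coord, descriptor)
    tuples, the descriptor being a tuple of numbers (a toy stand-in for a
    SIFT/ORB descriptor). Identical descriptors score a degree of 1.0."""
    def degree(d1, d2):
        return 1.0 / (1.0 + math.dist(d1, d2))

    pairs = []
    for c1, d1 in first_pts:
        best = max(second_pts, key=lambda p: degree(d1, p[1]))
        pairs.append((c1, best[0], degree(d1, best[1])))
    # Matched point pairs: matching degree greater than the threshold.
    return [p for p in pairs if p[2] > threshold]
```

Real matchers additionally apply cross-checks or ratio tests to reject ambiguous pairs; the threshold filter shown here corresponds to the preset matching threshold described above.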
In another embodiment, after the first feature information and the second feature information are obtained, a deep-learning-based feature point matching method may also be used to match them and obtain the matching relationship between the first feature information and the second feature information.
S203: determine the position parameter of the current frame image according to the matching relationship, the position parameter including a rotation parameter and a displacement parameter.
Since the reference frame image is a frame in the image set that was captured before the current frame image and is similar to it, and feature points are points with distinct characteristics that effectively reflect the essential features of an image and identify the objects in it, a large number of first feature points in the current frame image must match second feature points in the reference frame image, each matched pair corresponding to the same point on the same object in the surroundings. A first feature point in the current frame image can therefore be regarded as the image of its matching second feature point in the reference frame image after a positional change, a rotation and/or a displacement, in three-dimensional space. That is, the point pairs formed by the first feature points in the current frame image and their matching second feature points in the reference frame image satisfy a corresponding positional change trajectory, which can be represented in the form of a target essential matrix. An essential matrix is a matrix that represents the trajectory of feature point position changes; it can be composed of two parts, namely a displacement parameter (also called a translation vector) and a rotation parameter (also called a rotation matrix).
As noted above, the matching relationship includes multiple point pairs and the matching degree of each pair, a point pair consisting of one first feature point and one second feature point. Since the matching degree of a pair reflects whether its first feature point and second feature point match, once the matching relationship between the first feature information and the second feature information is obtained, the positional change trajectory between the first feature points and the second feature points (i.e. the target essential matrix) can be determined from it. Because the positional change of the feature points represents the positional change of the frame image that contains them, the position parameter of the current frame image can be determined from the target essential matrix once it has been computed. The position parameter of the current frame image here is the positional change parameter of the current frame image with respect to the reference frame image, and includes a rotation parameter and a displacement parameter.
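The composition of the essential matrix from its two parts can be made concrete. The sketch below (my own illustration, using standard epipolar geometry rather than anything specific to the patent) builds E = [t]_x R from a rotation matrix and a translation vector and checks the epipolar constraint, x2^T E x1 = 0, for a matched point pair expressed in normalized image coordinates:

```python
def skew(t):
    """Skew-symmetric matrix [t]_x, so that skew(t) applied to v is t x v."""
    tx, ty, tz = t
    return [[0.0, -tz, ty],
            [tz, 0.0, -tx],
            [-ty, tx, 0.0]]

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def essential_matrix(R, t):
    """E = [t]_x R: the essential matrix composed of the rotation
    parameter (rotation matrix R) and displacement parameter (t)."""
    return matmul(skew(t), R)

def epipolar_residual(E, x1, x2):
    """x2^T E x1; approximately zero for a matched point pair whose
    normalized coordinates x1, x2 satisfy the essential matrix."""
    return sum(a * b for a, b in zip(x2, matvec(E, x1)))
```

In the relocalization method, the direction of reasoning is reversed: the matched point pairs are used to estimate E (for example with a five-point algorithm inside RANSAC), and E is then decomposed to recover the rotation parameter and the displacement parameter.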
S204: adjust the position parameter of the current frame image according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship.
S205: determine the three-dimensional spatial position of the terminal according to the adjusted position parameter.
When the positional change trajectory between the first feature points and the second feature points (i.e. the target essential matrix) is determined in step S203 from the matching relationship between the first feature information and the second feature information, a certain error may be introduced, so that the computed position parameter of the current frame image is inaccurate. Therefore, in S204 the embodiment of the present invention may adjust the position parameter of the current frame image according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship, obtaining an adjusted position parameter; the three-dimensional spatial position corresponding to the reference frame image here includes the three-dimensional coordinates of the target second feature points in the matched point pairs that satisfy the target essential matrix. After the adjusted position parameter of the current frame image is obtained, the three-dimensional spatial position of the terminal can be determined from it in S205. In one embodiment, when adjusting the position parameter of the current frame image, the displacement parameter in the position parameter may be adjusted; correspondingly, the adjusted position parameter may include the rotation parameter of the current frame image and the adjusted displacement parameter.
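One concrete form such a displacement adjustment can take (a sketch under my own assumptions, not necessarily the adjustment the patent claims): an essential matrix determines the translation only up to an unknown scale, and the known three-dimensional coordinates of the target second feature points supply depths that fix that scale.

```python
def adjust_displacement(t_unit, depths_up_to_scale, depths_from_reference):
    """Rescale the up-to-scale translation t_unit so that the point depths
    it implies agree (on average) with the depths known from the reference
    frame's 3D points; the rotation parameter is left unchanged."""
    ratios = [d_ref / d_est
              for d_est, d_ref in zip(depths_up_to_scale,
                                      depths_from_reference)]
    scale = sum(ratios) / len(ratios)
    return [scale * c for c in t_unit]
```

A more robust variant would use a median or a least-squares fit over many points instead of a plain average, since individual depth estimates can be noisy.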
When relocating the terminal based on images, the embodiment of the present invention first matches the feature information of the current frame image against the feature information of the reference frame image. Because the feature information of a frame image generally contains many feature points, an accurate matching relationship can be obtained by matching the feature information, which helps improve the accuracy of the relocation. The position parameter of the current frame is then determined according to the matching relationship, the position parameter is adjusted according to the three-dimensional space position corresponding to the reference frame image and the matching relationship, and the three-dimensional space position of the terminal is determined according to the adjusted position parameter. Since the above relocation process does not need to match against the three-dimensional space points of the reference frame image, relocation failure or low accuracy caused by an insufficient number of three-dimensional space points of the reference frame image is avoided, improving both the success rate of relocation and the accuracy of the relocation result.
Refer to Fig. 3, which is a flow diagram of another image-based relocation method provided in an embodiment of the present invention. The image-based relocation method can be executed by the above-mentioned terminal. As shown in Fig. 3, the image-based relocation method may include the following steps S301-S308:
S301: construct an image collection.
The image collection here may include a keyframe image subset and a non-keyframe image subset. The keyframe image subset includes at least one keyframe image and the acquisition time of each keyframe image. In one embodiment, the keyframe image subset may also include the position information of each keyframe image, where the position information of a keyframe image includes the rotation parameter and/or displacement parameter of the keyframe image relative to the first frame image. The non-keyframe image subset includes at least one non-keyframe image and the acquisition time of each non-keyframe image. The terminal can call its camera assembly to capture images of the environment, obtaining multiple frame images. Each time the terminal captures a frame image, the frame image can be added to the image collection; that is, the image collection is updated in real time.
In a specific implementation, the first frame image captured by the terminal can be set as a keyframe image, and the first frame image together with its acquisition time is added to the keyframe image subset. In the subsequent image acquisition process, each time the terminal captures a new frame image, the acquisition time of the new frame image is obtained, and the target keyframe image whose acquisition time differs least from that of the new frame image is selected from the keyframe image subset. The similarity between the new frame image and the target keyframe image is then obtained. If the similarity is less than a preset similarity threshold, the new frame image is set as a keyframe image, and the new frame image and its acquisition time are added to the keyframe image subset; otherwise, the new frame image is set as a non-keyframe image, and the new frame image and its acquisition time are added to the non-keyframe image subset.
It can be seen that the embodiment of the present invention judges whether a new frame image is a keyframe image or a non-keyframe image according to the similarity between the new frame image and the target keyframe image. If the similarity is less than the preset similarity threshold, the position of the terminal may have changed substantially; that is, the new frame image records the environment captured when the terminal's position changed significantly, so the new frame image should be taken as a keyframe image. If the similarity is not less than the preset similarity threshold, the new frame image is similar to the target keyframe image and the terminal is running smoothly, so the new frame image can be taken as a non-keyframe image rather than a keyframe image. Keeping such a frame out of the keyframe image subset avoids repeatedly computing, when the reference frame image of the current frame image is later selected from the keyframe image subset, the similarity between the current frame image and both the new frame image and the similar target keyframe image, thereby saving processing resources.
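The keyframe-selection rule described above can be sketched as follows. This is an illustrative sketch, not the patented implementation: the similarity function is left as a caller-supplied placeholder, and the threshold value 0.85 is an assumption supplied for the example.

```python
# Sketch of the keyframe-selection rule of step S301.
SIM_THRESHOLD = 0.85  # assumed "preset similarity threshold"

def classify_frame(new_frame, new_time, keyframes, similarity):
    """Decide whether a newly captured frame is a keyframe.
    keyframes: non-empty list of (image, acquisition_time) tuples.
    similarity: callable returning a value in [0, 1] for two images."""
    # Target keyframe: the one whose acquisition time is closest to the new frame's.
    target, _ = min(keyframes, key=lambda kf: abs(kf[1] - new_time))
    if similarity(new_frame, target) < SIM_THRESHOLD:
        return "key"       # scene changed a lot: record as a keyframe
    return "non-key"       # similar to an existing keyframe
```

A real implementation would also append the frame and its acquisition time to the corresponding subset, as the text describes.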
S302: obtain the current frame image and the reference frame image from the image collection.
When obtaining the current frame image and the reference frame image from the image collection, the frame image with the latest acquisition time can be taken from the image collection as the current frame image; that is, the newest frame image captured by the camera assembly serves as the current frame image. For example, suppose the image collection contains three frame images and their acquisition times: frame image A at 10:35, frame image B at 10:30, and frame image C at 10:25. Comparing the acquisition times of the three frame images shows that frame image A has the latest acquisition time (10:35), so frame image A is taken as the current frame image.
Then the keyframe image with the highest similarity to the current frame image is selected from the keyframe image subset as the reference frame image. Specifically, after the current frame image has been determined, an image-similarity method can be used to compute the similarity between the current frame image and each keyframe image in the keyframe image subset, and the keyframe image with the highest similarity is chosen as the reference frame image. Image-similarity methods here may include, but are not limited to, the perceptual hash algorithm, methods based on histogram matching, and image-similarity computation methods based on feature points.
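As one illustration of the image-similarity methods mentioned above, a minimal average-hash variant of the perceptual-hash idea can be sketched as follows. The hash size and the block-mean downsampling are assumptions of this sketch, not details taken from the embodiment:

```python
import numpy as np

def average_hash(gray, hash_size=8):
    """Tiny average hash: downsample by block means, threshold at the mean."""
    h, w = gray.shape
    # crop so both dimensions divide evenly, then block-mean downsample
    small = gray[:h - h % hash_size, :w - w % hash_size]
    small = small.reshape(hash_size, h // hash_size,
                          hash_size, w // hash_size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()  # hash_size*hash_size bits

def hash_similarity(a, b):
    """Fraction of matching hash bits between two grayscale images (1.0 = identical)."""
    return float(np.mean(average_hash(a) == average_hash(b)))
```

Two frames of the same scene produce mostly equal hash bits, so the keyframe maximizing `hash_similarity` against the current frame would serve as the reference frame image.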
S303: extract first feature information from the current frame image and second feature information from the reference frame image, and obtain the matching relationship between the first feature information and the second feature information.
S304: determine the position parameter of the current frame image according to the matching relationship, the position parameter including a rotation parameter and a displacement parameter.
In step S303, a feature-point detection algorithm such as SIFT can first be applied to the current frame image and the reference frame image respectively to detect feature points, thereby obtaining the first feature information of the current frame image and the second feature information of the reference frame image. The first feature information includes multiple first feature points in the current frame image and the two-dimensional coordinates of each first feature point; the second feature information includes multiple second feature points in the reference frame image and the two-dimensional coordinates of each second feature point. A feature-point matching algorithm such as ORB matching is then applied to the first feature information and the second feature information to obtain the matching relationship between them.
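The matching step can be illustrated with a small brute-force matcher for ORB-style binary descriptors. The Hamming-distance metric and the ratio test are common choices assumed for this sketch rather than details taken from the embodiment:

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.75):
    """Brute-force Hamming matching with a ratio test.
    desc1, desc2: (N, B) uint8 arrays of binary descriptors (e.g. ORB).
    Returns a list of (i, j, distance): feature i in the current frame
    matched to feature j in the reference frame."""
    # pairwise Hamming distance via XOR + bit count
    x = np.bitwise_xor(desc1[:, None, :], desc2[None, :, :])
    dist = np.unpackbits(x, axis=2).sum(axis=2)
    matches = []
    for i, row in enumerate(dist):
        order = np.argsort(row)
        best = order[0]
        second = order[1] if len(order) > 1 else order[0]
        # keep only matches clearly better than the runner-up
        if row[best] < ratio * row[second]:
            matches.append((i, int(best), int(row[best])))
    return matches
```

The resulting pairs, with a matching degree derived from the distance, would play the role of the point pairs in the matching relationship.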
After the matching relationship is obtained, the position parameter of the current frame image can be determined according to the matching relationship in step S304. The specific implementation may include the following steps s11-s13:
s11: iterate using the first feature information and the second feature information to compute at least one essential matrix.
Specifically, the RANSAC (Random Sample Consensus) algorithm can be used to iteratively compute at least one essential matrix from the first feature information and the second feature information. The RANSAC algorithm computes the parameters of a mathematical model from a sample data set containing both abnormal data and normal data, where abnormal data are data that cannot be described or constrained by the mathematical model and normal data are data that can. In the process of iteratively computing an essential matrix with RANSAC, multiple point pairs can be randomly selected to form a sample data set, together with an initial matrix. In the sample data set, point pairs whose matching degree is less than a preset matching threshold are the abnormal data, and point pairs whose matching degree is greater than the preset matching threshold (i.e., the matching point pairs) are the normal data. The initial matrix is then iteratively fitted with the sample data set to obtain at least one essential matrix.
s12: determine the target essential matrix from the at least one essential matrix based on the matching relationship, where, among the at least one essential matrix, the constraint of the target essential matrix is satisfied by the most matching point pairs.
Because the sample data set in the essential-matrix computation is randomly selected, each computed essential matrix may be different, and the number of matching point pairs satisfying the constraint of each essential matrix differs. After at least one essential matrix has been obtained, the matrix that most accurately represents the position change of the current frame image relative to the reference frame image can be determined from among the candidates. Practice shows that the more matching point pairs satisfy the constraint of an essential matrix, the better that essential matrix represents the position change of the current frame image relative to the reference frame image. Therefore, the target essential matrix can be filtered out of the at least one essential matrix according to the matching degree of each matching point pair in the matching relationship. Specifically, the number of matching point pairs satisfying the constraint of each essential matrix is obtained, and the essential matrix with the largest number is chosen as the target essential matrix.
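Step s12 can be sketched as follows: given several candidate essential matrices, count for each candidate the matching point pairs that satisfy its epipolar constraint, and keep the candidate with the most. The tolerance `eps` and the use of normalized image coordinates are assumptions of this sketch:

```python
import numpy as np

def epipolar_inliers(E, pts1, pts2, eps=1e-3):
    """Count point pairs satisfying the epipolar constraint x2^T E x1 ~ 0.
    pts1, pts2: (N, 2) normalized image coordinates of matched pairs."""
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])  # homogeneous coords
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    residual = np.abs(np.einsum('ni,ij,nj->n', x2, E, x1))
    return int((residual < eps).sum())

def pick_target_essential(candidates, pts1, pts2):
    """Among candidate essential matrices, keep the one whose constraint
    is satisfied by the most matching point pairs (step s12)."""
    return max(candidates, key=lambda E: epipolar_inliers(E, pts1, pts2))
```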
s13: parse the target essential matrix to obtain the position parameter of the current frame image, the position parameter including a rotation parameter and a displacement parameter.
As described above, the target essential matrix is composed of two parts, a displacement parameter and a rotation parameter. Therefore, after the target essential matrix has been determined, it can be decomposed to obtain the displacement parameter and the rotation parameter, which together serve as the position parameter of the current frame image. In one embodiment, the SVD (Singular Value Decomposition) algorithm can be used to decompose the target essential matrix into the displacement parameter and the rotation parameter. It should be noted that the position parameter of the current frame image refers to the position-change parameter of the current frame image relative to the reference frame image.
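The SVD-based parsing of the target essential matrix can be sketched as follows. Note that the standard decomposition yields four (R, t) candidates, recovers the translation only up to sign and scale, and the cheirality check (keeping the solution with points in front of both cameras) that selects the physical one is omitted here:

```python
import numpy as np

def decompose_essential(E):
    """SVD decomposition of an essential matrix into the four candidate
    (rotation, translation) pairs."""
    U, _, Vt = np.linalg.svd(E)
    # enforce proper rotations (determinant +1)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]  # translation direction, known only up to sign and scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

The scale ambiguity of t here is exactly why the embodiment later introduces the scale-change parameter s.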
S305: obtain the three-dimensional space position corresponding to the reference frame image, which includes the three-dimensional coordinates of the target second feature points in the matching point pairs that satisfy the target essential matrix.
In a specific implementation, the multiple matching point pairs can be screened with the target essential matrix to obtain the matching point pairs that satisfy the target essential matrix. The first feature point in each such matching point pair is taken as a target first feature point, and the second feature point as a target second feature point. After the target second feature points have been determined, their three-dimensional coordinates can be obtained. Specifically, the two-dimensional coordinates of a target second feature point in the reference frame image are obtained, together with the depth image region of the reference frame image. Since the depth image region is an image region whose pixel values are the distances (depths) from the camera assembly to the points in the environment, the three-dimensional coordinates of the target second feature point can be determined by combining the image information of the depth image region with the two-dimensional coordinates of the target second feature point. Finally, the three-dimensional coordinates of the target second feature points constitute the three-dimensional space position corresponding to the reference frame image.
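Combining a feature point's two-dimensional coordinates with its depth value can be sketched with the usual pinhole back-projection; the intrinsic parameters fx, fy, cx, cy are the ones the embodiment defines for the camera assembly:

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Recover the 3-D camera-frame coordinates of pixel (u, v) from its
    depth value, under the pinhole camera model."""
    z = float(depth)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```

Applying this to each target second feature point and its depth yields the three-dimensional coordinates that make up the reference frame image's three-dimensional space position.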
S306: adjust the displacement parameter of the current frame image according to the three-dimensional space position corresponding to the reference frame image and the matching relationship, to obtain an adjusted displacement parameter.
After the three-dimensional space position corresponding to the reference frame image has been obtained, the displacement parameter of the current frame image can be adjusted according to that three-dimensional space position and the matching relationship, to obtain the adjusted displacement parameter. The specific implementation may include the following steps s21-s23:
s21: determine the three-dimensional coordinates of the corresponding target first feature points according to the position parameter of the current frame image and the three-dimensional coordinates of the target second feature points.
Since the position parameter of the current frame image refers to the position-change parameter of the current frame image relative to the reference frame image, the current frame image can be obtained by applying the rotation parameter and displacement parameter to the reference frame image. During this position change from the reference frame image to the current frame image, the three-dimensional coordinates of a target second feature point in the reference frame image undergo a corresponding position change, and the changed coordinates correspond to the three-dimensional coordinates of the corresponding target first feature point in the current frame image. Therefore, the three-dimensional coordinates of a target second feature point can be transformed with the rotation parameter and displacement parameter in the position parameter of the current frame image, so as to obtain the three-dimensional coordinates of the corresponding target first feature point.
For example, let the three-dimensional coordinates of a target second feature point be P(X, Y, Z), let the rotation parameter in the position parameter of the current frame image be R = [R1, R2, R3], where R1, R2, and R3 denote the first, second, and third row of R respectively, and let the displacement parameter be t = [t1, t2, t3], where t1, t2, and t3 denote the first, second, and third row of t respectively. Considering that the computation of the target essential matrix carries a certain error, the displacement parameter may be subject to a scale change, so a scale-change parameter s is introduced. After the rotation parameter R, the displacement parameter t, and the scale-change parameter s have been determined, P(X, Y, Z) can be transformed with formula 1.1 to obtain the three-dimensional coordinates P'(X', Y', Z') of the target first feature point corresponding to the target second feature point.
P' = R·P + s·t    (formula 1.1)
s22: convert the three-dimensional coordinates of the target first feature point into two-dimensional coordinates.
The frame images in the image collection are captured by the camera assembly of the terminal, so the three-dimensional coordinates of a feature point can be converted into two-dimensional coordinates through the intrinsic parameters of the camera assembly. In a specific implementation, the intrinsic matrix K of the camera assembly is first obtained, which can be written as in formula 1.2:
K = [fx 0 cx; 0 fy cy; 0 0 1]    (formula 1.2)
where fx denotes the ratio of the focal length of the camera assembly to the physical size of a pixel along the horizontal axis x, fy denotes the ratio of the focal length to the physical size of a pixel along the vertical axis y, cx denotes the number of horizontal pixels between the image-center pixel coordinate and the image-origin pixel coordinate, and cy denotes the number of vertical pixels between the image-center pixel coordinate and the image-origin pixel coordinate. After K has been obtained, the converted two-dimensional coordinates p'(u', v') of the target first feature point can be obtained from the intrinsic matrix K and the three-dimensional coordinates of the target first feature point, where u' = fx·X'/Z' + cx and v' = fy·Y'/Z' + cy.
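The conversion from the three-dimensional coordinates P'(X', Y', Z') to the two-dimensional coordinates p'(u', v') through the intrinsic matrix K can be sketched as:

```python
def project(P, fx, fy, cx, cy):
    """Project a 3-D camera-frame point P = (X', Y', Z') to pixel
    coordinates with intrinsics K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]."""
    X, Y, Z = P
    return (fx * X / Z + cx, fy * Y / Z + cy)
```

This is the inverse of the depth back-projection used for the reference frame: back-projecting a pixel and re-projecting the resulting point returns the original pixel.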
s23: adjust the displacement parameter of the current frame image according to the two-dimensional coordinates obtained by converting the target first feature point and the two-dimensional coordinates of the target first feature point in the first feature information, to obtain the adjusted displacement parameter.
First, the coordinate correspondence between a target second feature point and its target first feature point can be determined from the two-dimensional coordinates obtained by conversion and the two-dimensional coordinates of the target first feature point in the first feature information. Let the two-dimensional coordinates of the target first feature point be p(u, v). As described above, a first feature point in the current frame image should be the point obtained by mapping its matching second feature point in the reference frame image into the current frame image after the position change of rotation and/or displacement in three-dimensional space; that is, the converted two-dimensional coordinates should equal the two-dimensional coordinates of the target first feature point contained in the first feature information. Therefore, an equality can be established between the converted two-dimensional coordinates p'(u', v') of the target first feature point and the two-dimensional coordinates p(u, v) of the target first feature point in the first feature information, as in formula 1.3:
u' = u,  v' = v    (formula 1.3)
Combining formula 1.3 with the above formula 1.1 gives the coordinate correspondence between the target second feature point and the target first feature point, as shown in formula 1.4:
u = fx·(R1·P + s·t1)/(R3·P + s·t3) + cx,  v = fy·(R2·P + s·t2)/(R3·P + s·t3) + cy    (formula 1.4)
For ease of subsequent computation, formula 1.4 can be rearranged into a form linear in s, giving formula 1.5:
s·(fx·t1 + (cx − u)·t3) = (u − cx)·R3·P − fx·R1·P,  s·(fy·t2 + (cy − v)·t3) = (v − cy)·R3·P − fy·R2·P    (formula 1.5)
After the coordinate correspondence between the target second feature points and the target first feature points has been obtained, the scale-change value of the displacement parameter can be solved from the coordinate correspondence. Specifically, based on formula 1.5, the target first feature points and target second feature points in all matching point pairs that satisfy the target essential matrix can be used to form a multi-dimensional system of equations, as shown in formula 1.6:
[fx·t1 + (cx − u1)·t3; fy·t2 + (cy − v1)·t3; …; fx·t1 + (cx − un)·t3; fy·t2 + (cy − vn)·t3] · s = [(u1 − cx)·R3·P1 − fx·R1·P1; (v1 − cy)·R3·P1 − fy·R2·P1; …; (un − cx)·R3·Pn − fx·R1·Pn; (vn − cy)·R3·Pn − fy·R2·Pn]    (formula 1.6)
where P1, P2, … Pn denote the three-dimensional coordinates of the different target second feature points, and (u1, v1), (u2, v2), … (un, vn) denote the two-dimensional coordinates of the different target first feature points in the first feature information.
The value of the scale-change parameter s can be computed from formula 1.6; this value is the scale-change value. For ease of computation, let A denote the left-hand part of formula 1.6 and B the right-hand part. Formula 1.6 can then be written as formula 1.7:
A·s = B    (formula 1.7)
From formula 1.7, the scale-change value can be computed as s = (AᵀA)⁻¹AᵀB.
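The least-squares solution s = (AᵀA)⁻¹AᵀB for this stacked single-unknown system can be sketched as follows; assembling the entries of A and B from the matched pairs follows the preceding derivation and is not repeated here:

```python
import numpy as np

def solve_scale(A, B):
    """Least-squares solution of A*s = B for a single unknown s.
    A, B: 1-D arrays (one entry per stacked equation).
    For one unknown, (A^T A)^-1 A^T B reduces to dot products."""
    A = np.asarray(A, dtype=float).ravel()
    B = np.asarray(B, dtype=float).ravel()
    return float(A @ B) / float(A @ A)
```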
Then the displacement parameter can be scaled with the scale-change value to obtain the adjusted displacement parameter. Specifically, the displacement parameter can be scaled with formula 1.8 to obtain the adjusted displacement parameter t_true:
t_true = t·s    (formula 1.8)
S307: determine the three-dimensional space position of the terminal according to the adjusted position parameter.
The adjusted position parameter includes the rotation parameter of the current frame image and the adjusted displacement parameter. Since the rotation parameter of the current frame image and the adjusted displacement parameter are position-change parameters of the current frame image relative to the reference frame image, determining the three-dimensional space position of the terminal from the adjusted position parameter also requires the reference position parameter of the reference frame image. Specifically, the reference position parameter of the reference frame image is obtained, which includes the reference rotation parameter and the reference displacement parameter of the reference frame image relative to the first frame image in the image collection. A standard rotation parameter is then computed from the rotation parameter of the current frame image and the reference rotation parameter; the standard rotation parameter is the rotation-change parameter of the current frame image relative to the first frame image. A standard displacement parameter is computed from the adjusted displacement parameter and the reference displacement parameter; the standard displacement parameter is the displacement-change parameter of the current frame image relative to the first frame image. Finally, the three-dimensional space position of the terminal is determined from the standard rotation parameter and the standard displacement parameter. Specifically, the standard rotation parameter and the standard displacement parameter can be sent to a spatial positioning program, which processes them with a SLAM method to obtain the three-dimensional space position of the terminal.
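Computing the standard rotation and displacement parameters amounts to chaining the current frame's pose relative to the reference frame with the reference frame's pose relative to the first frame. The composition order and sign convention (x_cur = R·x_ref + t) are assumptions of this sketch, since the text does not fix a convention:

```python
import numpy as np

def compose_pose(R_rel, t_rel, R_ref, t_ref):
    """Chain (R_rel, t_rel), the current frame's pose relative to the
    reference frame, with (R_ref, t_ref), the reference frame's pose
    relative to the first frame, under x_cur = R_rel * x_ref + t_rel."""
    R_std = R_rel @ R_ref            # standard rotation parameter
    t_std = R_rel @ t_ref + t_rel    # standard displacement parameter
    return R_std, t_std
```

Here t_rel would be the adjusted displacement parameter t_true; the returned pair is what the text calls the standard rotation and displacement parameters relative to the first frame image.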
S308: perform business processing according to the three-dimensional space position of the terminal, where the business processing includes virtual-image display processing and/or motion control processing.
If the business processing is virtual-image display processing, then after the three-dimensional space position of the terminal has been determined, the posture data of the user can be computed from the three-dimensional space position of the terminal, the virtual image to be output is determined according to the posture data, and the virtual image is displayed in the user interface.
If the business processing is motion control processing, then after the three-dimensional space position of the terminal has been determined, the positional relationship between the terminal and the obstacles in the environment can be determined, and a control instruction is generated based on that positional relationship to control the terminal to move safely and avoid colliding with the obstacles in the environment.
When relocating the terminal based on images, the embodiment of the present invention first matches the feature information of the current frame image against the feature information of the reference frame image. Because the feature information of a frame image generally contains many feature points, an accurate matching relationship can be obtained by matching the feature information, which helps improve the accuracy of the relocation. The position parameter of the current frame is then determined according to the matching relationship, the position parameter is adjusted according to the three-dimensional space position corresponding to the reference frame image and the matching relationship, and the three-dimensional space position of the terminal is determined according to the adjusted position parameter. Since the above relocation process does not need to match against the three-dimensional space points of the reference frame image, relocation failure or low accuracy caused by an insufficient number of three-dimensional space points of the reference frame image is avoided, improving both the success rate of relocation and the accuracy of the relocation result.
Based on the above description, an embodiment of the present invention also proposes an application-scenario diagram of the image-based relocation method, as shown in Fig. 4. In this application scenario, the terminal is a delivery robot and the business processing is motion control processing. While transporting packages, the delivery robot needs to call its camera assembly to capture images of the environment, and performs simultaneous localization and map reconstruction based on the captured images. A control instruction is then generated according to the reconstructed map, and the robot moves safely according to the control instruction.
When the delivery robot moves so violently that the captured images are too blurred, or the camera assembly is blocked so that no image can be captured, the delivery robot cannot be localized and therefore cannot determine how to move next. In this case, the delivery robot can obtain the current frame image and the reference frame image from the image collection; obtain the matching relationship between the first feature information in the current frame image and the second feature information in the reference frame image; determine the position parameter of the current frame image according to the matching relationship; adjust the position parameter of the current frame image according to the three-dimensional space position corresponding to the reference frame image and the matching relationship; and determine the three-dimensional space position of the delivery robot according to the adjusted position parameter. After the three-dimensional space position of the delivery robot has been obtained, map reconstruction can be performed according to it; the delivery robot can then generate a control instruction according to the reconstructed map, for example "keep walking straight along the present road", and continue moving along the present road according to that control instruction.
Based on the description of the above embodiments of the image-based relocation method, an embodiment of the present invention also discloses an image-based relocation apparatus. The image-based relocation apparatus may be a computer program (including program code) running in the terminal, or a physical apparatus contained in the terminal. The image-based relocation apparatus can execute the methods shown in Fig. 2 to Fig. 3. Referring to Fig. 5, the image-based relocation apparatus can run the following units:
an acquiring unit 101, configured to obtain a current frame image and a reference frame image from an image collection, where the image collection includes multiple frame images captured by the terminal, the current frame image is the frame image with the latest acquisition time in the image collection, and the reference frame image is a frame image in the image collection whose acquisition time precedes the current frame image and which is similar to the current frame image;
an extraction unit 102, configured to extract first feature point information from the current frame image and second feature point information from the reference frame image, and to obtain the matching relationship between the first feature point information and the second feature point information;
a determination unit 103, configured to determine the position parameter of the current frame image according to the matching relationship, the position parameter including a rotation parameter and a displacement parameter;
an adjustment unit 104, configured to adjust the position parameter of the current frame image according to the three-dimensional space position corresponding to the reference frame image and the matching relationship;
the determination unit 103 being further configured to determine the three-dimensional space position of the terminal according to the adjusted position parameter.
In one embodiment, the image-based relocation apparatus may also include a construction unit 105, configured to construct the image collection, where the image collection includes a keyframe image subset and a non-keyframe image subset; the keyframe image subset includes at least one keyframe image and the acquisition time of each keyframe image; and the non-keyframe image subset includes at least one non-keyframe image and the acquisition time of each non-keyframe image.
In another embodiment, when constructing the image collection, the construction unit 105 is specifically configured to:
set the first frame image captured by the terminal as a keyframe image, and add the first frame image and its acquisition time to the keyframe image subset;
each time the terminal captures a new frame image, obtain the acquisition time of the new frame image;
select, from the keyframe image subset, the target keyframe image whose acquisition time differs least from that of the new frame image;
obtain the similarity between the new frame image and the target keyframe image;
if the similarity is less than a preset similarity threshold, set the new frame image as a keyframe image and add the new frame image and its acquisition time to the keyframe image subset; otherwise, set the new frame image as a non-keyframe image and add the new frame image and its acquisition time to the non-keyframe image subset.
In another embodiment, when obtaining the current frame image and the reference frame image from the image collection, the acquiring unit 101 is specifically configured to:
obtain the frame image with the latest acquisition time from the image collection as the current frame image;
select, from the keyframe image subset, the keyframe image with the highest similarity to the current frame image as the reference frame image.
In another embodiment, the first feature information includes multiple first feature points in the current frame image and the two-dimensional coordinates of each first feature point; the second feature information includes multiple second feature points in the reference frame image and the two-dimensional coordinates of each second feature point;
the matching relationship includes multiple point pairs and the matching degree of each point pair, where a point pair consists of one first feature point and one second feature point, and the point pairs whose matching degree is greater than a preset matching threshold are the matching point pairs.
In another embodiment, when the determination unit 103 determines the location parameter of the current frame image according to the matching relationship, it is specifically configured to:
Iterate over the first feature information and the second feature information to compute at least one essential matrix;
Determine a target essential matrix from the at least one essential matrix based on the matching relationship, the target essential matrix being the one, among the at least one essential matrix, whose constraint is satisfied by the largest number of matching point pairs;
Decompose the target essential matrix to obtain the location parameter of the current frame image, the location parameter including a rotation parameter and a displacement parameter.
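The selection step can be sketched as follows: given several candidate essential matrices, keep the one whose epipolar constraint x2ᵀ E x1 = 0 is satisfied by the most matching point pairs. How each candidate is estimated (e.g. a five-point solver on minimal subsets) and how the winner is decomposed into rotation and displacement are left open here; only the inlier-counting selection is shown, and the tolerance `eps` is an assumed parameter:

```python
import numpy as np

def pick_target_essential(candidates, pts1, pts2, eps=1e-3):
    """Select the target essential matrix: the candidate whose epipolar
    constraint is satisfied (within eps) by the largest number of matching
    point pairs. Points are normalized homogeneous coordinates."""
    best_E, best_count = None, -1
    for E in candidates:
        residuals = [abs(x2 @ E @ x1) for x1, x2 in zip(pts1, pts2)]
        count = sum(r < eps for r in residuals)  # pairs satisfying the constraint
        if count > best_count:
            best_E, best_count = E, count
    return best_E, best_count
```

This mirrors the RANSAC-style consensus criterion: the matrix supported by the most point pairs is taken as the target essential matrix.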
In another embodiment, when the adjustment unit 104 adjusts the location parameter of the current frame image according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship, it is specifically configured to:
Obtain the three-dimensional spatial position corresponding to the reference frame image, which includes the three-dimensional coordinates of the target second feature points in the matching point pairs that satisfy the target essential matrix;
Adjust the displacement parameter of the current frame image according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship, obtaining an adjusted displacement parameter;
The adjusted location parameter then includes the rotation parameter of the current frame image and the adjusted displacement parameter.
In another embodiment, when the adjustment unit 104 adjusts the displacement parameter of the current frame image according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship to obtain the adjusted displacement parameter, it is specifically configured to:
Determine the three-dimensional coordinates of the corresponding target first feature points according to the location parameter of the current frame image and the three-dimensional coordinates of the target second feature points;
Convert the three-dimensional coordinates of the target first feature points into two-dimensional coordinates;
Adjust the displacement parameter of the current frame image according to the converted two-dimensional coordinates of the target first feature points and the two-dimensional coordinates of the target first feature points in the first feature information, obtaining the adjusted displacement parameter.
In another embodiment, when the adjustment unit 104 adjusts the displacement parameter of the current frame image according to the converted two-dimensional coordinates of the target first feature points and the two-dimensional coordinates of the target first feature points in the first feature information to obtain the adjusted displacement parameter, it is specifically configured to:
Determine the coordinate correspondence between the target second feature points and the target first feature points according to the converted two-dimensional coordinates of the target first feature points and the two-dimensional coordinates of the target first feature points in the first feature information;
Compute the scale change value of the displacement parameter based on the coordinate correspondence;
Apply the scale change value to the displacement parameter to perform the scale change, obtaining the adjusted displacement parameter.
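The motivation for this scale change is that a monocular essential matrix yields the displacement only up to scale. One way to realise the computation, under the assumption that the displacement from the essential matrix is a unit vector and that the scale is recovered by minimising reprojection error of the known 3D target second feature points against the observed 2D target first feature points; the patent does not specify the optimisation, so the grid search below is purely an illustrative stand-in:

```python
import numpy as np

def recover_scale(R, t_unit, pts3d_ref, pts2d_cur,
                  scales=np.linspace(0.1, 5.0, 491)):
    """Pick the displacement scale whose reprojection error is smallest.
    pts3d_ref: 3D coordinates of target second feature points (reference frame map).
    pts2d_cur: observed normalized 2D coordinates in the current frame."""
    best_s, best_err = None, np.inf
    for s in scales:
        err = 0.0
        for Xw, uv in zip(pts3d_ref, pts2d_cur):
            Xc = R @ Xw + s * t_unit          # transform into current camera frame
            proj = Xc[:2] / Xc[2]             # project to normalized 2D coordinates
            err += np.sum((proj - uv) ** 2)   # accumulate reprojection error
        if err < best_err:
            best_s, best_err = s, err
    return best_s
```

The returned scale multiplies the unit displacement to give the adjusted displacement parameter; a real implementation would use a closed-form or iterative solver instead of a grid.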
In another embodiment, when the determination unit 103 determines the three-dimensional spatial position of the terminal according to the adjusted location parameter, it is specifically configured to:
Obtain the reference position parameter of the reference frame image, the reference position parameter including a reference rotation parameter and a reference displacement parameter of the reference frame image relative to the first frame image in the image set;
Compute a standard rotation parameter from the rotation parameter of the current frame image and the reference rotation parameter, the standard rotation parameter being the rotation change of the current frame image relative to the first frame image;
Compute a standard displacement parameter from the adjusted displacement parameter and the reference displacement parameter, the standard displacement parameter being the displacement change of the current frame image relative to the first frame image;
Determine the three-dimensional spatial position of the terminal based on the standard rotation parameter and the standard displacement parameter.
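The composition of the current-to-reference pose with the reference-to-first-frame pose can be sketched as below, assuming the convention X_cam = R · X_world + t; the patent does not state a convention, so these formulas are one consistent possibility, not the definitive computation:

```python
import numpy as np

def compose_pose(R_cur_ref, t_cur_ref, R_ref_first, t_ref_first):
    """Chain the current frame's pose relative to the reference frame with
    the reference frame's pose relative to the first frame. Returns the
    standard rotation/displacement parameters and the terminal's 3D
    position expressed in the first frame's coordinate system."""
    R_std = R_cur_ref @ R_ref_first              # standard rotation parameter
    t_std = R_cur_ref @ t_ref_first + t_cur_ref  # standard displacement parameter
    position = -R_std.T @ t_std                  # camera (terminal) center in world
    return R_std, t_std, position
```

With identity rotations, displacements simply add, and the terminal position is the negated cumulative displacement expressed in the first frame's coordinates.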
In another embodiment, the image-based relocation apparatus may further include a service unit 106 configured to perform service processing according to the three-dimensional spatial position of the terminal, the service processing including virtual image display processing and/or movement control processing.
According to an embodiment of the present invention, each step of the methods shown in Fig. 2 and Fig. 3 may be performed by the corresponding unit of the image-based relocation apparatus shown in Fig. 5. For example, steps S201 and S202 in Fig. 2 may be performed by the acquiring unit 101 and the extraction unit 102 shown in Fig. 5 respectively, steps S203 and S205 by the determination unit 103 shown in Fig. 5, and step S204 by the adjustment unit 104 shown in Fig. 5. Likewise, step S301 in Fig. 3 may be performed by the construction unit 105 shown in Fig. 5, steps S302 and S303 by the acquiring unit 101 and the extraction unit 102 respectively, steps S304 and S307 by the determination unit 103, steps S305-S306 by the adjustment unit 104, and step S308 by the service unit 106 shown in Fig. 5.
According to another embodiment of the present invention, the units of the image-based relocation apparatus shown in Fig. 5 may, individually or collectively, be merged into one or several other units, or one or more of them may be further split into functionally smaller units; this achieves the same operations without affecting the technical effect of the embodiments of the present invention. The above units are divided by logical function; in practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units by a single unit. In other embodiments of the present invention, the image-based relocation apparatus may also include other units, and in practical applications these functions may be realized with the assistance of, and in cooperation with, multiple other units.
According to another embodiment of the present invention, the image-based relocation apparatus shown in Fig. 5 may be constructed, and the image-based relocation method of the embodiments of the present invention implemented, by running, on a general-purpose computing device such as a computer that includes processing elements and storage elements such as a central processing unit (CPU), a random access memory (RAM) and a read-only memory (ROM), a computer program (including program code) capable of performing the steps of the methods shown in Fig. 2 and Fig. 3. The computer program may be recorded on, for example, a computer-readable recording medium, loaded into the above computing device via that medium, and run therein.
When relocating the terminal based on an image, the embodiments of the present invention first match the feature information of the current frame image against the feature information of the reference frame image. Because the feature information of a frame image generally contains many feature points, this matching yields an accurate matching relationship, which benefits the accuracy of relocation. The location parameter of the current frame is then determined from the matching relationship, adjusted according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship, and the three-dimensional spatial position of the terminal is determined from the adjusted location parameter. Since this relocation process requires no matching against the three-dimensional spatial points of the reference frame image, it avoids relocation failures or low accuracy caused by the number of such points, improving both the success rate of relocation and the accuracy of the relocation result.
Based on the above method and apparatus embodiments, an embodiment of the present invention further provides a terminal. Referring to Fig. 6, the terminal includes at least a processor 201, an input device 202, an output device 203 and a computer storage medium 204. The input device 202 may include a camera assembly for capturing images of the environment; the camera assembly may be a component configured in the terminal when it leaves the factory, or an external module connected to the terminal. The processor 201, input device 202, output device 203 and computer storage medium 204 in the terminal may be connected by a bus or by other means.
The computer storage medium 204 may reside in the memory of the terminal and is used to store a computer program comprising program instructions; the processor 201 is used to execute the program instructions stored in the computer storage medium 204. The processor 201 (or CPU, Central Processing Unit) is the computing and control core of the terminal, adapted to load and execute one or more instructions so as to implement the corresponding method flows or functions. In one embodiment, the processor 201 of the embodiment of the present invention may be used to perform a series of relocation operations according to the acquired current frame image, including: obtaining a current frame image and a reference frame image from an image set; extracting first feature information from the current frame image and second feature information from the reference frame image, and obtaining the matching relationship between the first feature information and the second feature information; determining the location parameter of the current frame image, including a rotation parameter and a displacement parameter, according to the matching relationship; adjusting the location parameter of the current frame image according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship; determining the three-dimensional spatial position of the terminal according to the adjusted location parameter; and so on.
An embodiment of the present invention further provides a computer storage medium (memory), which is a memory device in the terminal used to store programs and data. It can be understood that the computer storage medium here may include both a built-in storage medium of the terminal and, of course, an extended storage medium supported by the terminal. The computer storage medium provides storage space that stores the operating system of the terminal; the storage space also holds one or more instructions suitable for being loaded and executed by the processor 201, and these instructions may be one or more computer programs (including program code). It should be noted that the computer storage medium here may be a high-speed RAM memory or a non-volatile memory, such as at least one magnetic disk memory; optionally, it may also be at least one computer storage medium located remotely from the aforementioned processor.
In one embodiment, the processor 201 may load and execute one or more instructions stored in the computer storage medium to implement the corresponding steps of the image-based relocation method embodiments above. In a specific implementation, the one or more instructions in the computer storage medium are loaded by the processor 201 to perform the following steps:
obtaining a current frame image and a reference frame image from an image set, the image set including multiple frame images collected by the terminal, the current frame image being the most recently acquired frame image in the image set, and the reference frame image being a frame image in the image set that was acquired before the current frame image and is similar to the current frame image;
extracting first feature point information from the current frame image and second feature point information from the reference frame image, and obtaining the matching relationship between the first feature point information and the second feature point information;
determining the location parameter of the current frame image according to the matching relationship, the location parameter including a rotation parameter and a displacement parameter;
adjusting the location parameter of the current frame image according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship;
determining the three-dimensional spatial position of the terminal according to the adjusted location parameter.
In one embodiment, the one or more instructions may also be loaded by the processor 201 to specifically perform:
constructing the image set, the image set including a key frame image subset and a non-key-frame image subset; the key frame image subset including at least one key frame image and the acquisition time of the at least one key frame image; the non-key-frame image subset including at least one non-key-frame image and the acquisition time of the at least one non-key-frame image.
In another embodiment, when the image set is constructed, the one or more instructions are loaded by the processor 201 to specifically perform:
setting the first frame image collected by the terminal as a key frame image, and adding the first frame image and its acquisition time to the key frame image subset;
each time the terminal collects a new frame image, obtaining the acquisition time of the new frame image;
selecting, from the key frame image subset, the target key frame image whose acquisition time differs least from that of the new frame image;
obtaining the similarity between the new frame image and the target key frame image;
if the similarity is below the preset similarity threshold, setting the new frame image as a key frame image and adding the new frame image and its acquisition time to the key frame image subset; otherwise, setting the new frame image as a non-key-frame image and adding the new frame image and its acquisition time to the non-key-frame image subset.
In another embodiment, when the current frame image and the reference frame image are obtained from the image set, the one or more instructions are loaded by the processor 201 to specifically perform:
obtaining the most recently acquired frame image from the image set and determining it as the current frame image;
selecting, from the key frame image subset, the key frame image with the highest similarity to the current frame image and determining it as the reference frame image.
In another embodiment, the first feature information includes multiple first feature points in the current frame image and the two-dimensional coordinates of each first feature point; the second feature information includes multiple second feature points in the reference frame image and the two-dimensional coordinates of each second feature point;
the matching relationship includes multiple point pairs and the matching degree of each point pair; wherein a point pair consists of one first feature point and one second feature point, and the point pairs whose matching degree exceeds the preset matching threshold are the matching point pairs.
In another embodiment, when the location parameter of the current frame image is determined according to the matching relationship, the one or more instructions are loaded by the processor 201 to specifically perform:
iterating over the first feature information and the second feature information to compute at least one essential matrix;
determining a target essential matrix from the at least one essential matrix based on the matching relationship, the target essential matrix being the one, among the at least one essential matrix, whose constraint is satisfied by the largest number of matching point pairs;
decomposing the target essential matrix to obtain the location parameter of the current frame image, the location parameter including a rotation parameter and a displacement parameter.
In another embodiment, when the location parameter of the current frame image is adjusted according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship, the one or more instructions are loaded by the processor 201 to specifically perform:
obtaining the three-dimensional spatial position corresponding to the reference frame image, which includes the three-dimensional coordinates of the target second feature points in the matching point pairs that satisfy the target essential matrix;
adjusting the displacement parameter of the current frame image according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship, obtaining an adjusted displacement parameter;
wherein the adjusted location parameter includes the rotation parameter of the current frame image and the adjusted displacement parameter.
In another embodiment, when the displacement parameter of the current frame image is adjusted according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship to obtain the adjusted displacement parameter, the one or more instructions are loaded by the processor 201 to specifically perform:
determining the three-dimensional coordinates of the corresponding target first feature points according to the location parameter of the current frame image and the three-dimensional coordinates of the target second feature points;
converting the three-dimensional coordinates of the target first feature points into two-dimensional coordinates;
adjusting the displacement parameter of the current frame image according to the converted two-dimensional coordinates of the target first feature points and the two-dimensional coordinates of the target first feature points in the first feature information, obtaining the adjusted displacement parameter.
In another embodiment, when the displacement parameter of the current frame image is adjusted according to the converted two-dimensional coordinates of the target first feature points and the two-dimensional coordinates of the target first feature points in the first feature information to obtain the adjusted displacement parameter, the one or more instructions are loaded by the processor 201 to specifically perform:
determining the coordinate correspondence between the target second feature points and the target first feature points according to the converted two-dimensional coordinates of the target first feature points and the two-dimensional coordinates of the target first feature points in the first feature information;
computing the scale change value of the displacement parameter based on the coordinate correspondence;
applying the scale change value to the displacement parameter to perform the scale change, obtaining the adjusted displacement parameter.
In another embodiment, when the three-dimensional spatial position of the terminal is determined according to the adjusted location parameter, the one or more instructions are loaded by the processor 201 to specifically perform:
obtaining the reference position parameter of the reference frame image, the reference position parameter including a reference rotation parameter and a reference displacement parameter of the reference frame image relative to the first frame image in the image set;
computing a standard rotation parameter from the rotation parameter of the current frame image and the reference rotation parameter, the standard rotation parameter being the rotation change of the current frame image relative to the first frame image;
computing a standard displacement parameter from the adjusted displacement parameter and the reference displacement parameter, the standard displacement parameter being the displacement change of the current frame image relative to the first frame image;
determining the three-dimensional spatial position of the terminal based on the standard rotation parameter and the standard displacement parameter.
In another embodiment, the one or more instructions may also be loaded by the processor 201 to specifically perform:
performing service processing according to the three-dimensional spatial position of the terminal, the service processing including virtual image display processing and/or movement control processing.
When relocating the terminal based on an image, the embodiments of the present invention first match the feature information of the current frame image against the feature information of the reference frame image; because the feature information of a frame image generally contains many feature points, this matching yields an accurate matching relationship, which benefits the accuracy of relocation. The location parameter of the current frame is then determined from the matching relationship and adjusted according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship; the three-dimensional spatial position of the terminal is determined from the adjusted location parameter. Since this relocation process requires no matching against the three-dimensional spatial points of the reference frame image, it avoids relocation failures or low accuracy caused by the number of such points, improving both the success rate of relocation and the accuracy of the relocation result.
The above disclosure describes only preferred embodiments of the present invention and of course cannot limit the scope of the claims; equivalent changes made in accordance with the claims of the present invention therefore remain within the scope of the present invention.

Claims (14)

1. An image-based relocation method, applied to a terminal, characterized in that it comprises:
obtaining a current frame image and a reference frame image from an image set, the image set including multiple frame images collected by the terminal, the current frame image being the most recently acquired frame image in the image set, and the reference frame image being a frame image in the image set that was acquired before the current frame image and is similar to the current frame image;
extracting first feature information from the current frame image and second feature information from the reference frame image, and obtaining a matching relationship between the first feature information and the second feature information;
determining a location parameter of the current frame image according to the matching relationship, the location parameter including a rotation parameter and a displacement parameter;
adjusting the location parameter of the current frame image according to a three-dimensional spatial position corresponding to the reference frame image and the matching relationship;
determining a three-dimensional spatial position of the terminal according to the adjusted location parameter.
2. The method of claim 1, characterized in that the method further comprises:
constructing the image set, the image set including a key frame image subset and a non-key-frame image subset; the key frame image subset including at least one key frame image and the acquisition time of the at least one key frame image; the non-key-frame image subset including at least one non-key-frame image and the acquisition time of the at least one non-key-frame image.
3. The method of claim 2, characterized in that constructing the image set comprises:
setting the first frame image collected by the terminal as a key frame image, and adding the first frame image and its acquisition time to the key frame image subset;
each time the terminal collects a new frame image, obtaining the acquisition time of the new frame image;
selecting, from the key frame image subset, the target key frame image whose acquisition time differs least from that of the new frame image;
obtaining the similarity between the new frame image and the target key frame image;
if the similarity is below the preset similarity threshold, setting the new frame image as a key frame image and adding the new frame image and its acquisition time to the key frame image subset; otherwise, setting the new frame image as a non-key-frame image and adding the new frame image and its acquisition time to the non-key-frame image subset.
4. The method of claim 2, characterized in that obtaining the current frame image and the reference frame image from the image set comprises:
obtaining the most recently acquired frame image from the image set and determining it as the current frame image;
selecting, from the key frame image subset, the key frame image with the highest similarity to the current frame image and determining it as the reference frame image.
5. The method of any one of claims 1-4, characterized in that the first feature information includes multiple first feature points in the current frame image and the two-dimensional coordinates of each first feature point; the second feature information includes multiple second feature points in the reference frame image and the two-dimensional coordinates of each second feature point;
the matching relationship includes multiple point pairs and the matching degree of each point pair; wherein a point pair consists of one first feature point and one second feature point, and the point pairs whose matching degree exceeds the preset matching threshold are the matching point pairs.
6. The method of claim 5, characterized in that determining the location parameter of the current frame image according to the matching relationship comprises:
iterating over the first feature information and the second feature information to compute at least one essential matrix;
determining a target essential matrix from the at least one essential matrix based on the matching relationship, the target essential matrix being the one, among the at least one essential matrix, whose constraint is satisfied by the largest number of matching point pairs;
decomposing the target essential matrix to obtain the location parameter of the current frame image, the location parameter including a rotation parameter and a displacement parameter.
7. The method of claim 6, characterized in that adjusting the location parameter of the current frame image according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship comprises:
obtaining the three-dimensional spatial position corresponding to the reference frame image, which includes the three-dimensional coordinates of the target second feature points in the matching point pairs that satisfy the target essential matrix;
adjusting the displacement parameter of the current frame image according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship, obtaining an adjusted displacement parameter;
wherein the adjusted location parameter includes the rotation parameter of the current frame image and the adjusted displacement parameter.
8. The method of claim 7, characterized in that adjusting the displacement parameter of the current frame image according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship, obtaining the adjusted displacement parameter, comprises:
determining the three-dimensional coordinates of the corresponding target first feature points according to the location parameter of the current frame image and the three-dimensional coordinates of the target second feature points;
converting the three-dimensional coordinates of the target first feature points into two-dimensional coordinates;
adjusting the displacement parameter of the current frame image according to the converted two-dimensional coordinates of the target first feature points and the two-dimensional coordinates of the target first feature points in the first feature information, obtaining the adjusted displacement parameter.
9. The method of claim 8, characterized in that adjusting the displacement parameter of the current frame image according to the converted two-dimensional coordinates of the target first feature points and the two-dimensional coordinates of the target first feature points in the first feature information, obtaining the adjusted displacement parameter, comprises:
determining the coordinate correspondence between the target second feature points and the target first feature points according to the converted two-dimensional coordinates of the target first feature points and the two-dimensional coordinates of the target first feature points in the first feature information;
computing the scale change value of the displacement parameter based on the coordinate correspondence;
applying the scale change value to the displacement parameter to perform the scale change, obtaining the adjusted displacement parameter.
10. The method according to claim 7, wherein determining the three-dimensional spatial position of the terminal according to the adjusted location parameter comprises:
obtaining a reference location parameter of the reference frame image, the reference location parameter comprising a reference rotation parameter and a reference displacement parameter of the reference frame image relative to the first frame image in the image set;
calculating a standard rotation parameter according to the rotation parameter of the current frame image and the reference rotation parameter, the standard rotation parameter being a rotation change parameter of the current frame image relative to the first frame image;
calculating a standard displacement parameter according to the adjusted displacement parameter and the reference displacement parameter, the standard displacement parameter being a displacement change parameter of the current frame image relative to the first frame image;
determining the three-dimensional spatial position of the terminal based on the standard rotation parameter and the standard displacement parameter.
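Claim 10 chains two relative poses: the current frame relative to the reference frame, and the reference frame relative to the first frame. A sketch of that composition; the `x = R·x_first + t` convention is an assumption, as the claims leave it unspecified:

```python
import numpy as np

def compose_pose(R_cur, t_cur, R_ref, t_ref):
    """Compose the current frame's pose relative to the reference frame
    (R_cur, t_cur) with the reference frame's pose relative to the first
    frame (R_ref, t_ref). Returns the 'standard' rotation and displacement
    of the current frame relative to the first frame."""
    R_std = R_cur @ R_ref            # standard rotation parameter
    t_std = R_cur @ t_ref + t_cur    # standard displacement parameter
    return R_std, t_std
```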
11. The method according to claim 1, further comprising:
performing service processing according to the three-dimensional spatial position of the terminal, the service processing comprising virtual image display processing and/or motion control processing.
12. An image-based repositioning apparatus, applied to a terminal, comprising:
an acquiring unit, configured to acquire a current frame image and a reference frame image from an image set, the image set comprising multiple frames of images collected by the terminal, the current frame image being the frame image with the latest acquisition time in the image set, and the reference frame image being a frame image in the image set whose acquisition time is earlier than that of the current frame image and which is similar to the current frame image;
an extraction unit, configured to extract first feature information from the current frame image, extract second feature information from the reference frame image, and obtain a matching relationship between the first feature information and the second feature information;
a determination unit, configured to determine a location parameter of the current frame image according to the matching relationship, the location parameter comprising a rotation parameter and a displacement parameter;
an adjustment unit, configured to adjust the location parameter of the current frame image according to the three-dimensional spatial position corresponding to the reference frame image and the matching relationship;
wherein the determination unit is further configured to determine the three-dimensional spatial position of the terminal according to the adjusted location parameter.
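The acquiring unit above selects the latest frame as the current frame and an earlier, similar frame as the reference frame. A toy sketch of that selection; the `similarity` score is a user-supplied stand-in (e.g. a descriptor or histogram match), which the claims do not specify:

```python
def pick_frames(image_set, similarity):
    """From an ordered image set (earliest to latest), pick the current
    frame (latest acquisition time) and a reference frame: the earlier
    frame most similar to the current one."""
    current = image_set[-1]
    reference = max(image_set[:-1], key=lambda frame: similarity(frame, current))
    return current, reference
```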
13. A terminal, comprising an input device and an output device, and further comprising:
a processor, adapted to implement one or more instructions; and
a computer storage medium, the computer storage medium storing one or more instructions, the one or more instructions being adapted to be loaded by the processor to execute the image-based repositioning method according to any one of claims 1 to 11.
14. A computer storage medium, the computer storage medium storing one or more instructions, the one or more instructions being adapted to be loaded by a processor to execute the image-based repositioning method according to any one of claims 1 to 11.
CN201811415796.1A 2018-11-23 2018-11-23 Image-based repositioning method, device, terminal and storage medium Active CN109544615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811415796.1A CN109544615B (en) 2018-11-23 2018-11-23 Image-based repositioning method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN109544615A true CN109544615A (en) 2019-03-29
CN109544615B CN109544615B (en) 2021-08-24

Family

ID=65849672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811415796.1A Active CN109544615B (en) 2018-11-23 2018-11-23 Image-based repositioning method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN109544615B (en)


Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160026253A1 (en) * 2014-03-11 2016-01-28 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US20170148223A1 (en) * 2014-10-31 2017-05-25 Fyusion, Inc. Real-time mobile device capture and generation of ar/vr content
CN107340870A (en) * 2017-07-13 2017-11-10 深圳市未来感知科技有限公司 A kind of fusion VR and AR virtual reality display system and its implementation
CN108364319A (en) * 2018-02-12 2018-08-03 腾讯科技(深圳)有限公司 Scale determines method, apparatus, storage medium and equipment
CN108427479A (en) * 2018-02-13 2018-08-21 腾讯科技(深圳)有限公司 Wearable device, the processing system of ambient image data, method and readable medium
CN108564617A (en) * 2018-03-22 2018-09-21 深圳岚锋创视网络科技有限公司 Three-dimensional rebuilding method, device, VR cameras and the panorama camera of more mesh cameras
CN108596976A (en) * 2018-04-27 2018-09-28 腾讯科技(深圳)有限公司 Method for relocating, device, equipment and the storage medium of camera posture tracing process
CN108615247A (en) * 2018-04-27 2018-10-02 深圳市腾讯计算机***有限公司 Method for relocating, device, equipment and the storage medium of camera posture tracing process
CN108648270A (en) * 2018-05-12 2018-10-12 西北工业大学 Unmanned plane real-time three-dimensional scene reconstruction method based on EG-SLAM
CN108648235A (en) * 2018-04-27 2018-10-12 腾讯科技(深圳)有限公司 Method for relocating, device and the storage medium of camera posture tracing process
CN108711166A (en) * 2018-04-12 2018-10-26 浙江工业大学 A kind of monocular camera Scale Estimation Method based on quadrotor drone
CN108734736A (en) * 2018-05-22 2018-11-02 腾讯科技(深圳)有限公司 Camera posture method for tracing, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QIN Baoling, "Research on Monocular Vision 3D Reconstruction Based on Optical Flow and Scene Flow", China Masters' Theses Full-text Database, Information Science and Technology Series *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110246163A (en) * 2019-05-17 2019-09-17 联想(上海)信息技术有限公司 Image processing method and its device, equipment, computer storage medium
CN110246163B (en) * 2019-05-17 2023-06-23 联想(上海)信息技术有限公司 Image processing method, image processing device, image processing apparatus, and computer storage medium
CN110310333A (en) * 2019-06-27 2019-10-08 Oppo广东移动通信有限公司 Localization method and electronic equipment, readable storage medium storing program for executing
CN112150548B (en) * 2019-06-28 2024-03-29 Oppo广东移动通信有限公司 Positioning method and device, terminal and storage medium
CN112148742A (en) * 2019-06-28 2020-12-29 Oppo广东移动通信有限公司 Map updating method and device, terminal and storage medium
CN112150548A (en) * 2019-06-28 2020-12-29 Oppo广东移动通信有限公司 Positioning method and device, terminal and storage medium
WO2020259360A1 (en) * 2019-06-28 2020-12-30 Oppo广东移动通信有限公司 Locating method and device, terminal, and storage medium
CN111699453A (en) * 2019-07-01 2020-09-22 深圳市大疆创新科技有限公司 Control method, device and equipment of movable platform and storage medium
WO2021000225A1 (en) * 2019-07-01 2021-01-07 深圳市大疆创新科技有限公司 Method and apparatus for controlling movable platform, and device and storage medium
CN110335317A (en) * 2019-07-02 2019-10-15 百度在线网络技术(北京)有限公司 Image processing method, device, equipment and medium based on terminal device positioning
CN112304322B (en) * 2019-07-26 2023-03-14 北京魔门塔科技有限公司 Restarting method after visual positioning failure and vehicle-mounted terminal
CN112304322A (en) * 2019-07-26 2021-02-02 北京初速度科技有限公司 Restarting method after visual positioning failure and vehicle-mounted terminal
CN112629546B (en) * 2019-10-08 2023-09-19 宁波吉利汽车研究开发有限公司 Position adjustment parameter determining method and device, electronic equipment and storage medium
CN112629546A (en) * 2019-10-08 2021-04-09 宁波吉利汽车研究开发有限公司 Position adjustment parameter determining method and device, electronic equipment and storage medium
CN113228105A (en) * 2019-12-30 2021-08-06 商汤国际私人有限公司 Image processing method and device and electronic equipment
CN113747039A (en) * 2020-05-27 2021-12-03 杭州海康威视数字技术股份有限公司 Image acquisition method and device
CN112066988A (en) * 2020-08-17 2020-12-11 联想(北京)有限公司 Positioning method and positioning equipment
CN112102475A (en) * 2020-09-04 2020-12-18 西北工业大学 Space target three-dimensional sparse reconstruction method based on image sequence trajectory tracking
CN112102475B (en) * 2020-09-04 2023-03-07 西北工业大学 Space target three-dimensional sparse reconstruction method based on image sequence trajectory tracking
CN113286076A (en) * 2021-04-09 2021-08-20 华为技术有限公司 Shooting method and related equipment
WO2022247548A1 (en) * 2021-05-27 2022-12-01 上海商汤智能科技有限公司 Positioning method, apparatus, electronic device, and storage medium
CN113630549B (en) * 2021-06-18 2023-07-14 北京旷视科技有限公司 Zoom control method, apparatus, electronic device, and computer-readable storage medium
CN113630549A (en) * 2021-06-18 2021-11-09 北京旷视科技有限公司 Zoom control method, device, electronic equipment and computer-readable storage medium
WO2023070441A1 (en) * 2021-10-28 2023-05-04 深圳市大疆创新科技有限公司 Movable platform positioning method and apparatus

Also Published As

Publication number Publication date
CN109544615B (en) 2021-08-24

Similar Documents

Publication Publication Date Title
CN109544615A (en) Method for relocating, device, terminal and storage medium based on image
CN107808407B (en) Binocular camera-based unmanned aerial vehicle vision SLAM method, unmanned aerial vehicle and storage medium
US11313684B2 (en) Collaborative navigation and mapping
CN107990899B (en) Positioning method and system based on SLAM
US8442307B1 (en) Appearance augmented 3-D point clouds for trajectory and camera localization
CN111046125A (en) Visual positioning method, system and computer readable storage medium
CN106529538A (en) Method and device for positioning aircraft
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
CN113674416A (en) Three-dimensional map construction method and device, electronic equipment and storage medium
CN111709973A (en) Target tracking method, device, equipment and storage medium
US11354923B2 (en) Human body recognition method and apparatus, and storage medium
CN112348887A (en) Terminal pose determining method and related device
Pandey et al. Efficient 6-dof tracking of handheld objects from an egocentric viewpoint
Zhou et al. Information-efficient 3-D visual SLAM for unstructured domains
KR20220058846A (en) Robot positioning method and apparatus, apparatus, storage medium
CN114913246B (en) Camera calibration method and device, electronic equipment and storage medium
CN114674328B (en) Map generation method, map generation device, electronic device, storage medium, and vehicle
CN112750164B (en) Lightweight positioning model construction method, positioning method and electronic equipment
US10447992B1 (en) Image processing method and system
CN116576866B (en) Navigation method and device
Amorós et al. Comparison of global-appearance techniques applied to visual map building and localization
CN113587916B (en) Real-time sparse vision odometer, navigation method and system
Butt et al. Multi-task Learning for Camera Calibration
Millane Scalable Dense Mapping Using Signed Distance Function Submaps
CN117292435A (en) Action recognition method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant