CN108958462A - Display method and apparatus for virtual objects - Google Patents

Display method and apparatus for virtual objects

Info

Publication number
CN108958462A
CN108958462A (application CN201710377499.1A)
Authority
CN
China
Prior art keywords
target object
image
current image
location information
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710377499.1A
Other languages
Chinese (zh)
Inventor
沈慧
陈永健
姜飞俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201710377499.1A
Priority to PCT/CN2018/086783 (published as WO2018214778A1)
Publication of CN108958462A

Classifications

    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06V10/757 Matching configurations of points or features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Processing Or Creating Images (AREA)
  • Navigation (AREA)

Abstract

The application discloses a display method and apparatus for virtual objects, for reducing positioning error when a virtual object is superimposed and displayed at the position of a target object in a current image. The method includes: obtaining images containing candidate target objects according to position information of a terminal at the time the current image is captured; performing image matching between the current image and the images containing candidate target objects to determine the actual position at which at least one target object is projected in the current image; determining corrected attitude information according to the theoretical position and the actual position at which the at least one target object is projected in the current image; and displaying the virtual object corresponding to the target object in the current image according to the position information of the terminal at the time the current image is captured, the position information of the target object, and the corrected attitude information.

Description

Display method and apparatus for virtual objects
Technical field
The present application relates to the field of computer technology, and in particular to a display method and apparatus for virtual objects.
Background
Augmented reality (AR) is a technology that uses a computer (terminal) to enhance the user's perception of the real world by "seamlessly" integrating the real world with a virtual world: virtual objects generated by the terminal, or virtual information about real objects, are displayed on target objects in images captured by the terminal (through an image capture device), so that the real and virtual worlds complement and are superimposed on each other, enhancing the real scene.
Fig. 1 is a schematic diagram of the ideal display of virtual objects in a current image captured by a terminal: the current image (solid lines) contains target objects, and virtual objects (dotted lines) are superimposed and displayed at the positions of those target objects. The prior art typically achieves the effect of Fig. 1 with a display method based on LBS (Location Based Service) and an IMU (Inertial Measurement Unit, a device that measures an object's three-axis attitude angles, or angular rates, and accelerations). Specifically, from the position information and attitude information of the terminal at the time the current image is captured, together with the position information of the target object, the theoretical position at which the target object is projected in the current image can be determined, and the virtual object is then superimposed and displayed at that theoretical position.
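The theoretical projection described above can be sketched in Python. This is an illustrative reconstruction, not code from the application: the Euler-angle convention (pitch about X, yaw about Y, roll about Z, composed as roll·yaw·pitch), the 3×3 intrinsics matrix P, and all coordinate values are assumptions.

```python
import math

def rotation_matrix(pitch, yaw, roll):
    """Rotation T(theta) from pitch (X axis), yaw (Y axis) and roll (Z axis).
    The composition order roll * yaw * pitch is one common convention, assumed here."""
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    cr, sr = math.cos(roll), math.sin(roll)
    rx = [[1, 0, 0], [0, cp, -sp], [0, sp, cp]]
    ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    rz = [[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]]
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return matmul(matmul(rz, ry), rx)

def project(P, theta, x_poi, x_p):
    """Theoretical pixel position of a target: P * T(theta) * (x_poi - x_p),
    followed by the perspective divide."""
    d = [x_poi[i] - x_p[i] for i in range(3)]  # target relative to terminal
    T = rotation_matrix(*theta)
    cam = [sum(T[i][k] * d[k] for k in range(3)) for i in range(3)]
    h = [sum(P[i][k] * cam[k] for k in range(3)) for i in range(3)]
    return (h[0] / h[2], h[1] / h[2])
```

With zero attitude and a target 10 m ahead and 1 m to the side, the point projects to the right of the image center; any error in `theta` or `x_p` shifts the result, which is exactly the deviation shown in Fig. 2.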
However, when displaying virtual objects, the prior art relies only on LBS and the IMU, and the position information and attitude information of the terminal are subject to error. This leads to the result shown in Fig. 2: the position at which the virtual object is displayed in the current image deviates from the actual position of the target object, degrading the display effect. Thus, when the prior art superimposes a virtual object at the position of a target object in the current image, a large positioning error is likely to occur.
Summary of the invention
The embodiments of the present application provide a display method for virtual objects, for reducing positioning error when a virtual object is superimposed and displayed at the position of a target object in a current image.
The embodiments of the present application further provide a display apparatus for virtual objects, for reducing positioning error when a virtual object is superimposed and displayed at the position of a target object in a current image.
To solve the above technical problem, the embodiments of the present application are implemented as follows.
The embodiments of the present application adopt the following technical solutions:
A display method for virtual objects, comprising:
obtaining images containing candidate target objects according to position information of a terminal at the time a current image is captured;
performing image matching between the current image and the images containing candidate target objects, and determining an actual position at which at least one target object is projected in the current image;
determining corrected attitude information according to a theoretical position and the actual position at which the at least one target object is projected in the current image;
displaying a virtual object corresponding to the target object in the current image according to the position information of the terminal at the time the current image is captured, position information of the target object, and the corrected attitude information.
Preferably, the theoretical position is determined according to the position information and pre-correction attitude information of the terminal at the time the current image is captured and the position information of the at least one target object.
Preferably, obtaining the images containing candidate target objects according to the position information of the terminal at the time the current image is captured specifically comprises:
obtaining the images containing candidate target objects according to the position information and pre-correction attitude information of the terminal at the time the current image is captured.
Preferably, determining the actual position of the at least one target object in the current image specifically comprises:
determining the actual positions at which at least two target objects are respectively projected in the current image;
and determining the corrected attitude information according to the theoretical position and actual position at which the at least one target object is projected in the current image then specifically comprises:
determining the corrected attitude information according to the theoretical positions and actual positions at which the at least two target objects are projected in the current image.
Preferably, the method is applied to a terminal equipped with an image capture device, in which case the method specifically comprises:
obtaining images containing candidate target objects according to position information of the terminal at the time a real-time image is captured;
performing image matching between the real-time image and the images containing candidate target objects, and determining an actual position at which at least one target object is projected in the real-time image;
determining corrected attitude information according to a theoretical position and the actual position at which the at least one target object is projected in the real-time image;
displaying a virtual object corresponding to the target object in the real-time image according to the position information of the terminal at the time the real-time image is captured, position information of the target object, and the corrected attitude information.
Preferably, the method further comprises:
determining changed attitude information according to the corrected attitude information and the pre-correction attitude information of the terminal at the time the real-time image is captured;
within a preset period, displaying the virtual object corresponding to the target object in the real-time image according to the changed attitude information, together with the position information and pre-correction attitude information at the time the real-time image is captured and the position information of the target object.
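The reuse of one correction within a preset period, as in the clause above, can be sketched as follows. This is a hypothetical illustration: the additive-delta model and all names are assumptions, since the application does not fix how the changed attitude information is applied.

```python
def attitude_delta(theta_corrected, theta_imu_at_fix):
    """Changed attitude information: difference between the corrected attitude
    and the pre-correction (IMU) attitude at the moment of correction."""
    return tuple(c - i for c, i in zip(theta_corrected, theta_imu_at_fix))

def corrected_attitude(theta_imu_now, delta):
    """Within the preset period, apply the stored delta to each new IMU
    reading instead of re-running image matching."""
    return tuple(t + d for t, d in zip(theta_imu_now, delta))
```

Re-running image matching on every frame would be costly; caching the delta lets subsequent frames be displayed from IMU readings alone until the period expires.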
A display apparatus for virtual objects, comprising an obtaining unit, a matching unit, a determining unit and a display unit, wherein:
the obtaining unit is configured to obtain images containing candidate target objects according to position information of a terminal at the time a current image is captured;
the matching unit is configured to perform image matching between the current image and the images containing candidate target objects, and to determine an actual position at which at least one target object is projected in the current image;
the determining unit is configured to determine corrected attitude information according to a theoretical position and the actual position at which the at least one target object is projected in the current image;
the display unit is configured to display a virtual object corresponding to the target object in the current image according to the position information of the terminal at the time the current image is captured, position information of the target object, and the corrected attitude information.
Preferably, the determining unit is further configured to:
determine the theoretical position according to the position information and pre-correction attitude information of the terminal at the time the current image is captured and the position information of the at least one target object.
Preferably, the obtaining unit is specifically configured to:
obtain the images containing candidate target objects according to the position information and pre-correction attitude information of the terminal at the time the current image is captured.
Preferably, the matching unit is specifically configured to:
determine the actual positions at which at least two target objects are respectively projected in the current image;
and the determining unit is then specifically configured to:
determine the corrected attitude information according to the theoretical positions and actual positions at which the at least two target objects are projected in the current image.
Preferably, the apparatus is applied to a terminal equipped with an image capture device, in which case:
the obtaining unit is specifically configured to obtain images containing candidate target objects according to position information of the terminal at the time a real-time image is captured;
the matching unit is specifically configured to perform image matching between the real-time image and the images containing candidate target objects, and to determine an actual position at which at least one target object is projected in the real-time image;
the determining unit is specifically configured to determine corrected attitude information according to a theoretical position and the actual position at which the at least one target object is projected in the real-time image;
the display unit is specifically configured to display a virtual object corresponding to the target object in the real-time image according to the position information of the terminal at the time the real-time image is captured, position information of the target object, and the corrected attitude information.
Preferably,
the determining unit is further configured to determine changed attitude information according to the corrected attitude information and the pre-correction attitude information of the terminal at the time the real-time image is captured;
the display unit is further configured, within a preset period, to display the virtual object corresponding to the target object in the real-time image according to the changed attitude information, together with the position information and pre-correction attitude information at the time the real-time image is captured and the position information of the target object.
A display method for virtual objects, comprising:
obtaining images containing candidate target objects according to position information of a terminal at the time a current image is captured;
performing image matching between the current image and the images containing candidate target objects, and determining an actual position at which at least one target object is projected in the current image;
displaying a virtual object corresponding to the at least one target object in the current image according to the actual position.
A display method for virtual objects, comprising:
obtaining images containing candidate target objects according to position information of a terminal at the time an image is photographed;
performing image matching between the photographed image and the images containing candidate target objects, and determining an actual position at which at least one target object is projected in the photographed image;
displaying a virtual object corresponding to the at least one target object in the photographed image according to the actual position.
A display method for virtual objects, comprising:
obtaining images containing candidate target objects according to position information of a terminal at the time a current image is captured;
performing image matching between the current image and the images containing candidate target objects, and determining a first position at which at least one target object is projected in the current image;
determining corrected attitude information according to the first position and a second position at which the at least one target object is projected in the current image;
displaying a virtual object corresponding to the target object in the current image according to the position information of the terminal at the time the current image is captured, position information of the target object, and the corrected attitude information.
As can be seen from the technical solutions provided by the above embodiments, the embodiments of the present application obtain images containing candidate target objects according to the position information of the terminal at the time the current image is captured, perform image matching between the current image and the images containing candidate target objects, determine the actual position at which at least one target object is projected in the current image, determine corrected attitude information from that actual position together with the theoretical position at which the at least one target object is projected in the current image, and finally display the virtual object corresponding to the target object in the current image according to the corrected attitude information. Compared with the prior art, which displays according to LBS and IMU alone, this solution adds attitude-correction information on top of LBS and IMU, correcting their errors as far as possible. It thereby alleviates, to a certain extent, the prior-art problem that a large positioning error easily occurs when a virtual object is superimposed and displayed at the position of a target object in the current image, achieving the effect of reducing positioning error.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description cover only some embodiments recorded in this application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the ideal display of virtual objects in a current image captured by a terminal;
Fig. 2 is a schematic diagram of the error that occurs when the prior art displays virtual objects in a current image captured by a terminal;
Fig. 3 is a flow diagram of a display method for virtual objects provided by Embodiment 1 of the present application;
Fig. 4 is a schematic diagram of the image matching provided by Embodiment 1 of the present application;
Fig. 5 is a flow diagram of a display method for virtual objects provided by Embodiment 2 of the present application;
Fig. 6 is a schematic diagram of a display method for virtual objects provided by Embodiment 2 of the present application;
Fig. 7 is a structural diagram of a display apparatus for virtual objects provided by Embodiment 3 of the present application.
Detailed description of the embodiments
To make the purposes, technical solutions and advantages of the present application clearer, the technical solutions of the present application are described clearly and completely below with reference to specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the drawings.
Embodiment 1
As mentioned above, when the prior art displays virtual objects (the annotations in the figure) based on LBS and the IMU, as shown in Fig. 2, the position information and attitude information of the terminal are subject to error.
LBS can be used to determine the position information of the terminal, i.e. its geographical position. Outdoors, positioning can be performed by GPS (Global Positioning System), Wi-Fi (Wireless Fidelity) and base stations, individually or in combination; indoors, positioning can be performed by Bluetooth or Wi-Fi. In practical applications, a large positioning error, such as 0.5 meters or even more, may occur when the terminal captures an image.
The IMU can be used to determine the attitude information of the terminal from its deflection angles about each axis in three-dimensional space: pitch is rotation about the X axis, also called the pitch angle; yaw is rotation about the Y axis, also called the yaw angle; roll is rotation about the Z axis, also called the roll angle. Gyroscopes and accelerometers are the main elements of an IMU, and their precision directly affects the precision of the inertial system. In actual operation, unavoidable disturbances cause the gyroscope and accelerometer to produce errors.
Theoretically, if LBS and the IMU were error-free, the position at which a target object is projected in the current image captured by the terminal could be determined accurately from the position information and attitude information of the terminal and the position information of the target object. As shown in Fig. 1, the target objects are the real images of "XX university" and "XX stadium", whose position information may include longitude, latitude and altitude. When the terminal captures an image, the theoretical positions at which the target objects are projected in the captured image can be determined from the terminal's position information (including longitude, latitude and altitude) and attitude information, and the virtual objects preset for the target objects can then be superimposed and displayed at those theoretical positions. But precisely because LBS and the IMU inevitably contain errors, sometimes aggravated by signal occlusion or the magnetic field at the geographic location, the result shown in Fig. 2 arises. The prior art is therefore prone to large positioning errors when superimposing virtual objects at the positions of target objects in the current image. In view of these drawbacks, the embodiments of the present application provide a display method for virtual objects, for reducing positioning error when a virtual object is superimposed and displayed at the position of a target object in a current image. The flow of the method is shown in Fig. 3 and comprises the following steps:
Step 11: obtain images containing candidate target objects according to the position information of the terminal at the time the current image is captured.
From the analysis above, the reason why, during AR display, the display position of the virtual object can differ considerably from the display position of the projected target object in the image is mainly error. The core of this method is therefore to first determine the amount of error, so that the AR display can be corrected. In this step, images containing candidate target objects can first be obtained according to the position information of the terminal at the time the current image is captured. It should be noted that "projection" here can refer to the operation in the image-capture process, with or without taking a photograph: without taking a photograph, the image capture device is turned on and images are acquired in real time; taking a photograph stores a real-time image in the terminal's memory.
Each virtual object can be pre-generated from its target object, such as the annotations for "XX university" or "XX stadium" in Fig. 1 and Fig. 2, which are virtual objects pre-generated for the entities "XX university" and "XX stadium". Target objects can be distributed over different geographic locations, e.g. "XX university", "XX hospital", "XX building". To determine the actual position of a target object in the current image, it can first be determined whether the current image contains a target object at all; specifically, images containing target objects can be obtained for image matching.
In practical applications, a large number of images containing various target objects can be pre-stored. For example, the currently popular POI (point of interest) is one kind of target object; POIs cover various entities such as schools, hospitals and stadiums (with information including longitude, latitude, altitude and multiple images), and images containing POIs can be backed up and displayed in maps or in AR. Since there are very many target objects, the number of images containing them is also enormous, so the images can be obtained according to the position information of the terminal at the time the current image is captured, for example within a preset range (such as a radius of 5000 m centered on the terminal's position when the current image is captured). Since not every target object in the obtained images necessarily appears in the current image, they can be called images containing candidate target objects.
Since not every candidate target object in those images necessarily appears in the current image, reducing the number of obtained images containing candidate target objects reduces wasted computing resources. In one embodiment, to reduce this waste, this step may comprise: obtaining the images containing candidate target objects according to the position information and pre-correction attitude information of the terminal at the time the current image is captured. Specifically, when the terminal captures the current image, the target objects that may be projected in the current image can be determined from the terminal's position information and pre-correction attitude information, so the images containing those candidate target objects (the ones that may appear in the current image) can be obtained accordingly. For example, if, according to the position information and pre-correction attitude information of the terminal at the time the current image is captured, three target objects may be projected in the current image, the images containing those three candidate target objects can be obtained. Similarly, to reduce the probability of missing a target object, the selection of candidate target objects can include, and exceed, the determined set of possible target objects.
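A radius-based retrieval of candidate POI images, as described above, might look like the following sketch. The POI record layout, the 5000 m default, and the spherical-Earth (haversine) distance are assumptions for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters on a spherical Earth (R = 6371 km)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def candidate_pois(terminal, pois, radius_m=5000.0):
    """Keep only the POIs within radius_m of the terminal position; their
    pre-stored images become the images containing candidate target objects."""
    lat, lon = terminal
    return [p for p in pois if haversine_m(lat, lon, p["lat"], p["lon"]) <= radius_m]
```

A further filter by the pre-correction attitude (the terminal's viewing direction) would then narrow this set, as the embodiment suggests.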
Step 12: perform image matching between the current image and the images containing candidate target objects, and determine the actual position at which at least one target object is projected in the current image.
As introduced above, the images containing candidate target objects are obtained in order to perform image matching, so this step can match the current image against those images.
Specifically, image matching can be carried out as in the following example:
First, there are n image templates, which are the obtained images containing candidate target objects, and one input picture, which is the current image. The process of matching a region of the current image against a region of an image template is as follows:
characterize the image template, for example with ORB (Oriented FAST and Rotated BRIEF) features; after characterization, several feature positions and the feature vectors at those positions can be determined;
characterize the current image and match it, for example using a random sample consensus (RANSAC) algorithm, taking matches whose matching degree exceeds a preset threshold as candidate matching results;
filter according to angle, feature-position data, scaling and the like, and finally determine the matching result, i.e. whether the target object is present; if it is present, determine its position in the current image.
Fig. 4 is a schematic diagram of this image matching: the left picture can be the current image, and the right picture can be a pre-stored image containing a candidate target object ("XX university"). From the image-matching result, the actual position at which the target object is projected in the current image can be determined.
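The descriptor-matching stage of the template matching above can be sketched with Hamming distance over ORB-style binary descriptors. This is a deliberately simplified stand-in: real ORB descriptors are 256-bit, the descriptors here are small Python ints, and the filtering by angle, feature positions and scaling described above would follow as a separate geometric-verification step (e.g. with RANSAC).

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors given as ints."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(query, template, max_dist=10):
    """Brute-force nearest-neighbour matching of binary descriptors.
    query/template: lists of (keypoint, descriptor-int) pairs. Returns index
    pairs whose best distance is at most max_dist (a stand-in for the
    matching-degree threshold mentioned above)."""
    matches = []
    for qi, (_, qd) in enumerate(query):
        best, best_ti = None, None
        for ti, (_, td) in enumerate(template):
            d = hamming(qd, td)
            if best is None or d < best:
                best, best_ti = d, ti
        if best is not None and best <= max_dist:
            matches.append((qi, best_ti))
    return matches
```

From the matched keypoint pairs, the region of the template (and hence the target object) can be located in the current image.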
In practical applications, when the images containing candidate target objects are obtained according to the position information of the current image, images containing multiple candidate target objects may be retrieved. For example, if the current image is that of Fig. 1 or Fig. 2, then according to the position information at the time Fig. 1 or Fig. 2 was captured, pre-stored images containing "XX university" alone and "XX stadium" alone may both be retrieved and used as image templates; after matching against each template in turn, the actual positions of "XX university" and "XX stadium" in the current image can be determined. So in one embodiment, this step may comprise: determining the actual positions at which at least two target objects are respectively projected in the current image.
Since the machine representation in an application program draws no distinction between "theoretical" and "actual", in practical applications the actual position at which the at least one target object is projected in the current image can also be defined as a first position; that is, this step may comprise: performing image matching between the current image and the images containing candidate target objects, and determining the first position at which at least one target object is projected in the current image.
Step 13: determine the corrected attitude information according to the theoretical position and actual position at which the at least one target object is projected in the current image.
After step 12 has determined the actual position at which the at least one target object is projected in the current image, this step can determine the corrected attitude information from that position together with the theoretical position at which the at least one target object is projected in the current image, i.e. decide, from theory and reality, how the attitude should be corrected. The theoretical position at which a target object is projected in the current image can be determined from the position information and pre-correction attitude information of the terminal at the time the current image is captured and the position information of the at least one target object. Specifically, it can be determined by the following projection equation:
Y = P × T(θ_t) × (X_poi − X_p)
where Y is the position of the target object in the image, X_poi is the location information of the target object, X_p is the location information of the terminal when the current image was acquired, T is the rotation matrix, θ stands for the pitch, yaw and roll in the posture information, so that T is a function of pitch, yaw and roll, written T(θ), and θ_t is the corrected posture information. P is the projection matrix, a basic parameter of the image acquisition device, whose specific form can be obtained by calibrating the image acquisition device.
In the projection matrix equation above, X_poi, X_p and P are known. When θ_t is set to θ_imu, the pre-correction posture information of the terminal when the current image was acquired, Y is the theoretical position at which the target object is projected in the current image. When the image matching in step 12 determines the actual position at which the target object is projected in the current image, Y is that actual position, and the corrected posture information θ_t can then be determined.
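The forward direction of this equation — computing Y from a candidate θ — can be sketched as follows. The pinhole intrinsics P and the pitch/yaw/roll composition order are illustrative assumptions; the patent only states that P comes from calibration and that T is a function of the three angles.

```python
import numpy as np

def rotation(pitch, yaw, roll):
    """Rotation matrix T(θ) built from pitch/yaw/roll (radians).
    The Z·Y·X composition order is an assumption."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cz, sz = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(P, theta, x_poi, x_p):
    """Y = P · T(θ) · (X_poi − X_p), followed by the usual perspective
    divide to obtain pixel coordinates (u, v)."""
    cam = rotation(*theta) @ (np.asarray(x_poi) - np.asarray(x_p))
    u, v, w = P @ cam
    return np.array([u / w, v / w])

# Hypothetical intrinsics: focal length 1000 px, principal point (960, 540).
P = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
theta = (0.0, 0.0, 0.0)  # identity attitude
print(project(P, theta, [1.0, 0.5, 10.0], [0.0, 0.0, 0.0]))  # pixel (1060, 590)
```

With θ = θ_imu this yields the theoretical position; with Y fixed to the matched actual position, the same relation is solved for θ_t as described next.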
In the equation, every quantity except θ_t is known, so it can be written as θ_t = f(Y), i.e. θ_t is a function of Y. Since f(Y) is a nonlinear equation that cannot be solved directly, θ_imu is used as the initial value, the Jacobian matrix is formed from partial derivatives, and the solution is refined iteratively until θ_t is obtained.
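The iterative solution just described can be sketched as a Gauss-Newton loop with a finite-difference Jacobian, initialized at θ_imu. The residual model f below is a toy stand-in for the full projection equation and is purely illustrative.

```python
import numpy as np

def gauss_newton(f, theta0, y_obs, iters=20, eps=1e-6):
    """Solve f(θ) ≈ y_obs for θ, starting from the IMU reading θ0.
    The Jacobian is taken numerically with forward differences; the
    patent only says partial derivatives form the Jacobian and the
    solution is iterated, so this concrete scheme is one possible reading."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        r = y_obs - f(theta)                      # residual
        J = np.empty((r.size, theta.size))
        for j in range(theta.size):
            d = np.zeros_like(theta)
            d[j] = eps
            J[:, j] = (f(theta + d) - f(theta)) / eps
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        theta = theta + step
        if np.linalg.norm(step) < 1e-10:
            break
    return theta

# Toy model: pixel position as a nonlinear function of two attitude angles.
f = lambda t: np.array([1000 * np.sin(t[0]) + 960, 1000 * np.sin(t[1]) + 540])
y_actual = np.array([1060.0, 590.0])  # matched actual position from step 12
theta_imu = np.array([0.0, 0.0])      # uncorrected IMU attitude as initial value
theta_t = gauss_newton(f, theta_imu, y_actual)
print(f(theta_t))  # converges toward y_actual
```

The converged θ_t is the corrected posture information used in the display step.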
The θ_t determined in this way is the corrected posture information. In practical applications, the corrected posture information is not limited to the above manner of determination; for example, an improved form of the equation may be used:
Y = P × T(θ_t) × (X_poi − X_p) × α
where α can be a correction factor for different terminals, determined through repeated testing.
Furthermore, the position error and the posture error may be summarized together in the corrected posture information, as described in this step; alternatively, the position error and the posture error may be determined separately, i.e. as corrected location information and corrected posture information.
As introduced in the previous step, in practical applications the actual positions at which at least two target objects are respectively projected in the current image can be determined, so this step may include: determining the corrected posture information according to the theoretical positions and actual positions at which the at least two target objects are projected in the current image.
Specifically, corrected posture information may first be determined separately from each of the at least two target objects; the final corrected posture information may then be determined as the average of the resulting corrected postures, as a weighted average (for example, weighted by each object's distance from the image centre), and so on.
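A minimal sketch of this fusion step, assuming inverse-distance-from-centre weighting — one possible reading of the parenthetical above; a plain mean is the other option mentioned:

```python
import numpy as np

def fuse_poses(poses, pixel_positions, image_center=(960, 540)):
    """Combine per-object corrected attitudes into one final θ_t.
    Objects nearer the image centre get larger weights; the exact
    weighting scheme is an assumption, not specified by the text."""
    d = [np.linalg.norm(np.subtract(p, image_center)) for p in pixel_positions]
    w = 1.0 / (np.asarray(d) + 1e-9)  # inverse distance, guarded against 0
    w /= w.sum()
    return np.average(np.asarray(poses, dtype=float), axis=0, weights=w)

poses = [(0.10, 0.00, 0.00), (0.20, 0.00, 0.00)]  # two per-object θ_t estimates
pix = [(960, 540), (10, 10)]  # first object sits at the image centre
print(fuse_poses(poses, pix))  # heavily favours the first pose
```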
In an actual application program, the theoretical position at which the at least one target object is projected in the current image may also be defined as the second position; as introduced above, the actual position may be defined as the first position. This step may therefore also include: determining the corrected posture information according to the first position and the second position at which the at least one target object is projected in the current image.
Step 14: displaying the virtual object corresponding to the target object in the current image according to the location information of the terminal when the current image was acquired, the location information of the target object, and the corrected posture information.
The corrected posture information determined in the previous step is the correct posture information that the IMU should have reported when the current image was acquired. Accordingly, the position of each target object in the acquired current image can be determined from the location information of the terminal when the current image was acquired, the location information of the target object, and the corrected posture information, and the virtual object corresponding to the target object can be displayed in the current image. The description above determines the corrected posture from at least one, or at least two, target objects; the target objects in this step may be the same as those in the preceding two steps, but there are often more of them: the target objects in the preceding two steps serve only to determine the corrected posture, whereas the target objects in this step realize the AR effect. For example, step 12 may determine the actual position of "XX university", step 13 may determine the corrected posture information according to the theoretical position of "XX university", and this step may then determine, according to the corrected posture information, the location information of the terminal and the location information of the target objects, that the current image contains target objects such as "XX university", "XX hospital" and "XX building", and display the corresponding virtual objects in the current image, achieving the effect of Fig. 1, in which each virtual object is superimposed on its corresponding target object.
In fact, steps 13 and 14 are designed for continuous acquisition during real-time image capture. Considering that during real-time capture the location information and posture information of the terminal may change only slightly, in one embodiment, after step 12 the present embodiment may also include: displaying the virtual object corresponding to the at least one target object in the current image according to the actual position; that is, once the actual position at which the at least one target object is projected in the current image has been determined, the virtual object can be displayed directly on the basis of that actual position.
With the method provided by embodiment 1, images containing candidate target objects are obtained according to the location information of the terminal when the current image is acquired; image matching is performed between the current image and the images containing candidate target objects to determine the actual position at which at least one target object is projected in the current image; corrected posture information is determined according to the theoretical position and the actual position at which the at least one target object is projected in the current image; and finally the virtual objects corresponding to the target objects in the current image are displayed according to the corrected posture information. Compared with the prior art, which displays according to LBS and IMU only, this scheme adds attitude-correction information on the basis of LBS and IMU and corrects the errors of LBS and IMU as far as possible, thereby solving, to a certain extent, the problem that the prior art is prone to large position errors when superimposing virtual objects on the target object positions of the current image, and achieving the effect of reducing position errors.
Embodiment 2
AR has three characteristic features: the real world and the virtual world are integrated; there is real-time interaction; and virtual objects are added in three dimensions. In practical applications, real-time interaction is an extremely important feature of AR. With the development of terminals, processing functions and image acquisition functions are easily integrated into a single device; smartphones, tablet computers, digital cameras and the like are all terminals equipped with image acquisition devices, and AR is most commonly applied on such terminals, realizing the augmented reality effect on the real-time images they acquire. However, the prior art displays based only on LBS and IMU and, as described above, is prone to large position errors. Therefore, based on the same inventive thinking as embodiment 1, and as an extension of embodiment 1, the embodiment of the present application provides a method for displaying virtual objects that superimposes virtual objects on a real-time image while reducing the position error between virtual objects and target objects. The method can be applied to a terminal equipped with an image acquisition device; its process is shown in Fig. 5 and includes the following steps:
Step 21: obtaining images containing candidate target objects according to the location information when the terminal acquires the real-time image.
For example, the terminal may be a smartphone equipped with an image acquisition device: a camera assembled on the back of the smartphone, with a display screen assembled on the front. When the camera is opened, real-time images can be acquired, and images containing candidate target objects can be obtained according to the location information at the time of acquisition. For example, the location information may include longitude and latitude N, E and altitude H; candidate target objects may then be searched within a radius of 1000 m centred on (N, E), yielding candidate target object "1", candidate target object "2", ..., candidate target object "n", and so on, and the images containing these candidate target objects are obtained. In practical applications, a photograph may also be taken while acquiring the real-time image, so this step may include: obtaining images containing candidate target objects according to the location information of the terminal when the photographed image was taken.
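The 1000 m radius search can be sketched with a haversine great-circle filter over stored POI coordinates. The POI names and coordinates below are hypothetical; the patent does not specify the distance formula, only the radius.

```python
import math

def candidates_within(lat, lon, pois, radius_m=1000.0):
    """Return the POIs whose stored coordinates fall within radius_m of
    the terminal position, using the haversine great-circle distance.
    The 1000 m radius mirrors the example in the text."""
    def haversine(lat1, lon1, lat2, lon2):
        R = 6371000.0  # mean Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * R * math.asin(math.sqrt(a))
    return [name for name, (plat, plon) in pois.items()
            if haversine(lat, lon, plat, plon) <= radius_m]

# Hypothetical POI store: "object 2" lies roughly 2.2 km north of the query.
pois = {"object 1": (31.2304, 121.4737),
        "object 2": (31.2504, 121.4737)}
print(candidates_within(31.2304, 121.4737, pois))  # → ['object 1']
```

The images associated with the surviving candidates are then fetched for the matching step.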
Step 22: performing image matching between the real-time image and the images containing candidate target objects, and determining the actual position at which at least one target object is projected in the real-time image.
In this step, the real-time image can be matched against each of the images containing the candidate target objects above, and from the matching result the actual position at which at least one target object, for example target object "1", is projected in the real-time image is determined. For example, if the resolution of the phone screen is 1920 × 1080, the actual position at which target object "1" is projected in the real-time image is a coordinate such as (100, 500).
In practical applications, the actual positions at which multiple target objects (such as target object "1", target object "2" and target object "3") are respectively projected in the real-time image may also be determined.
As mentioned in the previous step, acquiring the real-time image may consist of taking a photograph, in which case this step may include: performing image matching between the photographed image and the images containing candidate target objects, and determining the actual position at which at least one target object is projected in the photographed image.
Step 23: determining corrected posture information according to the theoretical position and the actual position at which the at least one target object is projected in the real-time image.
Similarly to embodiment 1, the theoretical position at which the target object is projected in the real-time image can be determined from the location information of the terminal when the real-time image was acquired, the pre-correction posture information, and the location information of the at least one target object. For example, the location information when the real-time image is acquired includes longitude and latitude N, E and altitude H; the posture information is the function T(θ_imu) of the real-time pitch, yaw and roll; and the location information of target object "1" includes longitude and latitude N₁, E₁ and altitude H₁. The corrected posture information can then be determined from the following projection matrix equation:
Y = P × T(θ_t) × (X_poi − X_p)
Here Y is (100, 500), P is the projection matrix parameter of the camera, the longitude and latitude N, E and altitude H represent X_p, and the longitude and latitude N₁, E₁ and altitude H₁ represent X_poi. With θ_imu as the initial value, the Jacobian matrix is formed from partial derivatives and the solution is refined iteratively until θ_t is obtained.
If step 22 determines the actual positions at which multiple target objects are respectively projected in the real-time image, this step may determine multiple corrected posture informations, one from each target object, and then determine the final corrected posture information θ_t from their average or weighted average.
Step 24: displaying the virtual object corresponding to the target object in the real-time image according to the location information of the terminal when the real-time image was acquired, the location information of the target object, and the corrected posture information.
After the corrected posture information θ_t is determined, according to the projection matrix equation:
Y = P × T(θ_t) × (X_poi − X_p)
the positions in the real-time image of all target objects that have virtual objects and are projected into the real-time image can be determined, and the virtual objects corresponding to those target objects can be displayed in the real-time image. Specifically, the candidate target objects with virtual objects that would be projected into the real-time image can be determined from the longitude and latitude N, E and altitude H of the phone, the longitude and latitude N₁, E₁ and altitude H₁ of target object "1", ..., and the longitude and latitude N_n, E_n and altitude H_n of target object "n". The position at which each candidate target object is projected on the phone screen is then determined, candidate target objects beyond the display range of the phone screen are filtered out, and, once the target objects finally projected into the real-time image and their projected positions are determined, the virtual objects corresponding to these target objects are displayed as AR at the positions of the target objects.
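The screen-range filtering described above can be sketched as follows; the object names and pixel coordinates are hypothetical, and the 1920 × 1080 resolution matches the example earlier in this embodiment.

```python
def visible_on_screen(projections, width=1920, height=1080):
    """Keep only target objects whose projected pixel position lands
    inside the phone screen, mirroring the filtering step described
    above. `projections` maps object name → (u, v) pixel coordinates."""
    return {name: (u, v) for name, (u, v) in projections.items()
            if 0 <= u < width and 0 <= v < height}

projections = {"object 1": (100, 500),   # on screen
               "object 2": (2500, 300),  # off the right edge
               "object n": (-40, 900)}   # off the left edge
print(visible_on_screen(projections))    # → {'object 1': (100, 500)}
```

Only the surviving objects have their virtual objects rendered at the projected positions.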
Fig. 6 is a schematic diagram of the method for displaying virtual objects. The POI content may include the location information and images of the target objects. The phone can first determine the location information at the time the real-time image is acquired according to LBS, and obtain the images containing candidate POIs from the POI content according to that location information. The real-time image acquired by the phone is matched against the images containing the candidate POIs to obtain a matching result, i.e. the actual position at which at least one POI is projected in the real-time image. The theoretical position at which the at least one POI is projected in the real-time image is determined from the location information when the phone acquired the real-time image and the pre-correction posture information, and the corrected posture information is determined from the theoretical position and the actual position. Finally, according to the corrected posture information and the location information when the real-time image was acquired, the target objects finally projected into the real-time image (the POIs whose corresponding virtual objects need to be displayed in the real-time image) and their projected positions are determined, the virtual objects corresponding to the POIs are displayed in the real-time image, and the AR display is complete.
In practical applications, real-time image acquisition is usually uninterrupted; for example, a real-time image may be acquired every second, or every 2 seconds. If corrected posture information were determined for every acquired real-time image, the consumption of computing resources would be large, so in one embodiment the method may also include the following steps:
Step 25: determining changed posture information according to the corrected posture information and the pre-correction posture information when the terminal acquired the real-time image.
The θ_t determined above can be understood as the correct posture information that should have been reported by the IMU, i.e. the posture combined from the three rotation angles. Therefore, given the original posture information θ_imu reported when the phone acquired the real-time image before correction, the changed posture information can be determined as:
Δθ = θ_t − θ_imu
Step 26: within a preset period, displaying the virtual object corresponding to the target object in the real-time image according to the changed posture information, the location information and pre-correction posture information when the real-time image is acquired, and the location information of the target object.
In this step, the preset period can be a period during which the phone continuously acquires real-time images, for example 3 seconds, on the assumption that the user usually holds an approximately constant posture while performing AR display. Within these 3 seconds, after Δθ is determined at the first acquisition, all subsequently acquired real-time images use Δθ + θ_imu as the corrected posture information for AR display, saving computing resources.
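A sketch of caching Δθ for the preset period: the class below stores the offset once and re-applies it to raw IMU readings until the window (3 s, per the example above) expires. The class name and interface are illustrative.

```python
class AttitudeCorrector:
    """Cache Δθ = θ_t − θ_imu and re-apply it to raw IMU readings for a
    preset window instead of re-solving the projection equation for
    every acquired frame."""
    def __init__(self, window_s=3.0):
        self.window_s = window_s
        self.delta = None
        self.stamp = -float("inf")

    def update(self, theta_t, theta_imu, now):
        """Record Δθ from a full correction performed at time `now`."""
        self.delta = tuple(t - i for t, i in zip(theta_t, theta_imu))
        self.stamp = now

    def corrected(self, theta_imu, now):
        """Return θ_imu + Δθ while the window is valid, else None,
        signalling that the full correction must be re-run."""
        if self.delta is None or now - self.stamp > self.window_s:
            return None
        return tuple(i + d for i, d in zip(theta_imu, self.delta))

c = AttitudeCorrector(window_s=3.0)
c.update(theta_t=(0.1875, 0.0, 0.0), theta_imu=(0.125, 0.0, 0.0), now=0.0)
print(c.corrected((0.25, 0.0, 0.0), now=1.0))  # → (0.3125, 0.0, 0.0)
print(c.corrected((0.25, 0.0, 0.0), now=5.0))  # → None, window expired
```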
In practical applications, virtual objects can also be displayed on the image after it is photographed. Step 22 has already introduced that the actual position at which at least one target object is projected in the photographed image can be determined; if continuous acquisition need not be considered, the virtual object corresponding to the at least one target object can be superimposed directly on the photographed image, eliminating the determination of corrected posture information and the subsequent steps.
With the method provided by embodiment 2, images containing candidate target objects are obtained according to the location information of the terminal when the real-time image is acquired; image matching is performed between the real-time image and the images containing candidate target objects to determine the actual position at which at least one target object is projected in the real-time image; corrected posture information is determined according to the theoretical position and the actual position at which the at least one target object is projected in the real-time image; and finally the virtual objects corresponding to the target objects in the real-time image are displayed according to the corrected posture information. Compared with the prior art, which displays according to LBS and IMU only, this scheme adds attitude-correction information on the basis of LBS and IMU and corrects the errors of LBS and IMU as far as possible, thereby solving, to a certain extent, the problem that the prior art is prone to large position errors when superimposing virtual objects on the target object positions of the real-time image, and achieving the effect of reducing position errors.
Embodiment 3
Based on the same inventive concept, embodiment 3 provides a device for displaying virtual objects, used for superimposing virtual objects on the target object positions of the current image while reducing position errors. Fig. 7 is a structural diagram of the device, which includes: an acquiring unit 31, a matching unit 32, a determination unit 33 and a display unit 34, wherein
the acquiring unit 31 can be used to obtain images containing candidate target objects according to the location information of the terminal when the current image is acquired;

the matching unit 32 can be used to perform image matching between the current image and the images containing candidate target objects, and to determine the actual position at which at least one target object is projected in the current image;

the determination unit 33 can be used to determine corrected posture information according to the theoretical position and the actual position at which the at least one target object is projected in the current image, the theoretical position being determined from the location information of the terminal when the current image is acquired, the pre-correction posture information, and the location information of the at least one target object;

the display unit 34 can be used to display the virtual object corresponding to the target object in the current image according to the location information of the terminal when the current image is acquired, the location information of the target object, and the corrected posture information.
In one embodiment, the acquiring unit 31 is specifically used to:

obtain images containing candidate target objects according to the location information of the terminal when the current image is acquired and the pre-correction posture information.

In one embodiment, the matching unit 32 is specifically used to:

determine the actual positions at which at least two target objects are respectively projected in the current image;

and the determination unit 33 is then specifically used to:

determine corrected posture information according to the theoretical positions and actual positions at which the at least two target objects are projected in the current image.
In one embodiment, the device is applied to a terminal equipped with an image acquisition device, in which case:

the acquiring unit 31 is specifically used to obtain images containing candidate target objects according to the location information when the terminal acquires the real-time image;

the matching unit 32 is specifically used to perform image matching between the real-time image and the images containing candidate target objects, and to determine the actual position at which at least one target object is projected in the real-time image;

the determination unit 33 is specifically used to determine corrected posture information according to the theoretical position and the actual position at which the at least one target object is projected in the real-time image;

the display unit 34 is specifically used to display the virtual object corresponding to the target object in the real-time image according to the location information of the terminal when the real-time image is acquired, the location information of the target object, and the corrected posture information.
In one embodiment,
the determination unit 33 can also be used to determine changed posture information according to the corrected posture information and the pre-correction posture information when the terminal acquired the real-time image;

the display unit 34 can also be used to display, within a preset period, the virtual object corresponding to the target object in the real-time image according to the changed posture information, the location information and pre-correction posture information when the real-time image is acquired, and the location information of the target object.
With the device provided by embodiment 3, images containing candidate target objects are obtained according to the location information of the terminal when the current image is acquired; image matching is performed between the current image and the images containing candidate target objects to determine the actual position at which at least one target object is projected in the current image; corrected posture information is determined according to the theoretical position and the actual position at which the at least one target object is projected in the current image; and finally the virtual objects corresponding to the target objects in the current image are displayed according to the corrected posture information. Compared with the prior art, which displays according to LBS and IMU only, this scheme adds attitude-correction information on the basis of LBS and IMU and corrects the errors of LBS and IMU as far as possible, thereby solving, to a certain extent, the problem that the prior art is prone to large position errors when superimposing virtual objects on the target object positions of the current image, and achieving the effect of reducing position errors.
The systems, devices, modules or units described in the above embodiments may specifically be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.

For convenience of description, the device above is described in terms of various units divided by function. Of course, when implementing the present application, the functions of the units may be realized in one or more pieces of software and/or hardware.
Those skilled in the art will understand that the embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.

The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are executed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface and memory.

The memory may include non-permanent storage in a computer-readable medium, random-access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.

Computer-readable media include permanent and non-permanent, removable and non-removable media, and can realize information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.
Those skilled in the art will understand that the embodiments of the present application may be provided as a method, a system or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.

The present application may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures and the like that perform specific tasks or implement specific abstract data types. The present application may also be practised in distributed computing environments, in which tasks are executed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in local and remote computer storage media including storage devices.
The embodiments in this specification are described in a progressive manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiment, being substantially similar to the method embodiment, is described relatively simply; for relevant details, refer to the description of the method embodiment.
The above is only an embodiment of the present application and is not intended to limit the present application. For those skilled in the art, various modifications and changes are possible in the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (15)

1. A method for displaying virtual objects, characterized by comprising:

obtaining images containing candidate target objects according to location information of a terminal when a current image is acquired;

performing image matching between the current image and the images containing candidate target objects, and determining an actual position at which at least one target object is projected in the current image;

determining corrected posture information according to a theoretical position and the actual position at which the at least one target object is projected in the current image;

displaying, according to the location information of the terminal when the current image is acquired, location information of a target object and the corrected posture information, a virtual object corresponding to the target object in the current image.
2. The method according to claim 1, wherein the theoretical position is determined according to the location information of the terminal at the time the current image is captured, pre-correction pose information, and location information of the at least one target object.
3. The method according to claim 1, wherein obtaining an image containing candidate target objects according to the location information of the terminal at the time the current image is captured specifically comprises:
obtaining the image containing candidate target objects according to the location information of the terminal at the time the current image is captured and pre-correction pose information.
4. The method according to claim 1, wherein determining the actual position of the at least one target object in the current image specifically comprises:
determining actual positions at which at least two target objects are respectively projected in the current image;
and determining corrected pose information according to the theoretical position and the actual position at which the at least one target object is projected in the current image specifically comprises:
determining the corrected pose information according to the theoretical positions and the actual positions at which the at least two target objects are projected in the current image.
5. The method according to claim 1, wherein the method is applied to a terminal equipped with an image capture device, and the method specifically comprises:
obtaining an image containing candidate target objects according to location information of the terminal at the time a real-time image is captured;
performing image matching between the real-time image and the image containing candidate target objects, and determining an actual position at which at least one target object is projected in the real-time image;
determining corrected pose information according to the theoretical position and the actual position at which the at least one target object is projected in the real-time image; and
displaying a virtual object corresponding to the target object in the real-time image according to the location information of the terminal at the time the real-time image was captured, location information of the target object, and the corrected pose information.
6. The method according to claim 5, further comprising:
determining pose variation information according to the corrected pose information and pre-correction pose information at the time the terminal captured the real-time image; and
within a preset period, displaying the virtual object corresponding to the target object in the real-time image according to the pose variation information, the location information of the terminal and the pre-correction pose information at the time the real-time image is captured, and the location information of the target object.
7. An apparatus for displaying virtual objects, comprising an acquiring unit, a matching unit, a determining unit, and a display unit, wherein:
the acquiring unit is configured to obtain an image containing candidate target objects according to location information of a terminal at the time a current image is captured;
the matching unit is configured to perform image matching between the current image and the image containing candidate target objects, and determine an actual position at which at least one target object is projected in the current image;
the determining unit is configured to determine corrected pose information according to the theoretical position and the actual position at which the at least one target object is projected in the current image; and
the display unit is configured to display a virtual object corresponding to the target object in the current image according to the location information of the terminal at the time the current image was captured, location information of the target object, and the corrected pose information.
8. The apparatus according to claim 7, wherein the determining unit is further configured to:
determine the theoretical position according to the location information of the terminal at the time the current image is captured, pre-correction pose information, and the location information of the at least one target object.
9. The apparatus according to claim 7, wherein the acquiring unit is specifically configured to:
obtain the image containing candidate target objects according to the location information of the terminal at the time the current image is captured and pre-correction pose information.
10. The apparatus according to claim 7, wherein the matching unit is specifically configured to:
determine actual positions at which at least two target objects are respectively projected in the current image;
and the determining unit is specifically configured to:
determine the corrected pose information according to the theoretical positions and the actual positions at which the at least two target objects are projected in the current image.
11. The apparatus according to claim 7, wherein the apparatus is applied to a terminal equipped with an image capture device, and:
the acquiring unit is specifically configured to obtain an image containing candidate target objects according to location information of the terminal at the time a real-time image is captured;
the matching unit is specifically configured to perform image matching between the real-time image and the image containing candidate target objects, and determine an actual position at which at least one target object is projected in the real-time image;
the determining unit is specifically configured to determine corrected pose information according to the theoretical position and the actual position at which the at least one target object is projected in the real-time image; and
the display unit is specifically configured to display a virtual object corresponding to the target object in the real-time image according to the location information of the terminal at the time the real-time image was captured, location information of the target object, and the corrected pose information.
12. The apparatus according to claim 11, wherein:
the determining unit is further configured to determine pose variation information according to the corrected pose information and pre-correction pose information at the time the terminal captured the real-time image; and
the display unit is further configured to, within a preset period, display the virtual object corresponding to the target object in the real-time image according to the pose variation information, the location information of the terminal and the pre-correction pose information at the time the real-time image is captured, and the location information of the target object.
13. A method for displaying virtual objects, comprising:
obtaining an image containing candidate target objects according to location information of a terminal at the time a current image is captured;
performing image matching between the current image and the image containing candidate target objects, and determining an actual position at which at least one target object is projected in the current image; and
displaying a virtual object corresponding to the at least one target object in the current image according to the actual position.
14. A method for displaying virtual objects, comprising:
obtaining an image containing candidate target objects according to location information of a terminal at the time a photographed image is taken;
performing image matching between the photographed image and the image containing candidate target objects, and determining an actual position at which at least one target object is projected in the photographed image; and
displaying a virtual object corresponding to the at least one target object in the photographed image according to the actual position.
15. A method for displaying virtual objects, comprising:
obtaining an image containing candidate target objects according to location information of a terminal at the time a current image is captured;
performing image matching between the current image and the image containing candidate target objects, and determining a first position at which at least one target object is projected in the current image;
determining corrected pose information according to the first position and a second position at which the at least one target object is projected in the current image; and
displaying a virtual object corresponding to the target object in the current image according to the location information of the terminal at the time the current image was captured, location information of the target object, and the corrected pose information.
CN201710377499.1A 2017-05-25 2017-05-25 A kind of methods of exhibiting and device of virtual objects Pending CN108958462A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710377499.1A CN108958462A (en) 2017-05-25 2017-05-25 A kind of methods of exhibiting and device of virtual objects
PCT/CN2018/086783 WO2018214778A1 (en) 2017-05-25 2018-05-15 Method and device for presenting virtual object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710377499.1A CN108958462A (en) 2017-05-25 2017-05-25 A kind of methods of exhibiting and device of virtual objects

Publications (1)

Publication Number Publication Date
CN108958462A true CN108958462A (en) 2018-12-07

Family

ID=64396203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710377499.1A Pending CN108958462A (en) 2017-05-25 2017-05-25 A kind of methods of exhibiting and device of virtual objects

Country Status (2)

Country Link
CN (1) CN108958462A (en)
WO (1) WO2018214778A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109801341A (en) * 2019-01-30 2019-05-24 北京经纬恒润科技有限公司 A kind of position method of calibration and device for demarcating target
CN110688002A (en) * 2019-09-06 2020-01-14 广东虚拟现实科技有限公司 Virtual content adjusting method and device, terminal equipment and storage medium
CN111311758A (en) * 2020-02-24 2020-06-19 Oppo广东移动通信有限公司 Augmented reality processing method and device, storage medium and electronic equipment
CN111323007A (en) * 2020-02-12 2020-06-23 北京市商汤科技开发有限公司 Positioning method and device, electronic equipment and storage medium
CN111651051A (en) * 2020-06-10 2020-09-11 浙江商汤科技开发有限公司 Virtual sand table display method and device
WO2021072702A1 (en) * 2019-10-17 2021-04-22 深圳盈天下视觉科技有限公司 Augmented reality scene implementation method, apparatus, device, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113900516A (en) * 2021-09-27 2022-01-07 阿里巴巴达摩院(杭州)科技有限公司 Data processing method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376118A (en) * 2014-12-03 2015-02-25 北京理工大学 Panorama-based outdoor movement augmented reality method for accurately marking POI
CN105103089A (en) * 2013-06-28 2015-11-25 谷歌公司 Systems and methods for generating accurate sensor corrections based on video input

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109801341A (en) * 2019-01-30 2019-05-24 北京经纬恒润科技有限公司 A kind of position method of calibration and device for demarcating target
CN110688002A (en) * 2019-09-06 2020-01-14 广东虚拟现实科技有限公司 Virtual content adjusting method and device, terminal equipment and storage medium
CN110688002B (en) * 2019-09-06 2023-12-19 广东虚拟现实科技有限公司 Virtual content adjusting method, device, terminal equipment and storage medium
WO2021072702A1 (en) * 2019-10-17 2021-04-22 深圳盈天下视觉科技有限公司 Augmented reality scene implementation method, apparatus, device, and storage medium
CN111323007A (en) * 2020-02-12 2020-06-23 北京市商汤科技开发有限公司 Positioning method and device, electronic equipment and storage medium
CN111323007B (en) * 2020-02-12 2022-04-15 北京市商汤科技开发有限公司 Positioning method and device, electronic equipment and storage medium
CN111311758A (en) * 2020-02-24 2020-06-19 Oppo广东移动通信有限公司 Augmented reality processing method and device, storage medium and electronic equipment
CN111651051A (en) * 2020-06-10 2020-09-11 浙江商汤科技开发有限公司 Virtual sand table display method and device
CN111651051B (en) * 2020-06-10 2023-08-22 浙江商汤科技开发有限公司 Virtual sand table display method and device

Also Published As

Publication number Publication date
WO2018214778A1 (en) 2018-11-29

Similar Documents

Publication Publication Date Title
CN108958462A (en) A kind of methods of exhibiting and device of virtual objects
US20170337745A1 (en) Fine-grain placement and viewing of virtual objects in wide-area augmented reality environments
CN108810473B (en) Method and system for realizing GPS mapping camera picture coordinate on mobile platform
CN112288853B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, and storage medium
WO2010052558A2 (en) System and method for the precise integration of virtual objects to interactive panoramic walk-through applications
Gomez-Jauregui et al. Quantitative evaluation of overlaying discrepancies in mobile augmented reality applications for AEC/FM
TWI694298B (en) Information display method, device and terminal
TWI681364B (en) Method, device and equipment for generating visual objects
CN108344401A (en) Localization method, device and computer readable storage medium
CN109040525B (en) Image processing method, image processing device, computer readable medium and electronic equipment
CN108933902A (en) Panoramic picture acquisition device builds drawing method and mobile robot
CN110263615A (en) Interaction processing method, device, equipment and client in vehicle shooting
CN111595332B (en) Full-environment positioning method integrating inertial technology and visual modeling
CN115439528B (en) Method and equipment for acquiring image position information of target object
CN114332648B (en) Position identification method and electronic equipment
JP2012088073A (en) Azimuth estimation apparatus and program
JP6064269B2 (en) Information processing apparatus, information processing method, and program
CN116858215B (en) AR navigation map generation method and device
CN111127661B (en) Data processing method and device and electronic equipment
CN104978476B (en) Indoor map scene, which is carried out, using smart phone mends the method surveyed
CN103632627A (en) Information display method and apparatus and mobile navigation electronic equipment
CN208638479U (en) Panoramic picture acquisition device and mobile robot
JP5817012B2 (en) Information processing apparatus, information processing method, and program
CN115187709A (en) Geographic model processing method and device, electronic equipment and readable storage medium
CN108717724A (en) A kind of measurement method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181207