CN108108748A - Information processing method and electronic device - Google Patents
- Publication number
- CN108108748A (application CN201711299204.XA)
- Authority
- CN
- China
- Prior art keywords
- feature point
- target object
- image
- information
- acquisition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses an information processing method and an electronic device. The method includes: constructing a feature database, the feature database including feature point information of multiple frame images of a target object; acquiring an image of the target object, and extracting feature point information from the acquired image; matching the extracted feature point information against the feature point information of the frame images in the feature database, to perform object recognition on the target object; when object recognition succeeds, obtaining the spatial position information of the extracted feature points; and determining, based on the obtained spatial position information of the feature points, the acquisition pose of the image of the target object.
Description
Technical field
The present invention relates to the technical field of information processing, and more particularly to an information processing method and an electronic device.
Background technology
A fundamental problem in augmented reality (AR) is how to superimpose virtual information onto real-world objects. Related-art schemes first perform object recognition and then compute the relative pose between the image capture device and the object, i.e. the acquisition pose corresponding to the image of the object, so that virtual information can be seamlessly attached at the position the user expects. At present, schemes for obtaining the acquisition pose of an object image include the following:
1. A two-dimensional marker is attached to the object in advance, and the relative pose between the image capture device and the object is obtained by recognizing the marker; however, the applicable scenarios are too limited and the scheme lacks generality.
2. The user provides a 3D model of the object, structural information of the object (corners, points, straight lines, etc.) is extracted, and object recognition is achieved by feature matching; however, this scheme requires the user to provide a 3D model of the object, which greatly constrains its application scenarios, so its practicality is limited.
3. A sparse 3D model of the object is reconstructed from pictures taken at multiple different angles, and the object is then recognized and the camera pose estimated frame by frame by matching; however, recognition in this scheme is slow and difficult to run in real time, making it hard to apply in practice.
Summary of the invention
The embodiments of the present invention provide an information processing method, a device, and a storage medium, which can quickly perform object recognition on a target object and determine the acquisition pose corresponding to the image of the target object.
The technical solution of the embodiments of the present invention is implemented as follows.
An embodiment of the present invention provides an information processing method, the method including:
constructing a feature database, the feature database including feature point information of multiple frame images of a target object;
acquiring an image of the target object, and extracting feature point information from the acquired image;
matching the extracted feature point information against the feature point information of the frame images in the feature database, to perform object recognition on the target object;
when object recognition succeeds, obtaining the spatial position information of the extracted feature points;
determining, based on the obtained spatial position information of the feature points, the acquisition pose of the image of the target object.
In the above scheme, constructing the feature database includes:
acquiring multiple frame images of the target object under different viewpoints;
extracting feature point information from each of the frame images under the different viewpoints;
matching the feature point information across the extracted frame images under the different viewpoints, to obtain match information;
performing, based on the obtained match information, three-dimensional reconstruction of the target object, to obtain a three-dimensional reconstruction result;
building the feature database based on the three-dimensional reconstruction result.
In the above scheme, building the feature database based on the three-dimensional reconstruction result includes:
obtaining, based on the three-dimensional reconstruction result, the spatial position information of the reconstructed feature points;
choosing N frame images under different viewpoints as reference frame images, N being a positive integer greater than 1;
building a feature database that includes the reference frame images, the feature points in the reference frame images, and the spatial position information of those feature points.
In the above scheme, matching the extracted feature point information against the feature point information of the frame images in the feature database to perform object recognition on the target object includes:
matching the feature points of each frame image in the feature database against the extracted feature points;
when the number of successfully matched feature points is determined to exceed a preset threshold, object recognition of the target object is deemed successful.
In the above scheme, the method further includes:
when object recognition succeeds, recognizing the target object in the continuously acquired frame images of the target object, thereby realizing target tracking of the target object.
In the above scheme, the method further includes:
superimposing, according to the acquisition pose of the image corresponding to the target object, a virtual object at a preset position of the target object while the target object is displayed.
In the above scheme, the method further includes:
in response to object recognition of the target object succeeding while some of the extracted feature points failed to match,
obtaining the frame image in the feature database whose corresponding acquisition pose has the highest similarity to the acquisition pose of the image of the target object;
performing projection-feature-based matching between the image of the target object and the obtained image, to obtain the spatial position information of the feature points that failed to match.
An embodiment of the present invention further provides an electronic device, the electronic device including:
a memory, for storing an executable program; and
a processor, configured to implement, when executing the executable program stored in the memory:
constructing a feature database, the feature database including feature point information of multiple frame images of a target object;
acquiring an image of the target object, and extracting feature point information from the acquired image;
matching the extracted feature point information against the feature point information of the frame images in the feature database, to perform object recognition on the target object;
when object recognition succeeds, obtaining the spatial position information of the extracted feature points;
determining, based on the obtained spatial position information of the feature points, the acquisition pose of the image of the target object.
In the above scheme, the processor is further configured to match the feature points of each frame image in the feature database against the extracted feature points; and
when the number of successfully matched feature points is determined to exceed a preset threshold, object recognition of the target object is deemed successful.
In the above scheme, the processor is further configured to, in response to object recognition of the target object succeeding while some of the extracted feature points failed to match,
obtain the frame image in the feature database whose corresponding acquisition pose has the highest similarity to the acquisition pose of the image of the target object; and
perform projection-feature-based matching between the image of the target object and the obtained image, to obtain the spatial position information of the feature points that failed to match.
With the information processing method, device, and storage medium provided by the embodiments of the present invention, a feature database containing the feature point information of multiple frame images of a target object is built, so that after an image of the target object is acquired, feature point matching can be performed against the feature point information in the feature database, thereby quickly realizing object recognition of the target object and determining the acquisition pose of the image of the target object.
Description of the drawings
Fig. 1 is a first schematic flowchart of an information processing method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of building a feature database, provided by an embodiment of the present invention;
Fig. 3 is a second schematic flowchart of an information processing method provided by an embodiment of the present invention;
Fig. 4 is a third schematic flowchart of an information processing method provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of the composition structure of an electronic device provided by an embodiment of the present invention.
Specific embodiment
The present invention is described in further detail below with reference to the drawings and specific embodiments. In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the drawings; it should be understood that the embodiments described herein are only used to explain the present invention, not to limit it. In addition, the embodiments provided below are some, rather than all, of the embodiments for implementing the present invention; embodiments obtained by recombining the technical solutions of the following embodiments, and other embodiments implemented based on the invention, without creative work by those skilled in the art, all belong to the protection scope of the present invention.
It should be noted that, in the embodiments of the present invention, the terms "comprising", "including", or any other variants thereof are intended to cover non-exclusive inclusion, so that a method or device including a series of elements includes not only the explicitly recited elements but also other elements not explicitly listed, or elements inherent to the implementation of the method or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the presence of other relevant elements (such as steps in a method) in the method or device that includes that element.
It should be noted that the terms "first", "second", and "third" involved in the embodiments of the present invention merely distinguish similar objects and do not represent a particular ordering of the objects; it can be understood that, where permitted, "first", "second", and "third" may interchange their specific order or precedence, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein.
Before the present invention is described in further detail, the nouns and terms involved in the embodiments of the present invention are explained; the following explanations apply to these nouns and terms.
1) Object recognition (Object Recognition): recognizing (finding) a given object in an image or a group of video sequences.
2) Target tracking: in a video or a sequence of frame images, recognizing the target object and tracking the movement trajectory of the target object.
3) Feature point: a point where the image gray value changes sharply, or a point of large curvature on an image edge (i.e. the intersection of two edges); feature points reflect the essential characteristics of an image and can be used to identify the target object in the image.
4) Descriptor (Descriptors), i.e. feature descriptor (Feature Descriptors): used to describe the attributes of a feature point.
5) Acquisition pose: for an image, the position and orientation, including rotation and translation, of the image capture device (e.g. a camera) that acquired the image, relative to the photographed target object, i.e. the six degrees of freedom (6DoF, Six Degrees of Freedom) of the image capture device with respect to the target object.
Embodiment one
As an optional embodiment implementing the information processing method of the embodiments of the present invention, refer to Fig. 1, which is an optional schematic flowchart of the information processing method provided by an embodiment of the present invention; the information processing method in this embodiment involves steps 101 to 105, which are described below.
Step 101: Construct a feature database; the feature database includes feature point information of multiple frame images of the target object.
In one embodiment, the feature database can be constructed in the following way:
acquire multiple frame images of the target object under different viewpoints; extract feature point information from each of the frame images under the different viewpoints; match the feature point information across the extracted frame images under the different viewpoints to obtain match information; perform, based on the obtained match information, three-dimensional reconstruction of the target object to obtain a three-dimensional reconstruction result; and build the feature database based on the three-dimensional reconstruction result.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of building the feature database provided by an embodiment of the present invention. In actual implementation, the multiple frame images of the target object under different viewpoints can be acquired as follows: a camera is moved around the target object (3D object) to record (scan) a video, obtaining the scan video of the target object; each frame image forming the scan video is then obtained, yielding the multiple frame images of the target object under different viewpoints. The entire acquisition process can be carried out offline, which is convenient to implement.
In one embodiment, referring to Fig. 2, extracting feature point information from a frame image includes: extracting ORB (Oriented FAST and Rotated BRIEF) feature points (key points) and their descriptors from the frame image. In practical applications, the FAST (Features from Accelerated Segment Test) algorithm can be used for ORB feature point extraction; the core idea of FAST is to find points that stand out from their surroundings, i.e. a point is compared with the points around it, and if it differs from most of them it can be considered a feature point. In practical applications, the BRIEF (Binary Robust Independent Elementary Features) algorithm can be used to compute the descriptor of a feature point; the core idea of BRIEF is to choose N point pairs around a key point P according to a certain pattern, and to combine the comparison results of these N point pairs as the descriptor.
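The FAST and BRIEF ideas described above can be sketched as follows. This is an illustrative simplification, not the patented implementation: the tiny image, the 8-point circle, the thresholds, and the sampling pairs are all invented for the example (real FAST tests a contiguous arc on a 16-pixel circle, and real ORB adds orientation and image pyramids).

```python
# Simplified FAST-style corner test and BRIEF-style binary descriptor
# on a tiny grayscale image stored as a list of lists.

CIRCLE = [(-3, 0), (-2, 2), (0, 3), (2, 2), (3, 0), (2, -2), (0, -3), (-2, -2)]

def is_fast_corner(img, y, x, thresh=20, min_frac=0.5):
    """A point stands out if enough circle pixels differ strongly from it.
    (The real FAST test requires a contiguous arc of 9 of 16 pixels.)"""
    center = img[y][x]
    differing = sum(1 for dy, dx in CIRCLE
                    if abs(img[y + dy][x + dx] - center) > thresh)
    return differing >= min_frac * len(CIRCLE)

def brief_descriptor(img, y, x, pairs):
    """Concatenate binary intensity comparisons of point pairs around (y, x)."""
    bits = 0
    for (dy1, dx1), (dy2, dx2) in pairs:
        bits = (bits << 1) | (img[y + dy1][x + dx1] < img[y + dy2][x + dx2])
    return bits

# 9x9 image with a bright square whose corner sits at (4, 4)
img = [[200 if r >= 4 and c >= 4 else 10 for c in range(9)] for r in range(9)]
pairs = [((-1, -1), (1, 1)), ((-1, 1), (1, -1)),
         ((0, -2), (0, 2)), ((-2, 0), (2, 0))]

print(is_fast_corner(img, 4, 4))            # corner of the bright square -> True
print(is_fast_corner(img, 3, 3))            # flat background point -> False
print(brief_descriptor(img, 4, 4, pairs))   # 4-bit binary descriptor -> 11
```

The descriptor is just an integer whose bits record the chosen intensity comparisons, which is why descriptors can later be compared with the Hamming distance.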
Since the acquired multiple frame images of the target object under different viewpoints are composed of the frame images forming the scan video of the target object, they can be understood as a frame image sequence corresponding to the target object. In one embodiment, referring to Fig. 2, matching the feature point information across the extracted frame images under the different viewpoints to obtain match information can include: for each frame image in the frame image sequence, performing feature point matching between it and the preceding N (N is a positive integer, e.g. 5) adjacent frame images, obtaining match information, i.e. the point pair correspondences of the successfully matched feature points.
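A minimal sketch of how binary descriptors from two frames can be matched into point-pair correspondences; the descriptors and distance threshold here are invented for the example, and a real matcher would operate on 256-bit ORB descriptors.

```python
# Brute-force matching of BRIEF-like binary descriptors between two
# frames by Hamming distance, keeping only mutual nearest neighbours.

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def match_descriptors(desc_a, desc_b, max_dist=10):
    """Return (index_in_a, index_in_b) pairs for mutual nearest neighbours."""
    matches = []
    for i, da in enumerate(desc_a):
        j = min(range(len(desc_b)), key=lambda k: hamming(da, desc_b[k]))
        # mutual check: i must also be the best match for descriptor j
        i_back = min(range(len(desc_a)), key=lambda k: hamming(desc_a[k], desc_b[j]))
        if i_back == i and hamming(da, desc_b[j]) <= max_dist:
            matches.append((i, j))
    return matches

frame_a = [0b10110010, 0b01011100, 0b11110000]
frame_b = [0b01011101, 0b10110010, 0b00001111]
print(match_descriptors(frame_a, frame_b))  # → [(0, 1), (1, 0)]
```

The mutual-nearest-neighbour check discards ambiguous correspondences, which is one common way to keep only reliable point pairs before reconstruction.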
In one embodiment, performing three-dimensional reconstruction of the target object based on the obtained match information to obtain a three-dimensional reconstruction result can include: based on the match information obtained above, performing sparse 3D model reconstruction of the target object using an SfM (Structure from Motion) algorithm, e.g. recovering the camera parameters and three-dimensional information numerically from the matched feature point sets obtained above, and obtaining the sparse 3D model and the 3D coordinates (spatial position information) of the reconstructed feature points. In this way, the user does not need to provide a 3D model; the user only needs to record a video around the target object with a camera, and the sparse 3D model can be obtained from the resulting video. It should be noted that, in this embodiment, the pose (6DoF) of the image capture device is relative to the target object.
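The step in SfM that yields the 3D coordinates of matched feature points can be illustrated with linear (DLT) triangulation from two views. This is a hedged sketch under synthetic assumptions: the camera matrices, intrinsics, and the 3D point are invented, whereas a real SfM pipeline would first estimate the cameras themselves from the matches.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation: solve A X = 0 for the homogeneous 3D point X."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

K = np.diag([500.0, 500.0, 1.0])                               # toy intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted baseline

X_true = np.array([0.3, -0.2, 4.0])
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_rec, X_true))  # → True
```

Repeating this for every matched feature point across the scan video produces exactly the sparse cloud of 3D feature coordinates stored in the feature database.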
In one embodiment, building the feature database based on the three-dimensional reconstruction result can include:
obtaining the spatial position information of the reconstructed feature points based on the three-dimensional reconstruction result; choosing N frame images under different viewpoints as reference frames; N is a positive integer greater than 1; and building a feature database that includes the reference frame images, the feature points in the reference frame images, and the spatial position information of those feature points. Each reference frame image corresponds to one acquisition pose of the image capture device (camera). For example, 20 frame images under the different viewpoints used to form the sparse 3D model can be chosen as reference frame images, corresponding respectively to 20 different acquisition poses of the camera.
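One possible in-memory layout for such a feature database is sketched below; the field names and the 6DoF tuple layout are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ReferenceFrame:
    keypoints: list      # (x, y) pixel coordinates of feature points
    descriptors: list    # one binary descriptor (int) per keypoint
    points_3d: list      # (X, Y, Z) coordinates from the sparse reconstruction
    pose: tuple          # 6DoF acquisition pose, e.g. (rx, ry, rz, tx, ty, tz)

@dataclass
class FeatureDatabase:
    frames: list = field(default_factory=list)

    def add_frame(self, frame: ReferenceFrame):
        self.frames.append(frame)

db = FeatureDatabase()
for i in range(20):                    # e.g. 20 viewpoints, as in the text
    db.add_frame(ReferenceFrame([], [], [], (0, 0, 0, 0, 0, float(i))))
print(len(db.frames))  # → 20
```

Storing the pose alongside each reference frame is what later allows the most similar reference frame to be looked up when some feature points fail to match.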
Step 102: Acquire an image of the target object, and extract feature point information from the acquired image.
Here, in one embodiment, a camera is used to acquire the image of the target object, and the ORB feature points and their descriptors are then extracted from the current frame image.
Step 103: Match the extracted feature point information against the feature point information of the frame images in the feature database, to perform object recognition on the target object.
In one embodiment, the matching of the extracted feature point information against the feature point information of the frame images in the feature database can be realized as follows:
the feature points of each frame image in the feature database are matched against the extracted feature points one by one; when the number of successfully matched feature points is determined to exceed a preset threshold, object recognition of the target object is deemed successful. For example, the extracted current frame image of the target object is matched by feature points against each reference frame image in the feature database one by one; when the number of successfully matched feature points exceeds 15, object recognition of the target object is deemed successful; otherwise, object recognition is deemed to have failed and the processing flow ends.
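The recognition decision in step 103 can be sketched as follows; the toy matcher and descriptor lists are invented for the example, standing in for real ORB descriptor matching.

```python
# Recognition succeeds once some reference frame yields more matches
# than the preset threshold (15 in the example above).

def recognize(current_desc, reference_frames, match_count, threshold=15):
    """Return the index of the first reference frame exceeding the threshold, else None."""
    for idx, ref_desc in enumerate(reference_frames):
        if match_count(current_desc, ref_desc) > threshold:
            return idx
    return None

def match_count(a, b):
    # toy matcher: count equal descriptors at the same index
    return sum(1 for x, y in zip(a, b) if x == y)

current = list(range(30))
refs = [list(range(5)) + [99] * 25,    # only 5 matches -> recognition fails here
        list(range(20)) + [99] * 10]   # 20 matches -> recognition succeeds
print(recognize(current, refs, match_count))  # → 1
```

Returning the index of the matched reference frame also identifies which stored acquisition pose the current view is closest to.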
Step 104: When object recognition succeeds, obtain the spatial position information of the extracted feature points.
Here, in actual implementation, since the spatial position information corresponding to each feature point is stored in the feature database, when object recognition succeeds, the spatial position information of the successfully matched feature points can be obtained from the constructed feature database.
In one embodiment, when feature point matching is performed in step 103, there may be cases where object recognition succeeds but some of the extracted feature points fail to match. For example, 18 feature points are extracted from the current frame image of the target object, and when the current frame image is matched by feature points against the reference frame images one by one, 16 feature points match successfully, so object recognition of the target object is deemed successful; however, 2 feature points fail to match. The spatial position information of the feature points that failed to match can be obtained as follows: obtain the reference frame image in the feature database whose corresponding acquisition pose has the highest similarity to the acquisition pose of the image of the target object; perform projection-feature-based matching between the image of the target object and the obtained reference frame image with the most similar acquisition pose, to obtain the spatial position information of the feature points that failed to match.
In one embodiment, after object recognition of the target object succeeds, target tracking of the target object can also be realized as follows: the target object is recognized in the continuously acquired frame images of the target object. For example, in the acquired sequence of consecutive frame images of the target object, if the target object has been recognized in the previous frame image, the current frame image is matched by feature points against the previous frame image; when the number of successfully matched feature points exceeds a preset threshold (e.g. 15), object recognition of the target object in the current frame is deemed successful, and so on, realizing target tracking of the target object over the consecutive frame image sequence.
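The frame-to-frame tracking loop just described can be sketched as follows; the per-frame descriptor lists and the toy matcher are invented for the example.

```python
# Tracking continues while each new frame matches the previous frame
# with more than the threshold number of feature points.

def track(frames, match_count, threshold=15):
    """Return indices of frames in which the object stayed recognized."""
    tracked = [0]                      # assume recognition succeeded in frame 0
    for i in range(1, len(frames)):
        if match_count(frames[i - 1], frames[i]) > threshold:
            tracked.append(i)
        else:
            break                      # tracking lost
    return tracked

def match_count(a, b):
    # toy matcher: count equal descriptors at the same index
    return sum(1 for x, y in zip(a, b) if x == y)

f0 = list(range(30))
f1 = list(range(25)) + [99] * 5        # still shares many features with f0
f2 = list(range(18)) + [99] * 12       # still above threshold relative to f1
f3 = [99] * 30                         # object gone: below threshold
print(track([f0, f1, f2, f3], match_count))  # → [0, 1, 2]
```

Matching against the previous frame rather than the database is cheaper, which is what makes per-frame tracking feasible after the initial recognition.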
Step 105: Determine, based on the obtained spatial position information of the feature points, the acquisition pose of the image of the target object.
In one embodiment, the acquisition pose (camera pose) of the image of the target object can be determined from the obtained spatial position information of the feature points using a Perspective-n-Point (PnP) algorithm; similarly, the acquisition pose (camera pose) corresponding to each frame image in the consecutive frame image sequence of the target object can be obtained, and the obtained acquisition poses can be smoothed by filtering and then output; meanwhile, the frame images of the target object, together with their corresponding feature point information and acquisition poses, can be added to the feature database.
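The pose-from-3D-2D-correspondences idea behind PnP can be illustrated with a linear DLT solve for the full 3x4 projection matrix. This is a hedged sketch under synthetic assumptions (the intrinsics, pose, and points are invented); a production PnP solver such as EPnP would additionally enforce that the rotation part is orthonormal.

```python
import numpy as np

def dlt_projection(points_3d, points_2d):
    """Solve A p = 0 for the 12 entries of the projection matrix P via SVD."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 4)        # recovered up to scale

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# synthetic camera: intrinsics K, identity rotation, small translation
K = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.1], [-0.2], [0.5]])])
pts_3d = [(0, 0, 4), (1, 0, 5), (0, 1, 6), (1, 1, 4),
          (-1, 0, 5), (0, -1, 6), (2, 1, 7)]        # non-coplanar points
pts_2d = [project(P_true, np.array(p)) for p in pts_3d]

P_est = dlt_projection(pts_3d, pts_2d)
ok = all(np.allclose(project(P_est, np.array(p)), x)
         for p, x in zip(pts_3d, pts_2d))
print(ok)  # → True
```

The 3D inputs here correspond to the spatial position information fetched from the feature database in step 104, and the 2D inputs to the matched feature points in the current frame.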
In one embodiment, after the acquisition pose of the image of the target object is determined, a virtual object can also be superimposed at a preset position of the target object while the target object is displayed, according to the determined acquisition pose of the image, thereby realizing the superposition of virtual objects onto real-world objects.
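The overlay step can be sketched as projecting the virtual object's anchor point into the image with the recovered pose; the intrinsics, pose, and anchor position below are invented for the example.

```python
import numpy as np

def to_pixel(K, R, t, anchor_3d):
    """Project a 3D anchor on the object into the current image."""
    cam = R @ anchor_3d + t            # object coordinates -> camera coordinates
    uvw = K @ cam
    return uvw[:2] / uvw[2]            # pixel where the virtual object is drawn

K = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])   # acquisition pose from step 105
anchor = np.array([0.0, 0.1, 0.0])            # preset position on the object
print(to_pixel(K, R, t, anchor))              # → [320. 265.]
```

Because the pose is re-estimated every frame, the drawn pixel moves with the object, which is what keeps the virtual object visually attached to the preset position.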
Embodiment two
As another optional embodiment implementing the information processing method of the embodiments of the present invention, refer to Fig. 3 and Fig. 4, which are optional schematic flowcharts of the information processing method provided by an embodiment of the present invention, applied to an electronic device; the information processing method in this embodiment involves steps 201 to 208, which are described below.
Step 201: The electronic device loads the feature database.
Here, in actual implementation, a feature database needs to be constructed before this step; the feature database includes the feature point information of multiple frame images of the target object. In one embodiment, the feature database can be constructed as follows: a video is recorded by moving a camera around the target object, obtaining a video containing image information of the target object under different viewpoints, i.e. a sequence of consecutive frame images of the target object; the ORB feature points and their descriptors in each frame image are extracted, and each frame image is then matched by feature points against the preceding 5 adjacent frame images, obtaining match information (i.e. the successfully matched feature point pair information); sparse 3D model reconstruction of the target object is performed using an SfM algorithm, obtaining the sparse 3D model of the target object and the spatial position information (3D coordinates) of the reconstructed feature points; based on the sparse 3D model of the target object, 20 frame images under different viewpoints are randomly chosen as reference frame images; and a feature database is built that includes the reference frame images, the feature points in the reference frame images, and the spatial position information of those feature points. Since the reference frame images correspond to frame images under different viewpoints, the 20 reference frame images correspond respectively to 20 different acquisition poses.
Step 202: Perform sequential image acquisition on the target object, and extract the ORB feature points and their descriptors from each frame image of the target object.
In actual implementation, sequential image acquisition is performed on the target object, obtaining a sequence of consecutive frame images of the target object, and ORB feature point detection is performed on the frame images of the target object; ORB feature point detection and extraction can be carried out with the FAST algorithm. Specifically, based on the image gray values around a candidate feature point, the pixel values on a circle around the candidate point are examined; if there are enough pixels around the candidate point (e.g. three quarters of the points on the surrounding circle) whose gray values differ sufficiently from that of the candidate point (e.g. by more than a given gray threshold), the candidate point is considered a feature point. Meanwhile, in actual implementation, the BRIEF algorithm can be used to compute the descriptor of a feature point.
Step 203: Match the frame image of the target object against the reference frame images in the feature database one by one, and judge whether the matching succeeds; if the matching succeeds, perform Step 204; if the matching fails, perform Step 208.
Here, in actual implementation, matching the frame image of the target object against the reference frame images in the feature database one by one includes:
performing feature point matching between the frame image of the target object and a reference frame image; if the number of successfully matched feature points exceeds a preset threshold (for example, 15), the image matching is regarded as successful; otherwise, the image matching is regarded as failed. In practical applications, a successful image match amounts to a successful object recognition of the target object. If the image matching between the frame image of the target object and the currently chosen reference frame image fails, another reference frame image in the feature database is chosen (for example, at random) for image matching; if the matching between the current frame image of the target object and all reference frame images in the feature database fails, the processing flow is terminated.
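The matching criterion of Step 203 (matching succeeds when the number of matched feature points exceeds a preset threshold such as 15) can be sketched as below. The nearest-neighbour Hamming matcher over unpacked binary BRIEF-style descriptors and the distance cutoff of 64 bits are illustrative assumptions, not values taken from the embodiment:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, max_dist=64):
    """Nearest-neighbour matching of binary descriptors (rows of 0/1 bits)
    by Hamming distance; a pair is kept only if the distance is acceptable."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.count_nonzero(desc_b != d, axis=1)  # Hamming distance per row
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j))
    return matches

def images_match(desc_a, desc_b, min_matches=15):
    """Step 203's criterion: the image match succeeds when the number of
    successfully matched feature points exceeds the preset threshold."""
    return len(match_descriptors(desc_a, desc_b)) >= min_matches
```

In the flow above, `images_match` would be evaluated against each reference frame in turn until it succeeds or all reference frames are exhausted.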
Step 204: Obtain the spatial position information of the feature points of the frame image of the target object.
Here, in actual implementation, since the spatial position information corresponding to the feature points is stored in the feature database, when the object recognition succeeds, the spatial position information of the successfully matched feature points can be obtained from the constructed feature database based on those feature points. In one embodiment, when image matching is performed on the current frame image of the target object, it may happen that the image matching succeeds but some feature points of the current frame image are not successfully matched; for these feature points for which matching failed, the spatial position information can be obtained as follows: obtain the reference frame image in the feature database whose acquisition pose has the highest similarity to the acquisition pose of the image of the target object; perform projection-based matching between the image of the target object and that reference frame image, thereby obtaining the spatial position information of the feature points for which matching failed.
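The projection-based matching mentioned in Step 204 relies on projecting stored 3-D feature positions into the current image under a candidate acquisition pose; the projection itself can be sketched as follows. This assumes a pinhole camera with intrinsics K, which the embodiment does not specify:

```python
import numpy as np

def project_points(pts3d, R, t, K):
    """Project (N, 3) spatial feature positions into the image under the
    acquisition pose (R, t) with intrinsics K; returns (N, 2) pixel coords."""
    cam = R @ pts3d.T + t.reshape(3, 1)   # world frame -> camera frame
    uv = K @ cam                          # camera frame -> homogeneous pixels
    return (uv[:2] / uv[2]).T             # perspective division
```

Projected points that land near unmatched feature points of the current frame can then be associated with them, transferring the stored spatial positions.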
Step 205: Perform target tracking on the target object based on its frame images.
In actual implementation, target tracking of the target object can be realized as follows: the target object is identified in the continuously acquired frame images of the target object. For example, in the acquired sequential frame image sequence, if the target object has been identified in the previous frame image, feature point matching is performed between the current frame image and the previous frame image; when the number of successfully matched feature points exceeds a preset threshold (for example, 10), the object recognition of the target object in the current frame is regarded as successful. Proceeding in this way, target tracking of the target object is realized over the continuous frame image sequence.
Step 206: Determine the acquisition pose of the image of the target object based on the obtained spatial position information of the feature points.
In one embodiment, the acquisition pose (camera pose) of the image of the target object, i.e., the 6DoF posture of the camera that acquired the image, can be determined from the spatial position information of the obtained feature points using a PnP algorithm. In the same way, the acquisition pose (camera pose) corresponding to each frame image in the acquired sequential frame image sequence of the target object can be obtained, and a smoothing filter is applied to the obtained acquisition poses before output. Meanwhile, the frame image of the target object, together with its feature point information and acquisition pose, can be added to the feature database.
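Step 206's PnP computation can be illustrated with a direct linear transform (DLT) over normalised image coordinates. This is a minimal sketch under stated assumptions (at least six non-coplanar points, noise-free correspondences, pixels pre-multiplied by K^-1); a production system would typically use a robust PnP solver with outlier rejection instead:

```python
import numpy as np

def pnp_dlt(pts3d, pts2d_norm):
    """Estimate the acquisition pose (R, t) from >= 6 non-coplanar 3-D feature
    positions and their normalised image coordinates, via DLT."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d_norm):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    G = Vt[-1].reshape(3, 4)                                    # ~ s * [R | t]
    G *= 3.0 / np.linalg.svd(G[:, :3], compute_uv=False).sum()  # fix |s| = 1
    if (G[:, :3] @ pts3d[0] + G[:, 3])[2] < 0:                  # points sit in front
        G = -G
    U, _, Wt = np.linalg.svd(G[:, :3])                          # snap to rotation
    return U @ Wt, G[:, 3]
```

The recovered per-frame poses would then be passed through the smoothing filter mentioned above before being output or stored in the feature database.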
Step 207: According to the determined acquisition pose of the image, superimpose the virtual object at the preset position of the target object while the target object is displayed.
Step 208: Terminate this processing flow.
With the above embodiments of the present invention, the user does not need to provide a 3D model of the target object; it suffices to record a video around the object with a camera or other image acquisition device, from which a sparse 3D model and a feature database usable for recognizing the target object can be generated. The user does not need to paste any marker on the object, since recognition is based on the texture information of the object itself, which broadens the application scenarios. Based on the generated feature database containing the frame image information of the target object under each viewpoint, after object recognition of the target object, real-time target tracking of the target object can further be performed, and the superposition of virtual objects onto the real object can be realized based on the obtained acquisition poses.
Embodiment Three
As an alternative embodiment of the electronic device of the embodiments of the present invention, referring to Fig. 5, which is a schematic diagram of the composition of the electronic device of the embodiment of the present invention, the electronic device includes: a processor 21, a memory 22 and at least one external communication interface 23; the processor 21, the memory 22 and the external communication interface 23 are connected by a bus 24; wherein,
the memory 22 is configured to store an executable program;
the processor 21 is configured to implement the following when executing the executable program stored in the memory:
constructing a feature database, the feature database including feature point information of multiple frame images of a target object;
acquiring an image of the target object, and performing feature point information extraction on the acquired image;
matching the extracted feature point information with the feature point information of frame images in the feature database, so as to perform object recognition on the target object;
when the object recognition succeeds, obtaining the spatial position information of the extracted feature points;
determining the acquisition pose of the image of the target object based on the obtained spatial position information of the feature points.
In one embodiment, the processor 21 is further configured to acquire multiple frame images of the target object under different viewpoints;
perform feature point information extraction on the multiple frame images under the different viewpoints respectively;
perform feature point information matching on the extracted frame images under the different viewpoints to obtain matching information;
perform three-dimensional reconstruction of the target object based on the obtained matching information, to obtain a three-dimensional reconstruction result;
and construct the feature database based on the three-dimensional reconstruction result.
In one embodiment, the processor 21 is further configured to obtain the spatial position information of the reconstructed feature points based on the three-dimensional reconstruction result;
choose N frame images under different viewpoints as reference frame images, N being a positive integer greater than 1;
and construct the feature database including the reference frame images, the feature points in the reference frame images, and the spatial position information of the feature points in the reference frame images.
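The database entry described above (a reference frame image, its feature points, and their spatial positions) might be organised as in the following sketch; the field names, array shapes, and the inclusion of the acquisition pose (which Step 206 adds to the database) are assumptions made for illustration only:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ReferenceFrame:
    """One feature-database entry: a reference frame chosen under some viewpoint."""
    keypoints: np.ndarray    # (M, 2) pixel coordinates of its feature points
    descriptors: np.ndarray  # (M, 256) binary descriptors of those points
    points3d: np.ndarray     # (M, 3) reconstructed spatial positions
    pose: np.ndarray         # (3, 4) acquisition pose [R | t] of this frame

def build_database(reference_frames):
    """The feature database is then simply the set of N chosen reference frames."""
    return list(reference_frames)
```

Recognition then matches an incoming frame's descriptors against each entry's `descriptors`, and successful matches inherit the corresponding `points3d` entries.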
In one embodiment, the processor 21 is further configured to match the feature points of each frame image in the feature database with the extracted feature points;
when it is determined that the number of successfully matched feature points exceeds a preset threshold, the object recognition of the target object is characterized as successful.
In one embodiment, the processor 21 is further configured to, when the object recognition succeeds, identify the target object in the continuously acquired frame images of the target object, thereby realizing target tracking of the target object.
In one embodiment, the processor 21 is further configured to, according to the acquisition pose of the corresponding image of the target object, superimpose the virtual object at the preset position of the target object while the target object is displayed.
In one embodiment, the processor 21 is further configured to, in response to a successful object recognition of the target object with feature points for which matching failed existing among the extracted feature points,
obtain the frame image in the feature database whose corresponding acquisition pose has the highest similarity to the acquisition pose of the image of the target object;
and perform projection-based matching between the image of the target object and the obtained image, thereby obtaining the spatial position information of the feature points for which matching failed.
It should be noted that the electronic device provided in the above embodiment and the information processing method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which will not be repeated here. For technical details not disclosed in the electronic device embodiment of the present invention, refer to the description of the method embodiments of the present invention.
Based on the above description of the information processing method and the electronic device, an embodiment of the present invention further provides a storage medium on which computer instructions are stored, the instructions, when executed by a processor, implementing:
constructing a feature database, the feature database including feature point information of multiple frame images of a target object;
acquiring an image of the target object, and performing feature point information extraction on the acquired image;
matching the extracted feature point information with the feature point information of frame images in the feature database, so as to perform object recognition on the target object;
when the object recognition succeeds, obtaining the spatial position information of the extracted feature points;
determining the acquisition pose of the image of the target object based on the obtained spatial position information of the feature points.
In one embodiment, the above instructions, when executed by a processor, further implement:
acquiring multiple frame images of the target object under different viewpoints;
performing feature point information extraction on the multiple frame images under the different viewpoints respectively;
performing feature point information matching on the extracted frame images under the different viewpoints to obtain matching information;
performing three-dimensional reconstruction of the target object based on the obtained matching information, to obtain a three-dimensional reconstruction result;
constructing the feature database based on the three-dimensional reconstruction result.
In one embodiment, the above instructions, when executed by a processor, further implement:
obtaining the spatial position information of the reconstructed feature points based on the three-dimensional reconstruction result;
choosing N frame images under different viewpoints as reference frame images, N being a positive integer greater than 1;
constructing the feature database including the reference frame images, the feature points in the reference frame images, and the spatial position information of the feature points in the reference frame images.
In one embodiment, the above instructions, when executed by a processor, further implement:
matching the feature points of each frame image in the feature database with the extracted feature points;
when it is determined that the number of successfully matched feature points exceeds a preset threshold, characterizing the object recognition of the target object as successful.
In one embodiment, the above instructions, when executed by a processor, further implement:
when the object recognition succeeds, identifying the target object in the continuously acquired frame images of the target object, thereby realizing target tracking of the target object.
In one embodiment, the above instructions, when executed by a processor, further implement:
according to the acquisition pose of the corresponding image of the target object, superimposing the virtual object at the preset position of the target object while the target object is displayed.
In one embodiment, the above instructions, when executed by a processor, further implement:
in response to a successful object recognition of the target object, with feature points for which matching failed existing among the extracted feature points,
obtaining the frame image in the feature database whose corresponding acquisition pose has the highest similarity to the acquisition pose of the image of the target object;
performing projection-based matching between the image of the target object and the obtained image, to obtain the spatial position information of the feature points for which matching failed.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions; the foregoing program can be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the foregoing storage medium includes various media capable of storing program code, such as a removable storage device, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disc.
Alternatively, if the above integrated unit of the present invention is realized in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention, in essence, or the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods of the embodiments of the present invention. The foregoing storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a RAM, a magnetic disk or an optical disc.
The above are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any person familiar with the technical field can readily conceive of changes or replacements within the technical scope disclosed by the present invention, and these shall all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be based on the protection scope of the claims.
Claims (10)
1. An information processing method, characterized in that the method includes:
constructing a feature database, the feature database including feature point information of multiple frame images of a target object;
acquiring an image of the target object, and performing feature point information extraction on the acquired image;
matching the extracted feature point information with the feature point information of frame images in the feature database, so as to perform object recognition on the target object;
when the object recognition succeeds, obtaining spatial position information of the extracted feature points;
determining an acquisition pose of the image of the target object based on the obtained spatial position information of the feature points.
2. The method according to claim 1, characterized in that constructing the feature database includes:
acquiring multiple frame images of the target object under different viewpoints;
performing feature point information extraction on the multiple frame images under the different viewpoints respectively;
performing feature point information matching on the extracted frame images under the different viewpoints to obtain matching information;
performing three-dimensional reconstruction of the target object based on the obtained matching information, to obtain a three-dimensional reconstruction result;
constructing the feature database based on the three-dimensional reconstruction result.
3. The method according to claim 2, characterized in that constructing the feature database based on the three-dimensional reconstruction result includes:
obtaining the spatial position information of the reconstructed feature points based on the three-dimensional reconstruction result;
choosing N frame images under different viewpoints as reference frame images, N being a positive integer greater than 1;
constructing the feature database including the reference frame images, the feature points in the reference frame images, and the spatial position information of the feature points in the reference frame images.
4. The method according to claim 1, characterized in that matching the extracted feature point information with the feature point information of frame images in the feature database to perform object recognition on the target object includes:
matching the feature points of each frame image in the feature database with the extracted feature points;
when it is determined that the number of successfully matched feature points exceeds a preset threshold, characterizing the object recognition of the target object as successful.
5. The method according to claim 1, characterized in that the method further includes:
when the object recognition succeeds, identifying the target object in the continuously acquired frame images of the target object, thereby realizing target tracking of the target object.
6. The method according to claim 1, characterized in that the method further includes:
according to the acquisition pose of the corresponding image of the target object, superimposing a virtual object at a preset position of the target object while the target object is displayed.
7. The method according to claim 1, characterized in that the method further includes:
in response to a successful object recognition of the target object, with feature points for which matching failed existing among the extracted feature points,
obtaining the frame image in the feature database whose corresponding acquisition pose has the highest similarity to the acquisition pose of the image of the target object;
performing projection-based matching between the image of the target object and the obtained image, to obtain the spatial position information of the feature points for which matching failed.
8. An electronic device, characterized in that the electronic device includes:
a memory, configured to store an executable program;
a processor, configured to implement the following when executing the executable program stored in the memory:
constructing a feature database, the feature database including feature point information of multiple frame images of a target object;
acquiring an image of the target object, and performing feature point information extraction on the acquired image;
matching the extracted feature point information with the feature point information of frame images in the feature database, so as to perform object recognition on the target object;
when the object recognition succeeds, obtaining spatial position information of the extracted feature points;
determining an acquisition pose of the image of the target object based on the obtained spatial position information of the feature points.
9. The electronic device according to claim 8, characterized in that
the processor is further configured to match the feature points of each frame image in the feature database with the extracted feature points;
when it is determined that the number of successfully matched feature points exceeds a preset threshold, the object recognition of the target object is characterized as successful.
10. The electronic device according to claim 8, characterized in that
the processor is further configured to, in response to a successful object recognition of the target object with feature points for which matching failed existing among the extracted feature points,
obtain the frame image in the feature database whose corresponding acquisition pose has the highest similarity to the acquisition pose of the image of the target object;
and perform projection-based matching between the image of the target object and the obtained image, to obtain the spatial position information of the feature points for which matching failed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711299204.XA CN108108748A (en) | 2017-12-08 | 2017-12-08 | A kind of information processing method and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108108748A true CN108108748A (en) | 2018-06-01 |
Family
ID=62208206
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711299204.XA Pending CN108108748A (en) | 2017-12-08 | 2017-12-08 | A kind of information processing method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108108748A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101719286A (en) * | 2009-12-09 | 2010-06-02 | 北京大学 | Multiple viewpoints three-dimensional scene reconstructing method fusing single viewpoint scenario analysis and system thereof |
CN103177468A (en) * | 2013-03-29 | 2013-06-26 | 渤海大学 | Three-dimensional motion object augmented reality registration method based on no marks |
US8798357B2 (en) * | 2012-07-09 | 2014-08-05 | Microsoft Corporation | Image-based localization |
CN104463108A (en) * | 2014-11-21 | 2015-03-25 | 山东大学 | Monocular real-time target recognition and pose measurement method |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109074757A (en) * | 2018-07-03 | 2018-12-21 | 深圳前海达闼云端智能科技有限公司 | Method, terminal and computer readable storage medium for establishing map |
CN109582147B (en) * | 2018-08-08 | 2022-04-26 | 亮风台(上海)信息科技有限公司 | Method for presenting enhanced interactive content and user equipment |
CN109582147A (en) * | 2018-08-08 | 2019-04-05 | 亮风台(上海)信息科技有限公司 | A kind of method and user equipment enhancing interaction content for rendering |
CN109656364B (en) * | 2018-08-15 | 2022-03-29 | 亮风台(上海)信息科技有限公司 | Method and device for presenting augmented reality content on user equipment |
CN109656364A (en) * | 2018-08-15 | 2019-04-19 | 亮风台(上海)信息科技有限公司 | It is a kind of for the method and apparatus of augmented reality content to be presented on a user device |
CN109656363A (en) * | 2018-09-04 | 2019-04-19 | 亮风台(上海)信息科技有限公司 | It is a kind of for be arranged enhancing interaction content method and apparatus |
CN109584377A (en) * | 2018-09-04 | 2019-04-05 | 亮风台(上海)信息科技有限公司 | A kind of method and apparatus of the content of augmented reality for rendering |
CN109656363B (en) * | 2018-09-04 | 2022-04-15 | 亮风台(上海)信息科技有限公司 | Method and equipment for setting enhanced interactive content |
CN109584377B (en) * | 2018-09-04 | 2023-08-29 | 亮风台(上海)信息科技有限公司 | Method and device for presenting augmented reality content |
CN109272438A (en) * | 2018-09-05 | 2019-01-25 | 联想(北京)有限公司 | A kind of data capture method, electronic equipment and computer-readable storage media |
CN110956644A (en) * | 2018-09-27 | 2020-04-03 | 杭州海康威视数字技术股份有限公司 | Motion trail determination method and system |
CN110956644B (en) * | 2018-09-27 | 2023-10-10 | 杭州海康威视数字技术股份有限公司 | Motion trail determination method and system |
CN109559347A (en) * | 2018-11-28 | 2019-04-02 | 中南大学 | Object identifying method, device, system and storage medium |
CN109657573A (en) * | 2018-12-04 | 2019-04-19 | 联想(北京)有限公司 | Image-recognizing method and device and electronic equipment |
CN110246163B (en) * | 2019-05-17 | 2023-06-23 | 联想(上海)信息技术有限公司 | Image processing method, image processing device, image processing apparatus, and computer storage medium |
CN110246163A (en) * | 2019-05-17 | 2019-09-17 | 联想(上海)信息技术有限公司 | Image processing method and its device, equipment, computer storage medium |
CN110404202A (en) * | 2019-06-28 | 2019-11-05 | 北京市政建设集团有限责任公司 | The detection method and device of aerial work safety belt, aerial work safety belt |
CN110428468A (en) * | 2019-08-12 | 2019-11-08 | 北京字节跳动网络技术有限公司 | A kind of the position coordinates generation system and method for wearable display equipment |
CN110728245A (en) * | 2019-10-17 | 2020-01-24 | 珠海格力电器股份有限公司 | Optimization method and device for VSLAM front-end processing, electronic equipment and storage medium |
CN111311758A (en) * | 2020-02-24 | 2020-06-19 | Oppo广东移动通信有限公司 | Augmented reality processing method and device, storage medium and electronic equipment |
CN111457886B (en) * | 2020-04-01 | 2022-06-21 | 北京迈格威科技有限公司 | Distance determination method, device and system |
CN111457886A (en) * | 2020-04-01 | 2020-07-28 | 北京迈格威科技有限公司 | Distance determination method, device and system |
CN111882590A (en) * | 2020-06-24 | 2020-11-03 | 广州万维创新科技有限公司 | AR scene application method based on single picture positioning |
CN112288878A (en) * | 2020-10-29 | 2021-01-29 | 字节跳动有限公司 | Augmented reality preview method and preview device, electronic device and storage medium |
CN112288878B (en) * | 2020-10-29 | 2024-01-26 | 字节跳动有限公司 | Augmented reality preview method and preview device, electronic equipment and storage medium |
WO2022267781A1 (en) * | 2021-06-26 | 2022-12-29 | 华为技术有限公司 | Modeling method and related electronic device, and storage medium |
CN113657164A (en) * | 2021-07-15 | 2021-11-16 | 美智纵横科技有限责任公司 | Method and device for calibrating target object, cleaning equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108108748A (en) | A kind of information processing method and electronic equipment | |
CN111243093B (en) | Three-dimensional face grid generation method, device, equipment and storage medium | |
Zhang et al. | Object-occluded human shape and pose estimation from a single color image | |
Monroy et al. | Salnet360: Saliency maps for omni-directional images with cnn | |
Chen et al. | Cartoongan: Generative adversarial networks for photo cartoonization | |
Matsuyama et al. | Real-time 3D shape reconstruction, dynamic 3D mesh deformation, and high fidelity visualization for 3D video | |
US8866845B2 (en) | Robust object recognition by dynamic modeling in augmented reality | |
CN110648397B (en) | Scene map generation method and device, storage medium and electronic equipment | |
EP3998547A1 (en) | Line-of-sight detection method and apparatus, video processing method and apparatus, and device and storage medium | |
US20030095701A1 (en) | Automatic sketch generation | |
CN107329962B (en) | Image retrieval database generation method, and method and device for enhancing reality | |
US20120139899A1 (en) | Semantic Rigging of Avatars | |
CN109410316A (en) | Method, tracking, relevant apparatus and the storage medium of the three-dimensional reconstruction of object | |
WO2020014294A1 (en) | Learning to segment via cut-and-paste | |
CN113593001A (en) | Target object three-dimensional reconstruction method and device, computer equipment and storage medium | |
CN116843834A (en) | Three-dimensional face reconstruction and six-degree-of-freedom pose estimation method, device and equipment | |
CN108109164A (en) | A kind of information processing method and electronic equipment | |
Kampelmuhler et al. | Synthesizing human-like sketches from natural images using a conditional convolutional decoder | |
CN108268863A (en) | A kind of image processing method, device and computer storage media | |
RU2755396C1 (en) | Neural network transfer of the facial expression and position of the head using hidden position descriptors | |
Ling et al. | Human object inpainting using manifold learning-based posture sequence estimation | |
CN113570615A (en) | Image processing method based on deep learning, electronic equipment and storage medium | |
US11410398B2 (en) | Augmenting live images of a scene for occlusion | |
JP2005523488A (en) | Automatic 3D modeling system and method | |
Blažević et al. | Towards reversible de-identification in video sequences using 3d avatars and steganography |
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180601 |