CN104424635A - Information processing method, system and equipment - Google Patents

Information processing method, system and equipment

Info

Publication number
CN104424635A
Authority
CN
China
Prior art keywords
image
electronic device
position information
feature point
spatial position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310389812.5A
Other languages
Chinese (zh)
Inventor
李南君
刘国良
申浩
张贺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201310389812.5A
Publication of CN104424635A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Abstract

The invention discloses an information processing method, system and device. A method applied to a first electronic device comprises: receiving and saving a first image uploaded by a second electronic device; building a three-dimensional coordinate map, performing feature point extraction on the first image based on the built three-dimensional coordinate map, analyzing and saving feature point description information of the corresponding feature points, and determining and saving the spatial position information of the extracted feature points in the three-dimensional coordinate map; and building a feature point dataset from the feature point description information analyzed and saved for a first quantity of first images and the determined and saved spatial position information of those feature points. With the disclosed methods, systems and devices, image-based spatial positioning of electronic devices is realized.

Description

Information processing method, system and device
Technical field
The present invention relates to the field of image-based positioning technology, and in particular to an image information processing method, system and device.
Background art
Positioning technology is now widely used in people's daily work and life, but the accuracy of most mature positioning technologies is not high. Image-based positioning is a current research direction, and how to achieve higher-accuracy positioning through the processing of image information is a problem that urgently needs to be solved.
Summary of the invention
In view of this, the main purpose of the present invention is to provide an image information processing method, system and device that at least achieve image-based spatial positioning of an electronic device.
To achieve the above object, the technical solution of the present invention is implemented as follows:
An image information processing method applied to a first electronic device, the method comprising:
receiving and saving a first image uploaded by a second electronic device;
building a three-dimensional coordinate map, performing feature point extraction on the first image based on the built three-dimensional coordinate map, analyzing and saving the feature point description information of the corresponding feature points, and determining and saving the spatial position information of the extracted feature points in the three-dimensional coordinate map;
building a feature point dataset from the feature point description information analyzed and saved for a first quantity of first images and the determined and saved spatial position information of those feature points.
An image information processing method applied to a second electronic device, the method comprising:
collecting a first image of the current scene and uploading the first image to a first electronic device, the first image being an image on which the first electronic device performs feature point extraction.
An image information processing method applied to a third electronic device, the method comprising:
sending a positioning request to a first electronic device, the positioning request carrying a second image, or carrying the second image and the depth information of selected feature points in the second image;
receiving the first spatial position information that the first electronic device determines according to the second image.
A first electronic device, comprising:
a first receiving module, configured to receive a first image uploaded by a second electronic device;
a processing module, configured to build a three-dimensional coordinate map, perform feature point extraction on the first image based on the built three-dimensional coordinate map, analyze and save the feature point description information of the corresponding feature points, determine and save the spatial position information of the extracted feature points in the three-dimensional coordinate map, and build a feature point dataset from the feature point description information analyzed and saved for a first quantity of first images and the determined and saved spatial position information of those feature points;
a saving module, configured to save the first image and the feature point dataset.
A second electronic device, comprising:
a collecting module, configured to collect a first image of the current scene;
an uploading module, configured to upload the first image to a first electronic device, the first image being an image on which the first electronic device performs feature point extraction.
A third electronic device, comprising:
a sending module, configured to send a positioning request to a first electronic device, the positioning request carrying a second image, or carrying the second image and the depth information of selected feature points in the second image;
a second receiving module, configured to receive the first spatial position information that the first electronic device determines according to the second image.
An image information processing system, comprising the above first electronic device, second electronic device and third electronic device.
The image information processing method, system and devices provided by the present invention achieve spatial positioning of an electronic device based on the images it uploads, with relatively high positioning accuracy.
Brief description of the drawings
Fig. 1 is a first flowchart of an image information processing method according to an embodiment of the present invention;
Fig. 2 is a second flowchart of an image information processing method according to an embodiment of the present invention;
Fig. 3 is a third flowchart of an image information processing method according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a first electronic device according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a second electronic device according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a third electronic device according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an image information processing system according to an embodiment of the present invention.
Detailed description
The technical solution of the present invention is elaborated below with reference to the drawings and specific embodiments.
To achieve image-based spatial positioning of an electronic device, an embodiment of the present invention provides an image information processing method applied to a first electronic device. As shown in Fig. 1, the method comprises:
Step 101: receive and save a first image uploaded by a second electronic device.
Since the method is applied in the first electronic device, the subject performing step 101 is the first electronic device, so step 101 can also be stated as: the first electronic device receives and saves the first image uploaded by the second electronic device. The first image may be a 2D image or a 3D image.
Step 102: build a three-dimensional coordinate map, perform feature point extraction on the first image based on the built three-dimensional coordinate map, analyze and save the feature point description information of the corresponding feature points, and determine and save the spatial position information of the extracted feature points in the three-dimensional coordinate map.
Since the method is applied in the first electronic device, the subject performing step 102 is the first electronic device, so step 102 can also be stated as: the first electronic device builds a three-dimensional coordinate map, performs feature point extraction on the first image based on the built three-dimensional coordinate map, analyzes and saves the feature point description information of the corresponding feature points, and determines and saves the spatial position information of the extracted feature points in the three-dimensional coordinate map.
Feature point extraction on the first image may use one of the following approaches: extraction based on the Speeded Up Robust Features (SURF) algorithm, extraction based on the Scale-Invariant Feature Transform (SIFT) algorithm, or extraction based on the Harris algorithm. The feature point description information includes, among other things, the RGB color information corresponding to the feature point. The spatial position information of an extracted feature point in the three-dimensional coordinate map comprises the (x, y, z) coordinates of that feature point in the map.
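As an illustration of step 102, the following is a minimal sketch (not from the patent) of how the feature point description information and the (x, y, z) spatial position information could be obtained with OpenCV, assuming the first image comes with a registered depth map and that the pinhole intrinsics fx, fy, cx, cy of the collecting camera are known; SIFT is used here, although the text equally allows SURF or the Harris detector. Placing the points in the map frame would additionally require the camera pose, which is omitted.

```python
import cv2
import numpy as np

def extract_features(image_bgr, depth_map, fx, fy, cx, cy):
    """Extract feature points from a first image and build, for each point, a
    record holding its description information (SIFT descriptor plus RGB colour)
    and its (x, y, z) position derived from the depth map."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()                    # SURF or Harris are alternatives
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None:
        return []

    records = []
    for kp, desc in zip(keypoints, descriptors):
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        z = float(depth_map[v, u])              # depth at the feature point
        if z <= 0:                              # no valid depth: skip the point
            continue
        # back-project the pixel with the pinhole model (camera frame)
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        b, g, r = image_bgr[v, u]
        records.append({
            "descriptor": desc,                 # feature point description info
            "rgb": (int(r), int(g), int(b)),    # RGB colour of the feature point
            "xyz": (x, y, z),                   # spatial position information
        })
    return records
```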
Step 103: build a feature point dataset from the feature point description information analyzed and saved for a first quantity of first images and the determined and saved spatial position information of those feature points.
Since the method is applied in the first electronic device, the subject performing step 103 is the first electronic device, so step 103 can also be stated as: the first electronic device builds the feature point dataset from the feature point description information analyzed and saved for the first quantity of first images and the determined and saved spatial position information of those feature points.
The first quantity is set according to actual needs; that is, a certain number of first images collected by the second electronic device form a sampling set. In this embodiment, each image in the sampling set is analyzed to obtain and save the feature point description information of the corresponding feature points, and the spatial position information of the extracted feature points in the three-dimensional coordinate map is determined and saved; the feature point dataset is then built from the feature point description information and spatial position information corresponding to the feature points of all the images. Preferably, the feature point dataset can be built as an octree map (octomap), each node of which saves the description information, the spatial position information, and pointers to adjacent feature points of the corresponding feature point.
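The octree map itself is only named, not specified. The sketch below is a hypothetical stand-in that approximates an octomap with a voxel hash: each node keeps the description information, the spatial position, and pointers to adjacent feature points, as the paragraph above describes. It is not the API of the octomap library.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureNode:
    descriptor: object                       # feature point description information
    rgb: tuple                               # RGB colour of the feature point
    xyz: tuple                               # spatial position in the 3D map
    neighbours: list = field(default_factory=list)   # pointers to adjacent nodes

class FeaturePointDataset:
    """Voxel-indexed store of feature nodes, standing in for an octree map."""
    def __init__(self, voxel_size=0.1):
        self.voxel_size = voxel_size
        self.voxels = {}                     # voxel index -> list of FeatureNode

    def _key(self, xyz):
        return tuple(int(c // self.voxel_size) for c in xyz)

    def insert(self, node):
        bucket = self.voxels.setdefault(self._key(node.xyz), [])
        for other in bucket:                 # link spatially adjacent feature points
            node.neighbours.append(other)
            other.neighbours.append(node)
        bucket.append(node)

    def nodes(self):
        for bucket in self.voxels.values():
            yield from bucket
```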
As a preferred embodiment of the present invention, the first electronic device also receives, uploaded by the second electronic device together with the first image, the depth information of selected feature points in the first image.
Accordingly, after performing feature point extraction on the first image and obtaining the spatial position information of the feature points, the first electronic device calculates, from the depth information of the selected feature points and the spatial position information of the corresponding selected feature points, the spatial position information of the second electronic device at the time it collected the first image, and saves it. The spatial position information of a feature point refers to its (x, y, z) coordinates in the three-dimensional coordinate map.
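The patent does not say how the collecting device's position is computed from the depth of the selected feature points and their spatial positions. One plausible reading is a range-based (trilateration) solution, sketched below with NumPy under the assumptions that each depth value is a Euclidean range to the corresponding feature point (a z-depth would first need converting using the pixel coordinates) and that at least four well-spread feature points are selected.

```python
import numpy as np

def locate_camera(points_xyz, depths):
    """Estimate the position of the collecting device in the 3D coordinate map
    from the spatial positions of selected feature points and their depths,
    treating each depth as a range and solving the resulting sphere equations
    by linear least squares (at least four non-coplanar points are needed)."""
    p = np.asarray(points_xyz, dtype=float)    # (N, 3) feature point positions
    d = np.asarray(depths, dtype=float)        # (N,)  range to each feature point
    p0, d0 = p[0], d[0]
    # subtracting the first sphere equation from the others linearises the system
    A = 2.0 * (p[1:] - p0)
    b = np.sum(p[1:] ** 2, axis=1) - np.sum(p0 ** 2) - (d[1:] ** 2 - d0 ** 2)
    centre, *_ = np.linalg.lstsq(A, b, rcond=None)
    return centre                              # (x, y, z) of the device
```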
Preferably, the method further comprises:
the first electronic device receives a positioning request from a third electronic device, the positioning request carrying a second image;
the first electronic device performs feature point extraction on the second image, obtains the feature point description information of the corresponding feature points, and searches the feature point dataset using the obtained description information; when the corresponding first image can be matched by searching the feature point dataset, the saved spatial position information corresponding to that first image is sent to the third electronic device as the first spatial position information of the third electronic device.
In one embodiment of the present invention, when the first electronic device cannot match a corresponding first image by searching the feature point dataset, it obtains the matched feature points and the spatial position information of those matched feature points by searching the feature point dataset; it selects feature points among the matched feature points of the second image and performs depth analysis on them to obtain their depth information; and it calculates the first spatial position information of the third electronic device from the depth information of the selected feature points and the spatial position information of the selected feature points, and sends it to the third electronic device. The first spatial position information of the third electronic device refers to the (x, y, z) coordinates of the third electronic device in the three-dimensional coordinate map.
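Putting the matching steps together, the hypothetical sketch below shows how the first electronic device might answer a positioning request that carries only the second image: extract descriptors, match them against the feature point dataset, run a depth analysis on the matched feature points, and trilaterate the third device's first spatial position with the locate_camera helper from the earlier sketch. The flattened dataset_descriptors / dataset_xyz arrays and the estimate_depth callable are assumptions for illustration, not parts of the patent.

```python
import cv2
import numpy as np

def answer_positioning_request(second_image_bgr, dataset_descriptors, dataset_xyz,
                               estimate_depth, min_matches=4):
    """Match the second image against the feature point dataset (descriptors as a
    float32 array parallel to dataset_xyz), run a depth analysis on the matched
    feature points, and trilaterate the third device's first spatial position."""
    gray = cv2.cvtColor(second_image_bgr, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = cv2.SIFT_create().detectAndCompute(gray, None)
    if descriptors is None:
        return None                                # nothing to match
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(descriptors, dataset_descriptors)
    matches = sorted(matches, key=lambda m: m.distance)[:50]
    if len(matches) < min_matches:
        return None                                # request cannot be answered
    points_xyz = [dataset_xyz[m.trainIdx] for m in matches]            # map positions
    depths = [estimate_depth(keypoints[m.queryIdx]) for m in matches]  # depth analysis
    return locate_camera(points_xyz, depths)       # first spatial position
```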
In another embodiment of the present invention, the positioning request also carries the depth information of selected feature points in the second image. When the first electronic device cannot match a corresponding first image by searching the feature point dataset, it obtains the matched feature points and the spatial position information of those matched feature points by searching the feature point dataset; it then calculates the first spatial position information of the third electronic device from the spatial position information of the feature points that match the selected feature points and the depth information of the selected feature points, and sends it to the third electronic device.
Preferably, the method further comprises: the first electronic device receives and saves second spatial position information uploaded by the second electronic device together with the first image, the second spatial position information characterizing the spatial position of the second electronic device at the time it collected the first image;
accordingly, after obtaining the first spatial position information of the third electronic device, the first electronic device performs weighting processing on the first spatial position information and the second spatial position information, and sends the third spatial position information obtained after the weighting processing to the third electronic device.
Here, the second spatial position information is, for example, Global Positioning System (GPS) position information that the second electronic device obtains from the network side when collecting the first image. The error of GPS position information is usually relatively large, but the range of the error value tends to be stable. The first spatial position information calculated in the embodiment of the present invention from the spatial position information of the feature points and their depth information also has an error that varies with the depth of those feature points: when the depth of a feature point is greater than a certain threshold, the error of the calculated first spatial position information can exceed the error of the GPS position information. Therefore, as a preferred embodiment, the first spatial position information and the second spatial position information are combined by weighting. For example, when the depth information is greater than a set threshold, the weight of the first spatial position information is set to 0 and the GPS position information (i.e. the second spatial position information) is sent to the third electronic device as the third spatial position information; when the depth information is less than the set threshold, the weight of the GPS position information is set to 0 and the first spatial position information is sent to the third electronic device as the third spatial position information.
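Restated as code, the all-or-nothing weighting rule given in the example above; the general scheme could of course use fractional weights. It assumes the first and second spatial position information have already been expressed in a common coordinate frame, which the description does not detail.

```python
def fuse_positions(first_position, second_position, depth, depth_threshold):
    """Weighted combination of the image-based first spatial position and the
    GPS-based second spatial position, using the all-or-nothing weights given
    in the example: beyond the depth threshold trust GPS, below it trust the
    image-based estimate. The result is the third spatial position."""
    w_visual = 0.0 if depth > depth_threshold else 1.0
    w_gps = 1.0 - w_visual
    return tuple(w_visual * a + w_gps * b
                 for a, b in zip(first_position, second_position))
```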
An embodiment of the present invention also provides an image information processing method applied to a second electronic device. As shown in Fig. 2, the method comprises:
Step 201: collect a first image of the current scene.
Since the method is applied in the second electronic device, the subject performing step 201 is the second electronic device, so step 201 can also be stated as: the second electronic device collects a first image of the current scene.
Step 202: upload the first image to a first electronic device, the first image being an image on which the first electronic device performs feature point extraction.
Since the method is applied in the second electronic device, the subject performing step 202 is the second electronic device, so step 202 can also be stated as: the second electronic device uploads the first image to the first electronic device, the first image being an image on which the first electronic device performs feature point extraction.
Preferably, the method further comprises: the second electronic device uploads, together with the first image, the depth information of selected feature points in the first image, so that, after performing feature point extraction on the first image and obtaining the spatial position information of the feature points, the first electronic device calculates and saves, from the depth information of the selected feature points and the spatial position information of the corresponding selected feature points, the spatial position information of the second electronic device at the time it collected the first image.
Preferably, the method further comprises: the second electronic device uploads second spatial position information together with the first image, the second spatial position information characterizing the spatial position of the second electronic device at the time it collected the first image;
accordingly, after obtaining the first spatial position information of a third electronic device, the first electronic device can perform weighting processing on the first spatial position information and the second spatial position information, and send the third spatial position information obtained after the weighting processing to the third electronic device.
A corresponding embodiment also provides an image information processing method applied to a third electronic device. As shown in Fig. 3, the method comprises:
Step 301: send a positioning request to a first electronic device, the positioning request carrying a second image, or carrying the second image and the depth information of selected feature points in the second image.
Since the method is applied in the third electronic device, the subject performing step 301 is the third electronic device, so step 301 can also be stated as: the third electronic device sends a positioning request to the first electronic device, the positioning request carrying a second image, or carrying the second image and the depth information of selected feature points in the second image.
When the positioning request carries only the second image, the first electronic device performs feature point extraction on the second image, obtains the feature point description information of the corresponding feature points, and searches the feature point dataset using the obtained description information; when the corresponding first image can be matched by searching the feature point dataset, the saved spatial position information corresponding to that first image is sent to the third electronic device as the first spatial position information of the third electronic device.
When the corresponding first image cannot be matched by searching the feature point dataset, the first electronic device obtains the matched feature points and the spatial position information of those matched feature points by searching the feature point dataset, selects feature points among the matched feature points of the second image and performs depth analysis on them to obtain their depth information, and then calculates the first spatial position information of the third electronic device from the depth information of the selected feature points and the spatial position information of the selected feature points, and sends it to the third electronic device.
When the positioning request carries the second image and the depth information of selected feature points in the second image, the first electronic device likewise performs feature point extraction on the second image, obtains the feature point description information of the corresponding feature points, and searches the feature point dataset using the obtained description information; when the corresponding first image can be matched by searching the feature point dataset, the saved spatial position information corresponding to that first image is sent to the third electronic device as the first spatial position information of the third electronic device.
When the corresponding first image cannot be matched by searching the feature point dataset, the first electronic device obtains the matched feature points and the spatial position information of those matched feature points by searching the feature point dataset, and then calculates the first spatial position information of the third electronic device from the spatial position information of the feature points that match the selected feature points and the depth information of the selected feature points, and sends it to the third electronic device.
Step 302: receive the spatial position information that the first electronic device determines according to the second image.
Since the method is applied in the third electronic device, the subject performing step 302 is the third electronic device, so step 302 can also be stated as: the third electronic device receives the spatial position information that the first electronic device determines according to the second image. It should be noted that when the first electronic device does not perform the weighting processing described above, the spatial position information it determines according to the second image is the first spatial position information; when the first electronic device does perform the weighting processing, the spatial position information it determines according to the second image is the third spatial position information.
Corresponding to the image information processing method implemented on the first electronic device side, an embodiment of the present invention also provides a first electronic device. As shown in Fig. 4, the device comprises:
a first receiving module 11, configured to receive a first image uploaded by a second electronic device;
a processing module 12, configured to build a three-dimensional coordinate map, perform feature point extraction on the first image based on the built three-dimensional coordinate map, analyze and save the feature point description information of the corresponding feature points, determine and save the spatial position information of the extracted feature points in the three-dimensional coordinate map, and build a feature point dataset from the feature point description information analyzed and saved for a first quantity of first images and the determined and saved spatial position information of those feature points;
a saving module 13, configured to save the first image and the feature point dataset.
Preferably, the first receiving module 11 is further configured to receive, uploaded by the second electronic device together with the first image, the depth information of selected feature points in the first image;
accordingly, the processing module 12 is further configured to, after performing feature point extraction on the first image and obtaining the spatial position information of the feature points, calculate, from the depth information of the selected feature points and the spatial position information of the corresponding selected feature points, the spatial position information of the second electronic device at the time it collected the first image;
and the saving module 13 is further configured to save the spatial position information of the second electronic device at the time it collected the first image.
Preferably, the first receiving module 11 is further configured to receive a positioning request from a third electronic device, the positioning request carrying a second image;
accordingly, the processing module 12 is further configured to perform feature point extraction on the second image, obtain the feature point description information of the corresponding feature points, and search the feature point dataset using the obtained description information; when the corresponding first image can be matched by searching the feature point dataset, the saved spatial position information corresponding to that first image is sent to the third electronic device as the first spatial position information of the third electronic device.
Preferably, the processing module 12 is further configured to, when the corresponding first image cannot be matched by searching the feature point dataset, obtain the matched feature points and the spatial position information of those matched feature points by searching the feature point dataset; select feature points among the matched feature points of the second image and perform depth analysis on them to obtain their depth information; and calculate the first spatial position information of the third electronic device from the depth information of the selected feature points and the spatial position information of the selected feature points and send it to the third electronic device.
Preferably, the positioning request also carries the depth information of selected feature points in the second image, and
the processing module 12 is further configured to, when the corresponding first image cannot be matched by searching the feature point dataset, obtain the matched feature points and the spatial position information of those matched feature points by searching the feature point dataset, and calculate the first spatial position information of the third electronic device from the spatial position information of the feature points that match the selected feature points and the depth information of the selected feature points and send it to the third electronic device.
Preferably, the first receiving module 11 is further configured to receive and save second spatial position information uploaded by the second electronic device together with the first image, the second spatial position information characterizing the spatial position of the second electronic device at the time it collected the first image;
accordingly, the processing module 12 is further configured to, after obtaining the first spatial position information of the third electronic device, perform weighting processing on the first spatial position information and the second spatial position information, and send the third spatial position information obtained after the weighting processing to the third electronic device.
Preferably, the feature point extraction uses one of the following approaches: extraction based on the SURF algorithm, extraction based on the SIFT algorithm, or extraction based on the Harris algorithm.
Corresponding to the image information processing method implemented on the second electronic device side, an embodiment of the present invention also provides a second electronic device. As shown in Fig. 5, the device comprises:
a collecting module 21, configured to collect a first image of the current scene;
an uploading module 22, configured to upload the first image to a first electronic device, the first image being an image on which the first electronic device performs feature point extraction.
Preferably, the uploading module 22 is further configured to upload, together with the first image, the depth information of selected feature points in the first image; or
to upload, together with the first image, second spatial position information characterizing the spatial position of the second electronic device at the time it collected the first image.
Preferably, the second electronic device may further comprise an image processing module 23, configured to apply a first processing to the collected first image after the collecting module collects the first image of the current scene;
accordingly, the uploading module 22 is further configured to upload the first image processed by the image processing module 23 to the first electronic device. The first processing may be Kalman filtering to improve image quality, or image compression based on the I frames and P frames of the image to save storage space.
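The "first processing" is only named, not specified. As one illustration, a minimal per-pixel temporal Kalman filter of the kind that could denoise the first image before uploading is sketched below; the noise parameters are illustrative and not taken from the patent. I-frame/P-frame compression would in practice be delegated to a standard video codec.

```python
import numpy as np

class TemporalKalmanDenoiser:
    """Per-pixel scalar Kalman filter over consecutive frames: a simple instance
    of 'Kalman filtering to improve image quality' as a first processing step."""
    def __init__(self, process_var=1e-3, measurement_var=1e-1):
        self.q = process_var          # how much the scene may change per frame
        self.r = measurement_var      # sensor noise of a single frame
        self.x = None                 # filtered image estimate
        self.p = None                 # per-pixel estimate variance

    def update(self, frame):
        z = frame.astype(np.float32) / 255.0
        if self.x is None:            # initialise with the first frame
            self.x, self.p = z, np.ones_like(z)
            return frame
        p_pred = self.p + self.q      # predict (near-static scene model)
        k = p_pred / (p_pred + self.r)             # Kalman gain
        self.x = self.x + k * (z - self.x)         # correct with the new frame
        self.p = (1.0 - k) * p_pred
        return np.clip(self.x * 255.0, 0.0, 255.0).astype(frame.dtype)
```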
Corresponding to the image information processing method implemented on the third electronic device side, an embodiment of the present invention also provides a third electronic device. As shown in Fig. 6, the device comprises:
a sending module 31, configured to send a positioning request to a first electronic device, the positioning request carrying a second image, or carrying the second image and the depth information of selected feature points in the second image;
a second receiving module 32, configured to receive the first spatial position information that the first electronic device determines according to the second image.
In addition, an embodiment of the present invention also provides an image information processing system comprising the above first electronic device, second electronic device and third electronic device, as shown in Fig. 7, in which:
the first electronic device 10 is located in the cloud and is configured to receive and save the first image uploaded by the second electronic device 20; build a three-dimensional coordinate map, perform feature point extraction on the first image based on the built three-dimensional coordinate map, analyze and save the feature point description information of the corresponding feature points, and determine and save the spatial position information of the extracted feature points in the three-dimensional coordinate map; and build a feature point dataset from the feature point description information analyzed and saved for a first quantity of first images and the determined and saved spatial position information of those feature points.
Preferably, the first electronic device 10 is also configured to receive, uploaded by the second electronic device 20 together with the first image, the depth information of selected feature points in the first image;
accordingly, after performing feature point extraction on the first image and obtaining the spatial position information of the feature points, it calculates and saves, from the depth information of the selected feature points and the spatial position information of the corresponding selected feature points, the spatial position information of the second electronic device 20 at the time it collected the first image.
Preferably, the first electronic device 10 is also configured to receive a positioning request from the third electronic device 30, the positioning request carrying a second image;
it performs feature point extraction on the second image, obtains the feature point description information of the corresponding feature points, and searches the feature point dataset using the obtained description information; when the corresponding first image can be matched by searching the feature point dataset, the saved spatial position information corresponding to that first image is sent to the third electronic device 30 as the first spatial position information of the third electronic device 30.
Preferably, the first electronic device 10 is also configured to, when the corresponding first image cannot be matched by searching the feature point dataset, obtain the matched feature points and the spatial position information of those matched feature points by searching the feature point dataset; select feature points among the matched feature points of the second image and perform depth analysis on them to obtain their depth information; and calculate the first spatial position information of the third electronic device 30 from the depth information of the selected feature points and the spatial position information of the selected feature points and send it to the third electronic device 30.
Preferably, the positioning request also carries the depth information of selected feature points in the second image, and the first electronic device 10 is also configured to, when the corresponding first image cannot be matched by searching the feature point dataset, obtain the matched feature points and the spatial position information of those matched feature points by searching the feature point dataset, and calculate the first spatial position information of the third electronic device 30 from the spatial position information of the feature points that match the selected feature points and the depth information of the selected feature points and send it to the third electronic device 30.
Preferably, the first electronic device 10 is also configured to receive and save second spatial position information uploaded by the second electronic device 20 together with the first image, the second spatial position information characterizing the spatial position of the second electronic device at the time it collected the first image;
accordingly, after obtaining the first spatial position information of the third electronic device 30, it performs weighting processing on the first spatial position information and the second spatial position information, and sends the third spatial position information obtained after the weighting processing to the third electronic device 30.
The second electronic device 20 is configured to collect a first image of the current scene and upload the first image to the first electronic device 10, the first image being an image on which the first electronic device 10 performs feature point extraction.
Preferably, the second electronic device 20 is also configured to upload, together with the first image, the depth information of selected feature points in the first image; or to upload, together with the first image, second spatial position information characterizing the spatial position of the second electronic device 20 at the time it collected the first image.
Preferably, the second electronic device 20 is also configured to apply a first processing to the collected first image after collecting the first image of the current scene, and then upload it to the first electronic device 10.
The third electronic device 30 is configured to send a positioning request to the first electronic device 10, the positioning request carrying a second image, or carrying the second image and the depth information of selected feature points in the second image, and to receive the spatial position information that the first electronic device 10 determines according to the second image.
For the internal structure and functions of the first electronic device 10, see Fig. 4 and its corresponding embodiment; for the internal structure and functions of the second electronic device 20, see Fig. 5 and its corresponding embodiment; and for the internal structure and functions of the third electronic device 30, see Fig. 6 and its corresponding embodiment.
It should be noted that in practical applications the second electronic device and the third electronic device may be the same device; for example, when a second electronic device entering the scene needs to be positioned, that second electronic device also takes on the functions of the third electronic device.
The embodiments of the present invention achieve spatial positioning of an electronic device based on the images it uploads, with relatively high positioning accuracy.
In the several embodiments provided by the present invention, it should be understood that the disclosed method, apparatus and electronic device may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the couplings, direct couplings or communication connections between the components shown or discussed may be through some interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical or of other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware controlled by program instructions. The aforementioned program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
Alternatively, when the above integrated unit of the embodiment of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, or the part of it that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily occur to a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (23)

1. An image information processing method, applied to a first electronic device, characterized in that the method comprises:
receiving and saving a first image uploaded by a second electronic device;
building a three-dimensional coordinate map, performing feature point extraction on the first image based on the built three-dimensional coordinate map, analyzing and saving the feature point description information of the corresponding feature points, and determining and saving the spatial position information of the extracted feature points in the three-dimensional coordinate map;
building a feature point dataset from the feature point description information analyzed and saved for a first quantity of first images and the determined and saved spatial position information of those feature points.
2. The image information processing method according to claim 1, characterized in that the method further comprises:
receiving, uploaded by the second electronic device together with the first image, the depth information of selected feature points in the first image;
accordingly, after performing feature point extraction on the first image and obtaining the spatial position information of the feature points, calculating, from the depth information of the selected feature points and the spatial position information of the corresponding selected feature points, the spatial position information of the second electronic device at the time it collected the first image, and saving it.
3. The image information processing method according to claim 2, characterized in that the method further comprises:
receiving a positioning request from a third electronic device, the positioning request carrying a second image;
performing feature point extraction on the second image, obtaining the feature point description information of the corresponding feature points, and searching the feature point dataset using the obtained description information; when the corresponding first image can be matched by searching the feature point dataset, sending the saved spatial position information corresponding to that first image to the third electronic device as the first spatial position information of the third electronic device.
4. The image information processing method according to claim 3, characterized in that the method further comprises:
when the corresponding first image cannot be matched by searching the feature point dataset, obtaining the matched feature points and the spatial position information of those matched feature points by searching the feature point dataset; selecting feature points among the matched feature points of the second image and performing depth analysis on them to obtain their depth information; and calculating the first spatial position information of the third electronic device from the depth information of the selected feature points and the spatial position information of the selected feature points, and sending it to the third electronic device.
5. The image information processing method according to claim 3, characterized in that the positioning request also carries the depth information of selected feature points in the second image, and the method further comprises:
when the corresponding first image cannot be matched by searching the feature point dataset, obtaining the matched feature points and the spatial position information of those matched feature points by searching the feature point dataset;
calculating the first spatial position information of the third electronic device from the spatial position information of the feature points that match the selected feature points and the depth information of the selected feature points, and sending it to the third electronic device.
6. The image information processing method according to claim 3, 4 or 5, characterized in that the method further comprises:
receiving and saving second spatial position information uploaded by the second electronic device together with the first image, the second spatial position information characterizing the spatial position of the second electronic device at the time it collected the first image;
accordingly, after obtaining the first spatial position information of the third electronic device, performing weighting processing on the first spatial position information and the second spatial position information, and sending the third spatial position information obtained after the weighting processing to the third electronic device.
7. The image information processing method according to any one of claims 1 to 6, characterized in that the feature point extraction uses one of the following approaches: extraction based on the Speeded Up Robust Features (SURF) algorithm, extraction based on the Scale-Invariant Feature Transform (SIFT) algorithm, or extraction based on the Harris algorithm.
8. An image information processing method, applied to a second electronic device, characterized in that the method comprises:
collecting a first image of the current scene and uploading the first image to a first electronic device, the first image being an image on which the first electronic device performs feature point extraction.
9. The image information processing method according to claim 8, characterized in that the method further comprises: uploading, together with the first image, the depth information of selected feature points in the first image; or
uploading, together with the first image, second spatial position information characterizing the spatial position of the second electronic device at the time it collected the first image.
10. The image information processing method according to claim 8 or 9, characterized in that the method further comprises: after collecting the first image of the current scene, applying a first processing to the collected first image and then uploading it to the first electronic device.
11. An image information processing method, applied to a third electronic device, characterized in that the method comprises:
sending a positioning request to a first electronic device, the positioning request carrying a second image, or carrying the second image and the depth information of selected feature points in the second image;
receiving the spatial position information that the first electronic device determines according to the second image.
12. A first electronic device, characterized by comprising:
a first receiving module, configured to receive a first image uploaded by a second electronic device;
a processing module, configured to build a three-dimensional coordinate map, perform feature point extraction on the first image based on the built three-dimensional coordinate map, analyze and save the feature point description information of the corresponding feature points, determine and save the spatial position information of the extracted feature points in the three-dimensional coordinate map, and build a feature point dataset from the feature point description information analyzed and saved for a first quantity of first images and the determined and saved spatial position information of those feature points;
a saving module, configured to save the first image and the feature point dataset.
13. The first electronic device according to claim 12, characterized in that the first receiving module is further configured to receive, uploaded by the second electronic device together with the first image, the depth information of selected feature points in the first image;
accordingly, the processing module is further configured to, after performing feature point extraction on the first image and obtaining the spatial position information of the feature points, calculate, from the depth information of the selected feature points and the spatial position information of the corresponding selected feature points, the spatial position information of the second electronic device at the time it collected the first image;
and the saving module is further configured to save the spatial position information of the second electronic device at the time it collected the first image.
14. The first electronic device according to claim 13, characterized in that the first receiving module is further configured to receive a positioning request from a third electronic device, the positioning request carrying a second image;
accordingly, the processing module is further configured to perform feature point extraction on the second image, obtain the feature point description information of the corresponding feature points, and search the feature point dataset using the obtained description information; when the corresponding first image can be matched by searching the feature point dataset, the saved spatial position information corresponding to that first image is sent to the third electronic device as the first spatial position information of the third electronic device.
15. The first electronic device according to claim 14, characterized in that the processing module is further configured to, when the corresponding first image cannot be matched by searching the feature point dataset, obtain the matched feature points and the spatial position information of those matched feature points by searching the feature point dataset; select feature points among the matched feature points of the second image and perform depth analysis on them to obtain their depth information; and calculate the first spatial position information of the third electronic device from the depth information of the selected feature points and the spatial position information of the selected feature points and send it to the third electronic device.
16. The first electronic device according to claim 14, characterized in that the positioning request also carries the depth information of selected feature points in the second image, and
the processing module is further configured to, when the corresponding first image cannot be matched by searching the feature point dataset, obtain the matched feature points and the spatial position information of those matched feature points by searching the feature point dataset, and calculate the first spatial position information of the third electronic device from the spatial position information of the feature points that match the selected feature points and the depth information of the selected feature points and send it to the third electronic device.
17. The first electronic device according to claim 14, 15 or 16, characterized in that the first receiving module is further configured to receive and save second spatial position information uploaded by the second electronic device together with the first image, the second spatial position information being spatial orientation information characterizing the second electronic device at the time it captured the first image;
Correspondingly, the processing module is further configured to, after obtaining the first spatial position information of the third electronic device, perform weighted processing on the first spatial position information and the second spatial position information, and send the third spatial position information obtained by the weighted processing to the third electronic device.
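For illustration only (not part of the claims): claim 17 does not fix the weighting scheme, so the sketch below assumes a simple per-coordinate convex combination of the image-based estimate (first spatial position information) and the device-reported estimate (second spatial position information); the 0.7 weight and the function name are illustrative assumptions.

```python
def fuse_positions(first_pos, second_pos, w_first=0.7):
    """Weighted combination of two position estimates, per coordinate."""
    return tuple(w_first * a + (1.0 - w_first) * b
                 for a, b in zip(first_pos, second_pos))

# Example: image-based estimate vs. sensor-based estimate.
print(fuse_positions((1.0, 2.0, 0.0), (1.2, 1.8, 0.1)))  # approx. (1.06, 1.94, 0.03)
```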
18. The first electronic device according to any one of claims 12 to 17, characterized in that the feature point extraction is performed in one of the following ways: feature point extraction based on the Speeded-Up Robust Features (SURF) algorithm, feature point extraction based on the Scale-Invariant Feature Transform (SIFT) algorithm, or feature point extraction based on the Harris corner detection algorithm.
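For illustration only (not part of the claims): a minimal sketch of how the SIFT and Harris variants of claim 18 might be invoked with OpenCV (cv2.SIFT_create requires OpenCV 4.4 or newer; SURF typically needs the non-free contrib build and is therefore omitted). The image path is a placeholder.

```python
import cv2

img = cv2.imread("first_image.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
assert img is not None, "image not found"

# SIFT: keypoints plus 128-dimensional descriptors for the feature point dataset.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Harris: a corner-response map; strong responses mark candidate feature points.
response = cv2.cornerHarris(img.astype("float32"), blockSize=2, ksize=3, k=0.04)
```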
19. A second electronic device, characterized in that it comprises:
an acquisition module, configured to capture a first image of a current scene;
an uploading module, configured to upload the first image to a first electronic device, the first image being an image on which the first electronic device performs feature point extraction.
20. The second electronic device according to claim 19, characterized in that the uploading module is further configured to upload depth information of selected feature points in the first image together with the first image; or,
to upload second spatial position information together with the first image, the second spatial position information characterizing the spatial orientation of the second electronic device at the time it captured the first image.
21. The second electronic device according to claim 19 or 20, characterized in that it further comprises: an image processing module, configured to perform first processing on the captured first image after the acquisition module captures the first image of the current scene;
Correspondingly, the uploading module is further configured to upload the first image obtained after the first processing to the first electronic device.
22. A third electronic device, characterized in that it comprises:
a sending module, configured to send a positioning request to a first electronic device, the positioning request carrying a second image, or carrying a second image and depth information of selected feature points in the second image;
a second receiving module, configured to receive the spatial position information determined by the first electronic device according to the second image.
23. An information processing system, characterized in that the system comprises the first electronic device according to any one of claims 12 to 18, the second electronic device according to any one of claims 19 to 21, and the third electronic device according to claim 22.
CN201310389812.5A 2013-08-30 2013-08-30 Information processing method, system and equipment Pending CN104424635A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310389812.5A CN104424635A (en) 2013-08-30 2013-08-30 Information processing method, system and equipment

Publications (1)

Publication Number Publication Date
CN104424635A (en) 2015-03-18

Family

ID=52973522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310389812.5A Pending CN104424635A (en) 2013-08-30 2013-08-30 Information processing method, system and equipment

Country Status (1)

Country Link
CN (1) CN104424635A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101131323A (en) * 2006-08-25 2008-02-27 高德软件有限公司 Collecting device for road landscape information and locating information
CN101109643A (en) * 2007-08-22 2008-01-23 广东瑞图万方科技有限公司 Navigation apparatus
US20090237510A1 (en) * 2008-03-19 2009-09-24 Microsoft Corporation Visualizing camera feeds on a map
CN102142081A (en) * 2010-02-02 2011-08-03 索尼公司 Image processing device, image processing method, and program
CN101945327A (en) * 2010-09-02 2011-01-12 郑茂 Wireless positioning method and system based on digital image identification and retrieve
CN103067856A (en) * 2011-10-24 2013-04-24 康佳集团股份有限公司 Geographic position locating method and system based on image recognition
CN103177475A (en) * 2013-03-04 2013-06-26 腾讯科技(深圳)有限公司 Method and system for showing streetscape maps
CN103249142A (en) * 2013-04-26 2013-08-14 东莞宇龙通信科技有限公司 Locating method, locating system and mobile terminal

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610177A (en) * 2017-09-29 2018-01-19 联想(北京)有限公司 Method and apparatus for determining feature points in simultaneous localization and mapping (SLAM)
CN110268225A (en) * 2019-05-09 2019-09-20 珊口(深圳)智能科技有限公司 Method for positioning a device on a map, server side, and mobile robot
WO2020223975A1 (en) * 2019-05-09 2020-11-12 珊口(深圳)智能科技有限公司 Method of locating device on map, server, and mobile robot
CN110268225B (en) * 2019-05-09 2022-05-10 深圳阿科伯特机器人有限公司 Method for cooperative operation among multiple devices, server and electronic device
CN112735342A (en) * 2020-12-30 2021-04-30 深圳Tcl新技术有限公司 Backlight light type processing method and device and computer readable storage medium
CN112735342B (en) * 2020-12-30 2022-05-03 深圳Tcl新技术有限公司 Backlight light type processing method and device and computer readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150318