CN109559347A - Object identifying method, device, system and storage medium - Google Patents
- Publication number: CN109559347A
- Application number: CN201811431164.4A
- Authority: CN (China)
- Prior art keywords: identified, information, image information, image, acquisition device
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Abstract
Embodiments of the present invention disclose an object identification method, apparatus, system and storage medium. The method comprises: obtaining first image information collected by an image acquisition device; capturing second image information corresponding to an object to be identified based on the first image information; obtaining position information of the object to be identified based on the second image information and real-time positioning information of the image acquisition device; and performing position matching based on the position information of the object to be identified and a preset object database to determine a mapping relationship between the object to be identified and an object in the object database. Identification and management of the object to be identified are thus achieved, and since no peripheral hardware needs to be attached to the object, maintenance is simple and cost is saved.
Description
Technical field
The present invention relates to the field of object recognition, and in particular to an object identification method, apparatus, system and storage medium.
Background art
In real-world production and living scenarios there are often large numbers of objects that are similar in appearance but distributed at different spatial positions. These objects show no significant differences in shape, yet each belongs to a different individual with its own attribute characteristics, for example the blastholes of an open-pit mine, ancient trees under management, or dock containers.
In traditional management, wooden, iron or paper sign boards are usually placed above or beside these objects as identification. The main drawbacks of this approach are: each object needs its own sign board, which is costly to make and troublesome to maintain; the sign board easily drifts away from the object it identifies; and, more importantly, such marking is not amenable to digitized, information-based maintenance and querying of the objects. In addition, only limited information can be recorded on a sign board, making it difficult to meet the demands of attribute labelling.
With the development of information technology, objects can be identified by attaching tag cards based on Bluetooth or RFID technology, but this approach is expensive and some environments make tag cards impractical. For example, in the application field of open-pit blastholes, old blastholes are constantly blasted away while new ones are created, and a single blast pile usually contains thousands of blastholes or more; whether sign boards or tag cards are used, the cost is high and maintenance is difficult.
Summary of the invention
In view of this, embodiments of the present invention provide an object identification method, apparatus, system and storage medium, aiming to reduce the cost of identifying objects.
The technical solutions of the embodiments of the present invention are achieved as follows:
In a first aspect, an embodiment of the present invention provides an object identification method, the method comprising:
Obtaining first image information collected by an image acquisition device;
Capturing second image information corresponding to an object to be identified based on the first image information;
Obtaining position information of the object to be identified based on the second image information and real-time positioning information of the image acquisition device;
Performing position matching based on the position information of the object to be identified and a preset object database, and determining a mapping relationship between the object to be identified and an object in the object database.
Further, before capturing the second image information corresponding to the object to be identified based on the first image information, the method comprises:
Generating, by machine learning training, a comparison model for identifying the object;
and capturing the second image information corresponding to the object to be identified based on the first image information comprises:
Capturing, based on the comparison model, the second image information corresponding to the object to be identified in the first image information.
Further, obtaining the position information of the object to be identified based on the second image information and the positioning information of the image acquisition device comprises:
Obtaining the real-time positioning information of the image acquisition device;
Determining the center point position of the object to be identified based on at least two frames of the second image information and the real-time positioning information corresponding to each frame.
Further, determining the center point position of the object to be identified based on at least two frames of the second image information and the real-time positioning information corresponding to each frame comprises:
Extracting feature points from each frame of the second image information;
Saving the feature points of each frame of the second image information together with the real-time positioning information corresponding to that frame, and determining, by inter-frame matching, the feature points that meet a set condition to form matched point pairs;
Calculating an essential matrix of the image acquisition device from the matched point pairs, and performing eigenvalue decomposition on the essential matrix to obtain a relative motion matrix of the image acquisition device between the two matched frames;
Obtaining the space coordinates of each feature point based on the relative motion matrix and the pixel coordinates of the matched point pairs;
Determining the center point position of the object to be identified based on the space coordinates of the feature points.
Further, before performing position matching based on the position information of the object to be identified and the preset object database, the method comprises:
Establishing the object database, the object database storing the position information of each object and secondary parameter information for characterizing the attribute characteristics of each object.
Further, performing position matching based on the position information of the object to be identified and the preset object database, and determining the mapping relationship between the object to be identified and an object in the object database, comprises:
Determining, based on the center point position of the object to be identified, the objects in the object database that meet a matching condition;
If one object in the object database is matched, establishing a mapping relationship between the matched object and the object to be identified;
If multiple objects in the object database are matched, determining the object closest to the current viewing angle and establishing a mapping relationship with the object to be identified, or determining an object according to input information and establishing a mapping relationship with the object to be identified.
Further, the method of the embodiment of the present invention further comprises:
Outputting, according to the mapping relationship between the object to be identified and the object in the object database, the secondary parameter information characterizing the attribute characteristics of the object to be identified, and/or maintaining the object database.
In a second aspect, an embodiment of the present invention further provides an object identification apparatus, the apparatus comprising:
An obtaining module, configured to obtain first image information collected by an image acquisition device;
A capture module, configured to capture second image information corresponding to an object to be identified based on the first image information;
A position information obtaining module, configured to obtain the position information of the object to be identified based on the second image information and real-time positioning information of the image acquisition device;
A matching determination module, configured to perform position matching based on the position information of the object to be identified and a preset object database, and determine a mapping relationship between the object to be identified and an object in the object database.
In a third aspect, an embodiment of the present invention further provides an object identification system, including an image acquisition device provided with a positioning module for obtaining real-time positioning information of the image acquisition device, the image acquisition device being communicatively connected to a processing device, the processing device comprising:
A memory for storing an executable program;
A processor which, when executing the executable program stored in the memory, implements the object identification method of any of the foregoing embodiments.
In a fourth aspect, an embodiment of the present invention further provides a computer storage medium storing an executable program which, when executed by a processor, implements the object identification method of any of the foregoing embodiments.
In the technical solutions provided by the embodiments of the present invention, the position information of the object to be identified is obtained based on the second image information and the real-time positioning information of the image acquisition device; position matching is performed based on the position information of the object to be identified and the preset object database, and the mapping relationship between the object to be identified and the object in the object database is determined. Identification and management of the object to be identified are thus achieved, and since no peripheral hardware needs to be attached to the object to be identified, maintenance is simple and cost is saved.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the object identification method in an embodiment of the present invention;
Fig. 2 is a schematic diagram of the principle of determining the object most consistent with the current viewing angle in an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of the object identification apparatus in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of the object identification system in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the processing device in Fig. 4.
Specific embodiment
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings and specific embodiments of the specification. It should be understood that the embodiments mentioned herein are only used to explain the present invention and are not intended to limit it. In addition, the embodiments provided below are some of the embodiments for carrying out the present invention, not all of them; where no conflict arises, the technical solutions recorded in the embodiments of the present invention may be combined in any manner.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field to which the present invention belongs. The terms used in the specification are intended merely to describe specific embodiments and are not intended to limit the application. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
The object identification method of the embodiment of the present invention combines two technologies, object capture (target detection) and visual positioning, to perform object identification, and can manage all objects to be identified in a database, where each object has a determined geographical position and the positions are separated by no less than the resolution of the positioning technology used. The object to be identified in the embodiment of the present invention may be any object with a position attribute, such as a blasthole of an open-pit mine, an ancient tree, or a dock container.
Fig. 1 is a schematic flowchart of the object identification method in an embodiment of the present invention. Referring to Fig. 1, the object identification method of the embodiment of the present invention includes:
Step 101: obtain first image information collected by an image acquisition device.
In this embodiment, the image acquisition device may include an image sensor based on a CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor), and the first image information containing the object to be identified is obtained by the image acquisition device collecting images.
Step 102: capture second image information corresponding to the object to be identified based on the first image information.
In this embodiment, object capture is performed on the acquired first image information to obtain the image data of the partial region corresponding to the object to be identified, thereby generating the second image information; for example, the second image information is the image information corresponding to a rectangular box containing the object to be identified. Of course, those skilled in the art will appreciate that the second image information may also be the image information corresponding to a circular box, sector box or other cropping box containing the object to be identified.
In some embodiments, before capturing the second image information corresponding to the object to be identified based on the first image information, the method comprises: generating, by machine learning training, a comparison model for identifying the object.
The processing device obtains image information of standard samples and uses it to train a model: the model extracts image features with convolutional layers and performs classification and position regression with fully connected layers, and training produces a model meeting the detection accuracy requirement, which serves as the comparison model.
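The structure described above — convolutional feature extraction followed by fully connected classification and box regression — can be sketched in miniature. This is an illustrative numpy toy, not the patent's actual model: the layer sizes, random weights and single-kernel convolution are assumptions made purely to show the data flow.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: one convolutional feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def detection_head(feature_map, w_cls, w_box):
    """Fully connected head: class score and box regression from flattened features."""
    f = feature_map.ravel()
    cls_score = 1.0 / (1.0 + np.exp(-(f @ w_cls)))  # object vs. background
    box = f @ w_box                                  # (cx, cy, w, h) regression
    return cls_score, box

rng = np.random.default_rng(0)
image = rng.random((16, 16))
kernel = rng.random((3, 3))
fmap = conv2d(image, kernel)             # 14 x 14 feature map
w_cls = rng.random(fmap.size) - 0.5      # illustrative untrained weights
w_box = rng.random((fmap.size, 4))
score, box = detection_head(fmap, w_cls, w_box)
```

In a real system the weights would of course come from training on the standard-sample images rather than a random generator.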
Capturing the second image information corresponding to the object to be identified based on the first image information comprises: capturing, based on the comparison model, the second image information corresponding to the object to be identified in the first image information.
The processing device obtains the first image information collected by the image acquisition device, performs object capture on the first image information according to the comparison model, and crops out the second image information corresponding to the object to be identified.
Step 103: obtain the position information of the object to be identified based on the second image information and the real-time positioning information of the image acquisition device.
In this embodiment, the position information of the object to be identified refers to the center point position of the object to be identified. The real-time positioning information of the image acquisition device can be acquired using a positioning technology based on GPS (Global Positioning System) or RTK (Real-Time Kinematic), and the center point position of the object to be identified is calculated from the second image information and the real-time positioning information of the image acquisition device by SfM (Structure from Motion). Specifically, the first image information may be obtained with a monocular camera, and the position information of the object to be identified determined based on at least two frames of image information. In other embodiments, the first image information may also be obtained with, for example, a binocular camera or an RGB-D camera, and the position information of the object to be identified determined with a corresponding algorithm.
In an optional embodiment, obtaining the position information of the object to be identified based on the second image information and the positioning information of the image acquisition device comprises: obtaining the real-time positioning information of the image acquisition device; and determining the center point position of the object to be identified based on at least two frames of the second image information and the real-time positioning information corresponding to each frame.
In some optional embodiments, determining the center point position of the object to be identified based on at least two frames of the second image information and the real-time positioning information corresponding to each frame comprises:
Extracting feature points from each frame of the second image information.
Here, feature points may be extracted with algorithms such as SIFT (Scale-Invariant Feature Transform), ORB, FAST (Features from Accelerated Segment Test) or SURF (Speeded-Up Robust Features); those skilled in the art can select the best-performing method for the case at hand.
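To make the feature-extraction step concrete, here is a minimal Harris-style corner detector in numpy. It is a stand-in for the SIFT/ORB/FAST/SURF algorithms named above, not an implementation of any of them; the window size, `k` and threshold values are illustrative assumptions.

```python
import numpy as np

def harris_corners(img, k=0.05, threshold=0.01):
    """Return (row, col) positions with a strong Harris corner response."""
    # Image gradients via central differences (rows = y, cols = x)
    iy, ix = np.gradient(img.astype(float))
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box3(a):
        # 3x3 box sum of each structure-tensor entry
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    sxx, syy, sxy = box3(ixx), box3(iyy), box3(ixy)
    # Corner response R = det(M) - k * trace(M)^2
    r = (sxx * syy - sxy ** 2) - k * (sxx + syy) ** 2
    return np.argwhere(r > threshold * r.max())

# A synthetic image with one bright square: responses cluster at its corners
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
pts = harris_corners(img)
```

On real camera frames one would use a library detector; the point here is only that each frame yields a set of repeatable keypoints to be matched between frames.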
Saving the feature points of each frame of the second image information together with the real-time positioning information corresponding to that frame, and determining, by inter-frame matching, the feature points that meet a set condition to form matched point pairs.
Here, different set conditions are determined according to the type of feature point: for floating-point feature descriptors, inter-frame matching is performed on the nearest-Euclidean-distance principle; for binary feature descriptors, inter-frame matching is performed on the nearest-Hamming-distance principle. The matched point pairs meeting the set condition are found between the two frames.
Calculating the essential matrix of the image acquisition device from the matched point pairs, and performing eigenvalue decomposition on the essential matrix to obtain the relative motion matrix of the image acquisition device between the two matched frames.
Obtaining the space coordinates of each feature point based on the relative motion matrix and the pixel coordinates of the matched point pairs.
Determining the center point position of the object to be identified based on the space coordinates of the feature points.
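The step of recovering a feature point's space coordinates from two camera poses and its pixel observations can be sketched with standard linear (DLT) triangulation. The identity intrinsics and camera placement below are synthetic assumptions used only to exercise the math; the patent's system would use the real camera matrix and the poses from the positioning module.

```python
import numpy as np

def triangulate(p1, p2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two projection
    matrices P = K[R|t] and its observed image coordinates x1, x2."""
    a = np.array([
        x1[0] * p1[2] - p1[0],
        x1[1] * p1[2] - p1[1],
        x2[0] * p2[2] - p2[0],
        x2[1] * p2[2] - p2[1],
    ])
    _, _, vt = np.linalg.svd(a)
    xh = vt[-1]                    # homogeneous solution = null vector of A
    return xh[:3] / xh[3]

def project(p, x):
    xh = p @ np.append(x, 1.0)
    return xh[:2] / xh[2]

# Synthetic setup: identity intrinsics, second camera shifted along x
k = np.eye(3)
p1 = k @ np.hstack([np.eye(3), np.zeros((3, 1))])
p2 = k @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
x_true = np.array([0.5, 0.2, 4.0])
x1, x2 = project(p1, x_true), project(p2, x_true)
x_rec = triangulate(p1, p2, x1, x2)
```

With noisy real observations the SVD gives a least-squares solution, which is why the later bundle-adjustment refinement step exists.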
Step 104: perform position matching based on the position information of the object to be identified and the preset object database, and determine the mapping relationship between the object to be identified and an object in the object database.
In this embodiment, before position matching is performed based on the position information of the object to be identified and the preset object database, the method comprises: establishing the object database, the object database storing the position information of each object and secondary parameter information for characterizing the attribute characteristics of each object. In this embodiment the secondary parameter information includes, but is not limited to, relevant information such as the object type and the object's operating instructions. Taking a blasthole as an example, the secondary parameter information may include the blasthole category, explosive charge data, and so on.
In the object identification method of this embodiment, the position information of the object to be identified is obtained based on the second image information and the real-time positioning information of the image acquisition device; position matching is performed based on the position information of the object to be identified and the preset object database, and the mapping relationship between the object to be identified and the object in the object database is determined. Identification and management of the object to be identified are thus achieved, and since no peripheral hardware needs to be attached to the object to be identified, maintenance is simple and cost is saved.
In some embodiments, optionally, performing position matching based on the position information of the object to be identified and the preset object database, and determining the mapping relationship between the object to be identified and an object in the object database, comprises:
Determining, based on the center point position of the object to be identified, the objects in the object database that meet a matching condition;
If one object in the object database is matched, establishing a mapping relationship between the matched object and the object to be identified;
If multiple objects in the object database are matched, determining the object closest to the current viewing angle and establishing a mapping relationship with the object to be identified, or determining an object according to input information and establishing a mapping relationship with the object to be identified.
In this embodiment, there may be multiple objects correctly matched in the current field of view, and this embodiment offers two object confirmation schemes:
(1) Automatically find the object most consistent with the current viewing angle as the final identification target. The principle is as follows. Step 1): the image acquisition device is set on head-mounted equipment; when worn, the camera must face the same direction as the user's eyes and lie in the same vertical plane, and the user must look directly at the object to be identified during use, as shown in Fig. 2. Step 2): in the error-free case, when the user looks directly at object B, the projection of B on the imaging plane is the point C on the perpendicular bisector, and triangle CDO is similar to triangle ABO. In the figure, d and f (f = DO) are known reference data, where d is the distance from the camera optical center to the eyes, and the coordinate of camera optical center O can be obtained by conversion from the real-time positioning module. If the space coordinate of point B is known, the theoretical pixel coordinate of B, i.e. the position of point C, can be calculated from the above constraints, while the other objects under the current viewing angle do not satisfy this constraint relationship. Step 3): during object capture, the center pixel coordinate value of each captured object can be obtained, hereinafter referred to as the pixel coordinate observation of the object. Find all pixel coordinate observations among the objects near the centerline of the imaging plane in the current camera frame, convert the space coordinates of these objects' centers (position matching has been completed earlier, so these coordinate values are the actual space coordinates in the database) according to the constraint relationship in step 2) to obtain the theoretical pixel coordinates of these objects, and then calculate the error between each object's theoretical pixel coordinate and its observation; the object with the smallest error is the final identification target. Optionally, the secondary parameter information of the object is further displayed by a display device. In some embodiments, the image acquisition device is provided with a laser head which points at the object, guiding staff to the corresponding target object in reality and facilitating the relevant operations on it.
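The selection rule of scheme (1) — project each matched object's database coordinate into the image and pick the one whose theoretical pixel coordinate is closest to its observation — can be sketched with a simple pinhole model. The focal length, camera placement and the assumption that the camera looks along +z are illustrative; the patent's geometry additionally involves the eye-to-optical-center offset d, which is omitted here.

```python
import numpy as np

def closest_to_view(candidates_xyz, observed_px, f, optical_center):
    """Pick the matched object whose theoretical pixel coordinate (pinhole
    projection from the camera optical center, viewing along +z with focal
    length f) is closest to its observed detection-box center."""
    errors = []
    for xyz, obs in zip(candidates_xyz, observed_px):
        rel = np.asarray(xyz, float) - np.asarray(optical_center, float)
        theo = f * rel[:2] / rel[2]           # theoretical pixel coordinate
        errors.append(float(np.linalg.norm(theo - np.asarray(obs, float))))
    return int(np.argmin(errors)), errors

# Two candidate objects; the observations favour object 0 (on the view axis)
cands = [(0.0, 0.0, 10.0), (2.0, 0.0, 10.0)]
obs = [(0.0, 0.0), (150.0, 0.0)]
idx, errs = closest_to_view(cands, obs, f=500.0,
                            optical_center=(0.0, 0.0, 0.0))
```

Object 0 projects exactly onto its observation while object 1 shows a large reprojection error, so object 0 would be confirmed as the identification target.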
(2) Display the data of all matched objects via the display device; the user selects the object to be worked on according to the work requirements, and the laser head automatically points at that object, completing identification and positioning. Specifically, the display device first outputs all the matched object information in the database to the display interface; the user selects the target object according to this information to establish the matching relationship, and the laser head automatically points at the object according to the matching trigger signal.
In some embodiments, optionally, the object identification method further comprises:
Outputting, according to the mapping relationship between the object to be identified and the object in the object database, the secondary parameter information characterizing the attribute characteristics of the object to be identified, and/or maintaining the object database.
Specifically, the object identification method can obtain the secondary parameter information of the object to be identified stored in the object database, and outputting this information via the display device can guide the user in performing the relevant operations, for example charging a blasthole with explosive. In addition, this embodiment can also maintain the relevant information in the object database, so as to realize intelligent, dynamic management and information updating of objects.
The application of the method of the embodiment of the present invention is illustrated below, taking the blastholes of an open-pit mine as an example. It should be noted that the object identification method of this embodiment is applicable to other similar objects, i.e. to application scenarios in which there are large numbers of similar objects with no significant difference in shape distributed at different spatial positions, each object belonging to a different individual with its own attribute characteristics; the basic implementation flow is the same, for example ancient tree management or dock container management.
The method of identifying the blastholes of an open-pit mine in the embodiment of the present invention includes the following steps:
1. Model learning: photos of open-pit blastholes are collected before application, and a detection model is trained on a high-performance computer with a deep learning framework, producing a model that can accurately identify and locate blastholes (locating refers to generating a target detection box).
2. Object capture: the camera is fixed on the user (for example on a safety helmet), and the trained detection model is ported to the tablet computer in the device; the images obtained by the camera are detected in real time, generating real-time images carrying blasthole detection boxes which are shown on the tablet computer, completing detection and capture of the blastholes.
3. Real-time camera positioning: the real-time space coordinates of the camera center are obtained by the real-time positioning module on the equipment, and the coordinates are stored on the tablet computer.
4. Calculation of the center point position of the object to be identified based on SfM: the finished algorithm is ported to the tablet computer; the SfM method computes the space coordinates of the feature points in the image, and the mean of the space coordinates of all feature points in each detection box is taken as the corresponding blasthole center point coordinate. This calculation requires the real-time camera coordinate data obtained from the real-time positioning module.
Here, obtaining the blasthole center point coordinate mainly includes the following steps. ① Feature extraction is performed on the real-time images (meaning the images carrying target detection boxes after processing by the detection model); feature points can be extracted with methods such as SIFT, ORB, FAST or SURF, selecting the best-performing method for the case at hand. ② The feature points of each frame and the camera's real-time coordinate information for the current frame, obtained by a positioning technology such as GPS, are saved, and the feature points are matched between frames: floating-point descriptors are matched on the nearest-Euclidean-distance principle and binary descriptors on the nearest-Hamming-distance principle, and all compliant matched point pairs are found between the two frames. ③ After the matched point pairs are obtained by inter-frame matching, the essential matrix of the camera is computed with the RANSAC (Random Sample Consensus) 8-point method (if the targets to be detected lie on the same plane, a homography matrix can instead be computed from four point pairs). The essential matrix is then decomposed by eigenvalue decomposition to obtain the relative motion matrix T of the camera between the matched frames (composed of the rotation matrix R and the displacement t). A monocular camera has no spatial scale; the relative displacement t is computed from the actual camera space coordinates obtained by the chosen real-time positioning device, thereby obtaining the true scale. ④ Having obtained the camera motion matrix T between the two frames and the pixel coordinates of all matched point pairs, the actual space coordinates of the feature points are found by triangulation. In a practical application scenario the whole process runs in real time and noise produces errors, so an error equation is constructed over several consecutive frames with the Bundle Adjustment method and the optimal solution is found with the Levenberg-Marquardt method, optimizing the feature point space coordinates. ⑤ The method used to calculate the center point coordinate of the object to be identified is: take all feature points falling inside the object's target detection box (bounding box), and take the mean of the space coordinates of these feature points as the center point coordinate of the object to be identified.
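Step ⑤ above reduces to a few lines: filter the triangulated feature points by whether their pixel coordinates fall inside the detection box, then average their space coordinates. The box format and sample values below are illustrative assumptions.

```python
import numpy as np

def object_center(points_xyz, points_px, box):
    """Mean of the space coordinates of all feature points whose pixel
    coordinates fall inside the detection box (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    inside = [(x_min <= u <= x_max) and (y_min <= v <= y_max)
              for u, v in points_px]
    return np.asarray(points_xyz)[inside].mean(axis=0)

# Three triangulated feature points; the third lies outside the box
pts_xyz = [(1.0, 1.0, 5.0), (3.0, 1.0, 5.0), (50.0, 50.0, 9.0)]
pts_px = [(10, 10), (20, 10), (300, 300)]
center = object_center(pts_xyz, pts_px, box=(0, 0, 100, 100))
```

Only the two in-box points contribute, so the center is their coordinate mean.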
5. Position matching and object confirmation.
In this embodiment, position matching is performed based on the center point coordinate of the object to be identified and the preset object database, and the mapping relationship between the object to be identified and an object in the object database is determined.
Here, before position matching, the object database is established, storing the position information of the objects to be identified and the other information that needs to be managed; its structure is shown in Tables 1 and 2:
Table 1
Number | Coordinate |
0001 | (X1, Y1, Z1) |
0002 | (X2, Y2, Z2) |
0003 | (X3, Y3, Z3) |
0004 | …… |
Table 2
Number | Other information |
0001 | (such as blasthole classification, data of explosive filled) |
0002 | …… |
0003 | …… |
…… | …… |
Here, what is matched is the coordinate of the object to be identified; once a coordinate is correctly matched, the relevant information of the object is looked up by its number, and this process can be regarded as location-based object recognition. The image obtained in real time may contain one or more detected objects to be identified in the current field of view; the center point coordinates of all detected objects are matched against the location information of Table 1 in the object database, and a database object whose coordinate is the same as, or close to, the center point of an object to be identified is considered a correct match.
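The coordinate-matching and number-lookup scheme just described can be sketched as follows — a minimal Python illustration in which the table contents, field names, and distance threshold are hypothetical placeholders, not values from the patent:

```python
import math

# Illustrative Table 1: blasthole number -> stored spatial coordinate.
TABLE_1 = {
    "0001": (1.0, 2.0, 0.5),
    "0002": (4.0, 2.5, 0.6),
    "0003": (7.5, 3.0, 0.4),
}

# Illustrative Table 2: blasthole number -> other managed information.
TABLE_2 = {
    "0001": {"category": "cut hole", "charge_kg": 1.2},
    "0002": {"category": "relief hole", "charge_kg": 0.9},
    "0003": {"category": "perimeter hole", "charge_kg": 0.6},
}

def match_object(center, table, threshold=0.5):
    """Return the number of the database object whose stored coordinate
    is closest to the detected center point, or None when nothing lies
    within the threshold (i.e. no correct match)."""
    best_id, best_d = None, threshold
    for obj_id, coord in table.items():
        d = math.dist(center, coord)
        if d < best_d:
            best_id, best_d = obj_id, d
    return best_id

def lookup(center):
    """Location-based object recognition: match the coordinate first,
    then fetch the managed information by number."""
    obj_id = match_object(center, TABLE_1)
    return (obj_id, TABLE_2.get(obj_id)) if obj_id else (None, None)
```

A detected center point close to a Table 1 coordinate resolves to that number, after which all Table 2 information for the blasthole becomes available for display or editing.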
After a correct match there are two object confirmation modes: ① The blasthole most consistent with the current viewing angle is found automatically, all information of that blasthole is displayed on the tablet computer, and the laser head on the device automatically points to it, completing the identification and positioning of the blasthole. ② The data of all matched blastholes is displayed on the tablet interface; the user selects a blasthole on the tablet according to the work requirements, the laser head automatically points to it, and the staff is guided to find the blasthole, completing its identification and positioning.
6. Business processing (query, editing, etc.): once the identification and positioning of a blasthole is achieved, the corresponding business processing can be carried out, such as charging the blasthole. Operating directly on the tablet computer also fully digitizes the query, acquisition, and editing of blasthole information, reducing cost and improving work efficiency.
An embodiment of the present invention also provides an object recognition device. Referring to Fig. 3, the device includes:
an acquisition module 301 for obtaining first image information collected by an image acquisition device;
a capture module 302 for capturing second image information corresponding to an object to be identified based on the first image information;
a location information acquisition module 303 for obtaining location information of the object to be identified based on the second image information and real-time positioning information of the image acquisition device;
a match determining module 304 for performing location matching based on the location information of the object to be identified and a preset object database, and determining mapping relations between the object to be identified and objects in the object database.
In some embodiments, before capturing the second image information corresponding to the object to be identified based on the first image information, the capture module 302 is further configured to generate, through machine learning training, a comparison model for identifying objects. The capture module 302 is configured to capture, based on the comparison model, the second image information corresponding to the object to be identified in the first image information.
In some embodiments, the location information acquisition module 303 is specifically configured to: obtain the real-time positioning information of the image acquisition device; and determine the center point position of the object to be identified based on at least two frames of the second image information and the real-time positioning information corresponding to each frame of the second image information.
In some embodiments, the location information acquisition module 303 is specifically configured to: extract feature points of each frame of the second image information; save the feature points of each frame of the second image information and the real-time positioning information corresponding to each frame, and determine, by inter-frame matching, feature points that meet a set condition to form matched point pairs; calculate an essential matrix of the image acquisition device according to the matched point pairs, and perform eigenvalue decomposition on the essential matrix to obtain the relative motion matrix of the image acquisition device between two matched frames; obtain the spatial coordinates of each feature point based on the relative motion matrix and the pixel coordinates of the matched point pairs; and determine the center point position of the object to be identified based on the spatial coordinates of the feature points.
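The pose-recovery step in this pipeline (essential matrix → relative motion) can be sketched as follows. In standard practice the decomposition is carried out via singular value decomposition and yields four candidate (R, t) hypotheses, with the correct one chosen by the cheirality check; the rotation and translation values below are synthetic illustrations, not the patent's implementation:

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Decompose an essential matrix into the four candidate (R, t)
    relative-motion hypotheses; t is recovered only up to scale."""
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations (det = +1); E is defined up to sign.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# Synthetic relative motion: small rotation about z, translation along x.
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 0.0, 0.0])
E = skew(t_true) @ R_true
candidates = decompose_essential(E)

# One candidate reproduces (R_true, +/-t_true); in practice the correct
# hypothesis is selected by requiring positive triangulated depths.
recovered = any(np.allclose(R, R_true, atol=1e-6) and
                np.allclose(abs(t @ t_true), 1.0, atol=1e-6)
                for R, t in candidates)
```

The scale ambiguity of t is exactly why, as described earlier, the monocular result must be calibrated against the real-time positioning device to obtain true metric coordinates.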
In some embodiments, before performing location matching based on the location information of the object to be identified and the preset object database, the match determining module 304 is further configured to establish the object database, the object database storing the location information of each object and secondary parameter information for characterizing the attribute features of each object.
In some embodiments, the match determining module 304 is specifically configured to: determine, based on the center point position of the object to be identified, the objects in the object database that meet a matching condition; when one object in the object database is matched, establish the mapping relation between the matched object and the object to be identified; and when multiple objects in the object database are matched, determine the object closest to the current viewing angle and establish its mapping relation with the object to be identified, or determine an object according to input information and establish its mapping relation with the object to be identified.
In some embodiments, the object recognition device further includes an input/output module (not shown in the figure) for outputting, according to the mapping relations between the object to be identified and objects in the object database, the secondary parameter information characterizing the attribute features of the object to be identified, and/or for maintaining the object database.
It should be noted that when the object recognition device provided in the above embodiment performs object recognition, the division into the above program modules is only an example; in practical applications, the above processing may be distributed to different program modules as needed, that is, the internal structure of the object recognition device may be divided into different program modules to complete all or part of the processing described above. In addition, the object recognition device provided in the above embodiment belongs to the same concept as the object identifying method embodiment; its specific implementation process is detailed in the method embodiment and is not repeated here.
In practical applications, each of the above program modules may be implemented by a central processing unit (CPU, Central Processing Unit), a microprocessor (MPU, Micro Processor Unit), a digital signal processor (DSP, Digital Signal Processor), a field programmable gate array (FPGA, Field Programmable Gate Array), or the like on a server.
An embodiment of the present invention also provides an object recognition system. Referring to Fig. 4, the object recognition system includes an image acquisition device 401 equipped with a positioning module 402 for obtaining the real-time positioning information of the image acquisition device 401; the image acquisition device 401 is communicatively connected to a processing equipment 500, and the processing equipment 500 includes:
a memory for storing an executable program;
a processor that, when executing the executable program stored in the memory, implements the object identifying method described in any of the foregoing embodiments.
In this embodiment, the image acquisition device 401 may include an image sensor based on a CCD (charge coupled device) or a CMOS (complementary metal oxide semiconductor). The positioning module 402 may be a GPS positioning module or an RTK positioning module. Optionally, the image acquisition device 401 is further provided with a laser head 403 with a rotating shaft, which can automatically point to the identified object to guide the staff to correctly correspond to the target object in reality. The processing equipment 500 may be a tablet computer, a notebook, a palmtop computer, a desktop computer, a server, or other equipment including a processor. The image acquisition device 401 and the processing equipment 500 may communicate over a wired or wireless connection, and the processing equipment 500 may receive the image information acquired by the image acquisition device 401.
Referring to Fig. 5, the processing equipment 500 provided in the embodiment of the present invention includes: at least one processor 501, a memory 502, a user interface 503, and at least one network interface 504. The various components in the processing equipment 500 are coupled together through a bus system 505. It can be understood that the bus system 505 is used to realize the connection and communication between these components. In addition to a data bus, the bus system 505 also includes a power bus, a control bus, and a status signal bus. However, for the sake of clarity, the various buses are all labeled as the bus system 505 in Fig. 5.
The user interface 503 may include a display, a keyboard, a mouse, a trackball, a click wheel, keys, buttons, a touch pad, a touch screen, or the like.
It can be understood that the memory 502 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memories.
The memory 502 in the embodiment of the present invention is used to store various types of data to support the execution of the object identifying method. Examples of such data include any executable program for running on the processing equipment 500, such as the executable program 5021; the program implementing the object identifying method of the embodiment of the present invention may be contained in the executable program 5021.
The object identifying method disclosed in the embodiment of the present invention may be applied in, or implemented by, the processor 501. The processor 501 may be an integrated circuit chip with signal processing capability. During implementation, each step of the object identifying method may be completed by an integrated logic circuit of hardware in the processor 501 or by instructions in the form of software. The above processor 501 may be a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 501 may implement or execute the methods, steps, and logic diagrams disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium; the storage medium is located in the memory 502, and the processor 501 reads the information in the memory 502 and, in combination with its hardware, completes the steps of the object identifying method provided by the embodiment of the present invention.
An embodiment of the present invention also provides a readable storage medium, which may include: a removable storage device, a random access memory (RAM, Random Access Memory), a read-only memory (ROM, Read-Only Memory), a magnetic disk, an optical disc, or other media capable of storing program code. The readable storage medium stores an executable program; when the executable program is executed by a processor, the object identifying method described in any embodiment of the present invention is implemented.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the embodiments of the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, equipment (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing system to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing system produce a device for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory that can guide a computer or other programmable data processing system to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, which realizes the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing system, so that a series of operation steps are executed on the computer or other programmable system to produce computer-implemented processing; thus the instructions executed on the computer or other programmable system provide steps for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily conceivable by those familiar with the art within the technical scope disclosed by the present invention shall be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. An object identifying method, characterized by comprising:
obtaining first image information collected by an image acquisition device;
capturing second image information corresponding to an object to be identified based on the first image information;
obtaining location information of the object to be identified based on the second image information and real-time positioning information of the image acquisition device;
performing location matching based on the location information of the object to be identified and a preset object database, and determining mapping relations between the object to be identified and objects in the object database.
2. The object identifying method according to claim 1, characterized in that before capturing the second image information corresponding to the object to be identified based on the first image information, the method comprises:
generating, through machine learning training, a comparison model for identifying objects;
and that capturing the second image information corresponding to the object to be identified based on the first image information comprises:
capturing, based on the comparison model, the second image information corresponding to the object to be identified in the first image information.
3. The object identifying method according to claim 1, characterized in that obtaining the location information of the object to be identified based on the second image information and the positioning information of the image acquisition device comprises:
obtaining the real-time positioning information of the image acquisition device;
determining the center point position of the object to be identified based on at least two frames of the second image information and the real-time positioning information corresponding to each frame of the second image information.
4. The object identifying method according to claim 3, characterized in that determining the center point position of the object to be identified based on the at least two frames of the second image information and the real-time positioning information corresponding to each frame of the second image information comprises:
extracting feature points of each frame of the second image information;
saving the feature points of each frame of the second image information and the real-time positioning information corresponding to each frame, and determining, by inter-frame matching, feature points that meet a set condition to form matched point pairs;
calculating an essential matrix of the image acquisition device according to the matched point pairs, and performing eigenvalue decomposition on the essential matrix to obtain a relative motion matrix of the image acquisition device between two matched frames;
obtaining spatial coordinates of each feature point based on the relative motion matrix and pixel coordinates of the matched point pairs;
determining the center point position of the object to be identified based on the spatial coordinates of the feature points.
5. The object identifying method according to claim 1, characterized in that before performing location matching based on the location information of the object to be identified and the preset object database, the method comprises:
establishing the object database, the object database storing the location information of each object and secondary parameter information for characterizing attribute features of each object.
6. The object identifying method according to claim 1, characterized in that performing location matching based on the location information of the object to be identified and the preset object database, and determining the mapping relations between the object to be identified and objects in the object database, comprises:
determining, based on the center point position of the object to be identified, objects in the object database that meet a matching condition;
when one object in the object database is matched, establishing a mapping relation between the matched object and the object to be identified;
when multiple objects in the object database are matched, determining the object closest to the current viewing angle and establishing its mapping relation with the object to be identified, or determining an object according to input information and establishing its mapping relation with the object to be identified.
7. The object identifying method according to claim 1, characterized by further comprising:
outputting, according to the mapping relations between the object to be identified and objects in the object database, the secondary parameter information for characterizing the attribute features of the object to be identified, and/or maintaining the object database.
8. An object recognition device, characterized by comprising:
an acquisition module for obtaining first image information collected by an image acquisition device;
a capture module for capturing second image information corresponding to an object to be identified based on the first image information;
a location information acquisition module for obtaining location information of the object to be identified based on the second image information and real-time positioning information of the image acquisition device;
a match determining module for performing location matching based on the location information of the object to be identified and a preset object database, and determining mapping relations between the object to be identified and objects in the object database.
9. An object recognition system, characterized by comprising an image acquisition device equipped with a positioning module for obtaining real-time positioning information of the image acquisition device, the image acquisition device being communicatively connected to a processing equipment, the processing equipment comprising:
a memory for storing an executable program;
a processor that, when executing the executable program stored in the memory, implements the object identifying method according to any one of claims 1 to 7.
10. A computer storage medium, characterized in that it stores an executable program which, when executed by a processor, implements the object identifying method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811431164.4A CN109559347A (en) | 2018-11-28 | 2018-11-28 | Object identifying method, device, system and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811431164.4A CN109559347A (en) | 2018-11-28 | 2018-11-28 | Object identifying method, device, system and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109559347A true CN109559347A (en) | 2019-04-02 |
Family
ID=65867529
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811431164.4A Pending CN109559347A (en) | 2018-11-28 | 2018-11-28 | Object identifying method, device, system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109559347A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110335313A (en) * | 2019-06-17 | 2019-10-15 | 腾讯科技(深圳)有限公司 | Audio collecting device localization method and device, method for distinguishing speek person and system |
CN110956642A (en) * | 2019-12-03 | 2020-04-03 | 深圳市未来感知科技有限公司 | Multi-target tracking identification method, terminal and readable storage medium |
CN111652940A (en) * | 2020-04-30 | 2020-09-11 | 平安国际智慧城市科技股份有限公司 | Target abnormity identification method and device, electronic equipment and storage medium |
CN112926371A (en) * | 2019-12-06 | 2021-06-08 | ***通信集团设计院有限公司 | Road surveying method and system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106557549A (en) * | 2016-10-24 | 2017-04-05 | 珠海格力电器股份有限公司 | The method and apparatus of identification destination object |
CN108108748A (en) * | 2017-12-08 | 2018-06-01 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
CN108109176A (en) * | 2017-12-29 | 2018-06-01 | 北京进化者机器人科技有限公司 | Articles detecting localization method, device and robot |
-
2018
- 2018-11-28 CN CN201811431164.4A patent/CN109559347A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106557549A (en) * | 2016-10-24 | 2017-04-05 | 珠海格力电器股份有限公司 | The method and apparatus of identification destination object |
CN108108748A (en) * | 2017-12-08 | 2018-06-01 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
CN108109176A (en) * | 2017-12-29 | 2018-06-01 | 北京进化者机器人科技有限公司 | Articles detecting localization method, device and robot |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110335313A (en) * | 2019-06-17 | 2019-10-15 | 腾讯科技(深圳)有限公司 | Audio collecting device localization method and device, method for distinguishing speek person and system |
WO2020253616A1 (en) * | 2019-06-17 | 2020-12-24 | 腾讯科技(深圳)有限公司 | Audio collection device positioning method and apparatus, and speaker recognition method and system |
CN110335313B (en) * | 2019-06-17 | 2022-12-09 | 腾讯科技(深圳)有限公司 | Audio acquisition equipment positioning method and device and speaker identification method and system |
US11915447B2 (en) | 2019-06-17 | 2024-02-27 | Tencent Technology (Shenzhen) Company Limited | Audio acquisition device positioning method and apparatus, and speaker recognition method and system |
CN110956642A (en) * | 2019-12-03 | 2020-04-03 | 深圳市未来感知科技有限公司 | Multi-target tracking identification method, terminal and readable storage medium |
CN112926371A (en) * | 2019-12-06 | 2021-06-08 | ***通信集团设计院有限公司 | Road surveying method and system |
CN112926371B (en) * | 2019-12-06 | 2023-11-03 | ***通信集团设计院有限公司 | Road survey method and system |
CN111652940A (en) * | 2020-04-30 | 2020-09-11 | 平安国际智慧城市科技股份有限公司 | Target abnormity identification method and device, electronic equipment and storage medium |
WO2021217859A1 (en) * | 2020-04-30 | 2021-11-04 | 平安国际智慧城市科技股份有限公司 | Target anomaly identification method and apparatus, and electronic device and storage medium |
CN111652940B (en) * | 2020-04-30 | 2024-06-04 | 平安国际智慧城市科技股份有限公司 | Target abnormality recognition method, target abnormality recognition device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110568447B (en) | Visual positioning method, device and computer readable medium | |
Papazov et al. | Real-time 3D head pose and facial landmark estimation from depth images using triangular surface patch features | |
CN109559347A (en) | Object identifying method, device, system and storage medium | |
Zia et al. | Detailed 3d representations for object recognition and modeling | |
US10515259B2 (en) | Method and system for determining 3D object poses and landmark points using surface patches | |
Huang et al. | A coarse-to-fine algorithm for matching and registration in 3D cross-source point clouds | |
CN107990899A (en) | A kind of localization method and system based on SLAM | |
US20170337701A1 (en) | Method and system for 3d capture based on structure from motion with simplified pose detection | |
JP2012128744A (en) | Object recognition device, object recognition method, learning device, learning method, program and information processing system | |
GB2512460A (en) | Position and orientation measuring apparatus, information processing apparatus and information processing method | |
CN111476827A (en) | Target tracking method, system, electronic device and storage medium | |
CN115063482A (en) | Article identification and tracking method and system | |
US20140300597A1 (en) | Method for the automated identification of real world objects | |
CN114766042A (en) | Target detection method, device, terminal equipment and medium | |
KR20210046217A (en) | Method and apparatus for detecting an object using detection of a plurality of regions | |
Jiang et al. | Learned local features for structure from motion of uav images: A comparative evaluation | |
CN104182747A (en) | Object detection and tracking method and device based on multiple stereo cameras | |
Cui et al. | Precise landing control of UAV based on binocular visual SLAM | |
Zhang et al. | Dense 3d mapping for indoor environment based on feature-point slam method | |
GB2523776A (en) | Methods for 3D object recognition and registration | |
JP7265143B2 (en) | Display control method, display control program and information processing device | |
Wang et al. | Stereo rectification based on epipolar constrained neural network | |
Dubenova et al. | D-inloc++: Indoor localization in dynamic environments | |
Wang et al. | [Retracted] Aided Evaluation of Motion Action Based on Attitude Recognition | |
Si et al. | [Retracted] Multifeature Fusion Human Pose Tracking Algorithm Based on Motion Image Analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||