CN109190502A - Method and apparatus for generating location information - Google Patents
- Publication number
- CN109190502A (Application No. CN201810906602.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- sample
- iris
- detected
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
Abstract
Embodiments of the present application disclose a method and apparatus for generating location information. One specific embodiment of the method includes: acquiring an image to be detected, the image to be detected containing an iris image; and inputting the image to be detected into a pre-trained iris detection model to obtain location information characterizing the position of the iris image within the image to be detected, where the iris detection model characterizes the correspondence between an image to be detected and the position of the iris image within it. This embodiment locates the position of an iris image within an image to be detected by means of an iris detection model.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for generating location information.
Background
The iris, as a feature intrinsic to many organisms, can be used for object identification — for example, by locating the iris image within an image and recognizing the object from it. The prior art generally requires manually defining iris-image features in advance, extracting those features from the image to be detected, and then performing template matching in order to locate the iris image within the image to be detected.
Summary of the invention
The embodiment of the present application proposes the method and apparatus for generating location information.
In a first aspect, an embodiment of the present application provides a method for generating location information, the method comprising: acquiring an image to be detected, the image to be detected containing an iris image; and inputting the image to be detected into a pre-trained iris detection model to obtain location information characterizing the position of the iris image within the image to be detected, where the iris detection model characterizes the correspondence between an image to be detected and the position of the iris image within it.
In some embodiments, the iris detection model is trained as follows: a sample set is obtained, where each sample includes a sample image containing an iris image and sample annotation information characterizing the position of the iris image within the corresponding sample image; a sample is selected from the sample set, and the following training step is executed: the sample image of the selected sample is input into an initial neural network to obtain location information characterizing the position of the iris image within the sample image; based on that location information and the sample annotation information corresponding to the sample image, it is determined whether training of the initial neural network is complete; and, in response to determining that training is complete, the trained initial neural network is used as the iris detection model.
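The training step above can be sketched as a loop. This is an illustrative sketch only: `predict_box` stands in for the initial neural network's forward pass and `adjust_params` for its parameter update — both hypothetical placeholders, not the patent's actual implementation.

```python
def box_error(predicted, annotated):
    """Mean absolute difference between two (x, y, w, h) boxes."""
    return sum(abs(p - a) for p, a in zip(predicted, annotated)) / len(predicted)

def train_iris_detector(samples, predict_box, adjust_params,
                        max_error=5.0, max_rounds=100):
    """Repeat the training step until the prediction error falls below the threshold.

    samples       -- list of (sample_image, annotated_box) pairs
    predict_box   -- callable: image -> predicted (x, y, w, h) box
    adjust_params -- callable invoked when training is not yet complete
    """
    for _ in range(max_rounds):
        image, annotated = samples[0]  # in practice a sample is drawn from the set
        predicted = predict_box(image)
        if box_error(predicted, annotated) < max_error:
            return predict_box         # training complete: the network becomes the model
        adjust_params()                # e.g. a BP/SGD update, then retry
    raise RuntimeError("training did not converge")
```

The stopping criterion mirrors the embodiment: training ends as soon as the error between the predicted position and the annotated position drops below a preset error value.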
In some embodiments, determining, based on the location information and the sample annotation information corresponding to the sample image, whether training of the initial neural network is complete comprises: in response to the error between the position indicated by the location information and the position indicated by the sample annotation information being less than a preset error value, determining that training of the initial neural network is complete.
In some embodiments, the method further includes: in response to determining that training of the initial neural network is not complete, adjusting the relevant parameters of the initial neural network, selecting a previously unused sample from the sample set, and continuing the training step with the adjusted initial neural network as the initial neural network.
In some embodiments, determining, based on the location information and the sample annotation information corresponding to the sample image, whether training of the initial neural network is complete comprises: in response to the error between the position indicated by the location information and the position indicated by the sample annotation information being greater than or equal to the preset error value, determining that training of the initial neural network is not complete.
In some embodiments, the image to be detected contains a pigeon iris image, the sample images contain pigeon iris images, and the sample annotation information indicates the position of the pigeon iris image within the corresponding sample image.
In a second aspect, an embodiment of the present application provides an apparatus for generating location information, the apparatus comprising: an image-to-be-detected acquisition unit configured to obtain an image to be detected, the image to be detected containing an iris image; and a detection unit configured to input the image to be detected into a pre-trained iris detection model and obtain location information characterizing the position of the iris image within the image to be detected, where the iris detection model characterizes the correspondence between an image to be detected and the position of the iris image within it.
In some embodiments, the detection unit comprises: a sample set acquisition unit configured to obtain a sample set, where each sample includes a sample image containing an iris image and sample annotation information characterizing the position of the iris image within the corresponding sample image; and a training unit configured to select a sample from the sample set and execute the following training step: input the sample image of the selected sample into an initial neural network to obtain location information characterizing the position of the iris image within the sample image; based on that location information and the sample annotation information corresponding to the sample image, determine whether training of the initial neural network is complete; and, in response to determining that training is complete, use the trained initial neural network as the iris detection model.
In some embodiments, the training unit is further configured to determine that training of the initial neural network is complete in response to the error between the position indicated by the location information and the position indicated by the sample annotation information corresponding to the sample image being less than a preset error value.
In some embodiments, the apparatus is further configured to: in response to determining that training of the initial neural network is not complete, adjust the relevant parameters of the initial neural network, select a previously unused sample from the sample set, and continue the training step with the adjusted initial neural network as the initial neural network.
In some embodiments, the training unit is further configured to determine that training of the initial neural network is not complete in response to the error between the position indicated by the location information and the position indicated by the sample annotation information corresponding to the sample image being greater than or equal to the preset error value.
In some embodiments, the image to be detected contains a pigeon iris image, the sample images contain pigeon iris images, and the sample annotation information indicates the position of the pigeon iris image within the corresponding sample image.
In a third aspect, an embodiment of the present application provides a server comprising: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium storing a computer program which, when executed by a processor, implements the method described in any implementation of the first aspect.
The method and apparatus for generating location information provided by the embodiments of the present application first train an iris detection model in advance and then input the image to be detected into the iris detection model, thereby obtaining location information for the position of the iris image within the image to be detected. The method and apparatus thus locate the iris image within the image to be detected by means of the iris detection model.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for generating location information according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for generating location information according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for generating location information according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for generating location information according to the present application;
Fig. 6 is a structural schematic diagram of a computer system adapted to implement a server of an embodiment of the present application.
Specific embodiment
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the relevant invention, not to limit it. It should also be noted that, for convenience of description, only the parts relevant to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary architecture 100 to which the method for generating location information or the apparatus for generating location information of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include an image capture device 101, a database server 102, a server 103, terminal devices 104 and 105, and networks 106 and 107. The network 106 provides the medium of the communication links among the image capture device 101, the database server 102, and the server 103. The network 107 provides the medium of the communication links between the server 103 and the terminal devices 104 and 105. The networks 106 and 107 may include various connection types, such as wired or wireless communication links or fiber optic cables.
The image capture device 101 may be any electronic device with a camera function. In practice, the images acquired by the image capture device 101 may be stored in the database server 102: the image capture device 101 may transmit the acquired images to the database server 102 over the network 106. It should be noted that the image capture device 101 may also transmit the acquired images to the database server 102 by other means (for example, over a data cable).
The server 103 may obtain the images acquired by the image capture device 101 over the network 106, and may also obtain images pre-stored in the database server 102. It should be noted that the server 103 may likewise obtain the images acquired by the image capture device 101 by other means (for example, over a data cable). In addition, the images acquired by the image capture device 101 may be stored directly on the server 103, in which case the server 103 can directly extract and process the locally stored images, and the database server 102 may be absent.
The terminal devices 104 and 105 interact with the server 103 over the network 107 to receive or send messages and the like. Various communication client applications, such as image recognition applications, may be installed on the terminal devices 104 and 105.
The terminal devices 104 and 105 may be hardware or software. When the terminal devices 104 and 105 are hardware, they may be various electronic devices that have a display screen and support image display, including but not limited to smartphones, tablet computers, e-book readers, laptop portable computers, desktop computers, and the like. When the terminal devices 104 and 105 are software, they may be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 103 may be a server providing various services, for example a background image recognition server for the terminal devices 104 and 105. The background image recognition server may perform processing such as feature extraction on the obtained images, generate processing results, and feed them back to the terminal devices 104 and 105. In practice, the server 103 may also store the image processing results locally, in which case the terminal devices 104 and 105 may be absent.
It should be noted that the method for generating location information provided by the embodiments of the present application is generally executed by the server 103; correspondingly, the apparatus for generating location information is generally arranged in the server 103.
It should be noted that the server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of image capture devices, database servers, networks, servers, and terminal devices in Fig. 1 are merely illustrative. There may be any number of image capture devices, database servers, networks, servers, and terminal devices, as required by the implementation.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating location information according to the present application is shown. The method for generating location information comprises the following steps:
Step 201: obtain an image to be detected.
In the present embodiment, the executing body of the method for generating location information (e.g., the server 103 shown in Fig. 1) may obtain the image to be detected in various ways. The image to be detected generally contains an iris image. Optionally, the image to be detected may contain a pigeon iris image; it may likewise contain the iris image of another organism.
In the present embodiment, the executing body may obtain the image to be detected from a communicatively connected server (e.g., the database server 102 shown in Fig. 1) or terminal device (e.g., the image capture device 101 shown in Fig. 1). In practice, the image to be detected may also be stored locally on the executing body, in which case the executing body can obtain it directly from local storage.
Step 202: input the image to be detected into a pre-trained iris detection model, and obtain location information characterizing the position of the iris image within the image to be detected.
In the present embodiment, the executing body may input the image to be detected into a pre-trained iris detection model to obtain location information characterizing the position of the iris image within the image to be detected. The iris detection model characterizes the correspondence between an image to be detected and the position of the iris image within it. For example, the iris detection model may be a correspondence table between target images and location information for the positions of iris images within those target images, where the target images may be a preset number of images specified in advance by a technician, and the location information for the iris positions (e.g., coordinates or annotation boxes) may have been annotated in advance by the technician. In this case, the executing body can perform a matching search in the correspondence table, determine the target image corresponding to the image to be detected, and thereby obtain the location information for the position of the iris image within the image to be detected. Optionally, the iris detection model may also be obtained by training with a machine learning method, as shown in steps 401-402.
It should be noted that the location information for the position of the iris image within the image to be detected may take various forms, for example coordinates or an annotation box.
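The lookup-table form of the iris detection model can be sketched minimally as follows. This is an illustrative simplification: matching the image to be detected against a target image is reduced here to key equality, and the table contents are invented for illustration.

```python
# Maps a target-image key to the annotated position (x, y, width, height)
# of the iris image within that target image, as pre-annotated by a technician.
correspondence_table = {
    "target_001": (120, 80, 40, 40),
    "target_002": (95, 110, 38, 38),
}

def locate_iris(image_key, table):
    """Return the stored location info for the matched target image, if any."""
    return table.get(image_key)
```

For example, `locate_iris("target_001", correspondence_table)` yields the annotation box `(120, 80, 40, 40)`, while an image with no corresponding target yields `None`.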
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of one application scenario of the method for generating location information according to the present embodiment, here used for detecting pigeon irises. In the application scenario of Fig. 3, a server 300 stores an iris detection model in advance. Here, the iris detection model can be regarded as a correspondence table (e.g., table 301) between target images and location information for the positions of pigeon iris images within those target images, where each target image is stored in association with that location information. When the executing body of the method for generating location information gets an image to be detected 302, it can search the table 301 in the server 300 and thereby determine the target image 303 corresponding to the image to be detected 302, along with the location information (e.g., annotation box 304) for the position of the pigeon iris image within the target image 303. The executing body thus obtains the location information (e.g., annotation boxes 305 and 306) for the positions of the pigeon iris images within the image to be detected 302.
The method provided by the above embodiment of the present application matches the image to be detected against a correspondence table between target images and location information for the positions of iris images within those target images, and thereby obtains location information for the position of the iris image within the image to be detected. The method thus determines the position of the iris image within the image to be detected by means of the iris detection model.
With further reference to Fig. 4, it illustrates a flow 400 of another embodiment of the method for generating location information. The flow 400 of the method for generating location information comprises the following steps:
Step 401: obtain a sample set.
In the present embodiment, each sample in the sample set includes a sample image and sample annotation information. The sample image contains an iris image, and the sample annotation information characterizes the position of the iris image within the corresponding sample image. In practice, the sample annotation information may be generated with an existing annotation tool (e.g., Matlab).
In the present embodiment, the executing body that trains the iris detection model may obtain the sample set in various ways: for example, from a server that stores the sample set (e.g., the database server 102 shown in Fig. 1), or by acquiring images through a terminal device (e.g., the image capture device 101 shown in Fig. 1) and then processing them to obtain the sample set. It should be noted that the executing body that trains the iris detection model may be the same as, or different from, the executing body of the method for generating location information.
Step 402: select a sample from the sample set, and execute the following training step: input the sample image of the selected sample into an initial neural network to obtain location information characterizing the position of the iris image within the sample image; based on that location information and the sample annotation information corresponding to the sample image, determine whether training of the initial neural network is complete; and, in response to determining that training is complete, use the trained initial neural network as the iris detection model.
In the present embodiment, the executing body that trains the iris detection model may select a sample from the sample set and then input the sample image of the selected sample into the initial neural network, obtaining location information (e.g., coordinates or an annotation box) characterizing the position of the iris image within the sample image.
In the present embodiment, the initial neural network may include a convolutional neural network. Here, a convolutional neural network may include convolutional layers, pooling layers, and fully connected layers. First, the executing body that trains the iris detection model may input the sample image successively into the convolutional and pooling layers, performing operations such as feature extraction and dimensionality reduction on the sample image to generate a feature map. In addition, the executing body may extract a certain number of candidate boxes (e.g., rectangular boxes) from the sample image by various methods, for example the SS (Selective Search) algorithm or the RPN (Region Proposal Network) algorithm. The executing body may then map the extracted candidate boxes onto the feature map and input the feature map with the mapped candidate boxes into the fully connected layers, which integrate the features within each candidate box. The executing body can thus determine the candidate boxes containing a target object by various methods, for example the SVM (Support Vector Machines) algorithm or the Softmax algorithm. Here, a target object generally refers to an object contained in the foreground of the sample image (e.g., the iris image in the sample image), so it will be appreciated that the candidate boxes thus determined include those containing the iris image. The executing body can therefore compare the determined candidate boxes with the annotation information corresponding to the sample image to obtain the candidate box containing the iris image. It will be appreciated that the location information for the position of the iris image within the sample image may be the candidate box containing the iris image (i.e., an annotation box), or the coordinates of any vertex of that candidate box.
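The final comparison step — keeping the candidate box that best agrees with the sample's annotation — can be sketched as follows. Using IoU (intersection-over-union) as the overlap measure is a common choice but an assumption here; the passage above only says the boxes are "compared".

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def pick_iris_box(candidates, annotated):
    """Return the candidate box that overlaps the annotation box best."""
    return max(candidates, key=lambda c: iou(c, annotated))
```

For instance, among candidates `(0, 0, 10, 10)`, `(48, 48, 72, 72)`, and `(100, 100, 120, 120)`, only the middle one overlaps an annotation box of `(50, 50, 70, 70)`, so it is selected as the iris location.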
In some optional implementations of the present embodiment, determining, based on the location information and the sample annotation information corresponding to the sample image, whether training of the initial neural network is complete comprises: in response to the error between the position indicated by the location information and the position indicated by the sample annotation information corresponding to the sample image being less than a preset error value, determining that training of the initial neural network is complete.
In these implementations, if the error between the obtained location information and the sample annotation information corresponding to the sample image is less than the preset error value, the executing body that trains the iris detection model can end training.
In some optional implementations of the present embodiment, determining, based on the location information and the sample annotation information corresponding to the sample image, whether training of the initial neural network is complete comprises: in response to the error between the position indicated by the location information and the position indicated by the sample annotation information corresponding to the sample image being greater than or equal to the preset error value, determining that training of the initial neural network is not complete.
In these implementations, if the error between the obtained location information and the sample annotation information corresponding to the sample image is greater than or equal to the preset error value, the executing body that trains the iris detection model can determine that training of the initial neural network is not complete.
In that case, the executing body that trains the iris detection model can further adjust the relevant parameters of the initial neural network and select a previously unused sample from the sample set, then continue the above training step with the adjusted initial neural network as the initial neural network, until training of the initial neural network is complete. In practice, the executing body that trains the iris detection model may adjust the relevant parameters of the initial neural network by various methods, for example the BP (Back Propagation) algorithm or the SGD (Stochastic Gradient Descent) algorithm.
In addition, the executing body that trains the iris detection model may also adjust the candidate box containing the iris image, thereby reducing the error between the location information for the position of the iris image within the sample image and the sample annotation information, and thus effecting the adjustment of the relevant parameters of the initial neural network.
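The SGD-style parameter adjustment mentioned above can be illustrated on a toy one-parameter model, where the single parameter `w` is itself the predicted x-coordinate; this is a didactic sketch, not the patent's implementation — the real network would update all of its parameters this way via backpropagation.

```python
def sgd_step(w, x_true, lr=0.1):
    """One stochastic gradient descent step on the squared position error."""
    grad = 2 * (w - x_true)   # d/dw of (w - x_true)**2
    return w - lr * grad

# Repeated adjustment drives the predicted coordinate toward the annotated one.
w = 0.0
for _ in range(50):
    w = sgd_step(w, 10.0)
```

Each step shrinks the remaining error by a constant factor (here 0.8), so after 50 steps `w` is within a fraction of a pixel of the annotated coordinate 10.0 and the error-threshold stopping criterion would be met.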
In some optional implementations of the present embodiment, the sample images may contain pigeon iris images, in which case the sample annotation information indicates the position of the pigeon iris region within the corresponding sample image.
Step 403: obtain an image to be detected.
Step 404: input the image to be detected into the pre-trained iris detection model, and obtain location information characterizing the position of the iris image within the image to be detected.
In the present embodiment, the executing body of the method for generating location information (e.g., the server 103 shown in Fig. 1) may obtain the image to be detected and then input it into the iris detection model trained by steps 401-402, obtaining location information characterizing the position of the iris image within the image to be detected. For the specific processing of steps 403-404 and the technical effects thereof, reference may be made to steps 201-202 in the embodiment corresponding to Fig. 2; details are not repeated here.
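Steps 403-404 amount to a single forward pass through the trained model. A sketch under the assumption that the trained model is any callable mapping an image to an `(x, y, w, h)` box — for instance, the output of the training sketch above — with the derived `center` field added purely for illustration:

```python
def generate_location_info(image, trained_model):
    """Return location info characterizing the iris position in the image."""
    x, y, w, h = trained_model(image)
    return {"box": (x, y, w, h), "center": (x + w / 2, y + h / 2)}
```

The returned dictionary is one possible concrete form of the "location information" — the embodiments leave the exact representation open (coordinates, annotation box, etc.).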
Figure 4, it is seen that being used to generate location information in the present embodiment compared with the corresponding embodiment of Fig. 2
The process 400 of method highlights the training step of iris detection model.Thus, it is possible to train the good iris detection mould of robustness
Type, and then realize and pass through position of the iris detection model orientation iris image in image to be detected.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for generating location information. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied in various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating location information provided by the present embodiment includes an image-to-be-detected acquisition unit 501 and a detection unit 502. The image-to-be-detected acquisition unit 501 is configured to obtain an image to be detected, the image to be detected containing an iris image. The detection unit 502 is configured to input the image to be detected into a pre-trained iris detection model and obtain location information characterizing the position of the iris image within the image to be detected, where the iris detection model characterizes the correspondence between an image to be detected and the position of the iris image within it.
In the present embodiment, for the specific processing of the image-to-be-detected acquisition unit 501 and the detection unit 502 in the apparatus 500 for generating location information, and the technical effects thereof, reference may be made to the related descriptions of step 201 and step 202 in the embodiment corresponding to Fig. 2; details are not repeated here.
In some optional implementations of the present embodiment, the detection unit 502 includes a sample set acquisition unit (not shown) and a training unit (not shown). The sample set acquisition unit may be configured to obtain a sample set, where each sample includes a sample image containing an iris image and sample annotation information characterizing the position of the iris image within the corresponding sample image. The training unit may be configured to select a sample from the sample set and execute the following training step: input the sample image of the selected sample into an initial neural network to obtain location information characterizing the position of the iris image within the sample image; based on that location information and the sample annotation information corresponding to the sample image, determine whether training of the initial neural network is complete; and, in response to determining that training is complete, use the trained initial neural network as the iris detection model.
In some optional implementations of the present embodiment, the training unit may be further configured to determine that training of the initial neural network is complete in response to the error between the position indicated by the location information and the position indicated by the sample annotation information corresponding to the sample image being less than a preset error value.
In some optional implementations of the present embodiment, the apparatus 500 may be further configured to: in response to determining that training of the initial neural network is not complete, adjust the relevant parameters of the initial neural network, select a previously unused sample from the sample set, and continue the training step with the adjusted initial neural network as the initial neural network.
In some optional implementations of this embodiment, the training unit may be further configured to determine that training of the initial neural network is not completed in response to the error between the position indicated by the position information and the position indicated by the sample annotation information corresponding to the sample image being greater than or equal to the preset error value.
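The two implementations above are complementary branches of a single threshold comparison, which can be written as one predicate. The position format (coordinate pairs) and the error metric below are illustrative assumptions; the patent does not fix either.

```python
# Hedged sketch of the completion test: error below the preset value means
# training is judged complete; error at or above it means not complete.

PRESET_ERROR_VALUE = 2.0  # hypothetical threshold; the patent leaves it open

def training_completed(predicted_pos, annotated_pos, preset=PRESET_ERROR_VALUE):
    # Error between the position indicated by the location information and
    # the position indicated by the sample annotation information.
    error = max(abs(p - a) for p, a in zip(predicted_pos, annotated_pos))
    return error < preset  # True: completed; False: not completed
```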
In some optional implementations of this embodiment, the image to be detected contains a pigeon iris image, the sample images contain pigeon iris images, and the sample annotation information indicates the position of the pigeon iris image in the corresponding sample image.
In the apparatus provided by the above embodiment of the present application, the image-to-be-detected acquiring unit 501 first acquires the image to be detected, and the detection unit 502 then obtains position information describing the position of the iris image in the image to be detected. The apparatus thereby locates the iris image within the image to be detected.
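End to end, the apparatus's flow (unit 501 acquires the image, unit 502 runs the iris detection model and emits position information) can be sketched as below. `IrisDetector`, the brightest-cell heuristic, and the bounding-box representation are stand-ins invented for the sketch, not APIs or methods from the patent.

```python
from dataclasses import dataclass

@dataclass
class PositionInfo:
    # Position of the iris image within the image to be detected,
    # expressed here as a bounding box (an assumed representation).
    x: int
    y: int
    width: int
    height: int

class IrisDetector:
    """Stand-in for the pre-trained iris detection model: maps an image
    to be detected to the position of the iris image within it."""
    def detect(self, image):
        h, w = len(image), len(image[0])
        # Dummy heuristic: report the brightest cell as the iris centre.
        ys, xs = max(((r, c) for r in range(h) for c in range(w)),
                     key=lambda rc: image[rc[0]][rc[1]])
        return PositionInfo(x=xs, y=ys, width=1, height=1)

# Unit 501: acquire the image to be detected (a toy 3x3 grayscale grid).
image = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
# Unit 502: input it to the model and obtain the position information.
pos = IrisDetector().detect(image)
```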
Referring now to Fig. 6, a schematic structural diagram of a computer system 600 of a server suitable for implementing embodiments of the present application is shown. The server shown in Fig. 6 is only an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom can be installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609 and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-described functions defined in the methods of the present application are performed.
It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
The computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order shown in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as comprising an image-to-be-detected acquiring unit and a detection unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the image-to-be-detected acquiring unit may also be described as "a unit that acquires an image to be detected".
As another aspect, the present application also provides a computer-readable medium, which may be included in the server described in the above embodiments, or may exist separately without being assembled into the server. The computer-readable medium carries one or more programs which, when executed by the server, cause the server to: acquire an image to be detected, wherein the image to be detected contains an iris image; and input the image to be detected into a pre-trained iris detection model to obtain position information characterizing the position of the iris image in the image to be detected, wherein the iris detection model characterizes the correspondence between an image to be detected and the position of the iris image in the image to be detected.
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and, without departing from the above inventive concept, also covers other technical solutions formed by any combination of the above technical features or their equivalents, such as technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (14)
1. A method for generating position information, comprising:
acquiring an image to be detected, wherein the image to be detected contains an iris image; and
inputting the image to be detected into a pre-trained iris detection model to obtain position information characterizing the position of the iris image in the image to be detected, wherein the iris detection model characterizes the correspondence between an image to be detected and the position of the iris image in the image to be detected.
2. The method according to claim 1, wherein the iris detection model is obtained by training as follows:
acquiring a sample set, wherein each sample includes a sample image and sample annotation information, the sample image contains an iris image, and the sample annotation information characterizes the position of the iris image in the corresponding sample image; and
selecting samples from the sample set and performing the following training steps: inputting the sample images of the selected samples into an initial neural network to obtain position information characterizing the position of the iris image in the sample image; determining, based on the position information and the sample annotation information corresponding to the sample image, whether training of the initial neural network is completed; and, in response to determining that training of the initial neural network is completed, using the trained initial neural network as the iris detection model.
3. The method according to claim 2, wherein the determining, based on the position information and the sample annotation information corresponding to the sample image, whether training of the initial neural network is completed comprises:
in response to the error between the position indicated by the position information and the position indicated by the sample annotation information corresponding to the sample image being less than a preset error value, determining that training of the initial neural network is completed.
4. The method according to claim 2, wherein the method further comprises:
in response to determining that training of the initial neural network is not completed, adjusting the relevant parameters of the initial neural network, selecting unused samples from the sample set, and continuing to perform the training steps using the adjusted initial neural network as the initial neural network.
5. The method according to claim 4, wherein the determining, based on the position information and the sample annotation information corresponding to the sample image, whether training of the initial neural network is completed comprises:
in response to the error between the position indicated by the position information and the position indicated by the sample annotation information corresponding to the sample image being greater than or equal to the preset error value, determining that training of the initial neural network is not completed.
6. The method according to any one of claims 2-5, wherein the image to be detected contains a pigeon iris image, the sample images contain pigeon iris images, and the sample annotation information indicates the position of the pigeon iris image in the corresponding sample image.
7. An apparatus for generating position information, comprising:
an image-to-be-detected acquiring unit configured to acquire an image to be detected, wherein the image to be detected contains an iris image; and
a detection unit configured to input the image to be detected into a pre-trained iris detection model to obtain position information characterizing the position of the iris image in the image to be detected, wherein the iris detection model characterizes the correspondence between an image to be detected and the position of the iris image in the image to be detected.
8. The apparatus according to claim 7, wherein the detection unit comprises:
a sample set acquiring unit configured to acquire a sample set, wherein each sample includes a sample image and sample annotation information, the sample image contains an iris image, and the sample annotation information characterizes the position of the iris image in the corresponding sample image; and
a training unit configured to select samples from the sample set and perform the following training steps: inputting the sample images of the selected samples into an initial neural network to obtain position information characterizing the position of the iris image in the sample image; determining, based on the position information and the sample annotation information corresponding to the sample image, whether training of the initial neural network is completed; and, in response to determining that training of the initial neural network is completed, using the trained initial neural network as the iris detection model.
9. The apparatus according to claim 8, wherein the training unit is further configured to:
in response to the error between the position indicated by the position information and the position indicated by the sample annotation information corresponding to the sample image being less than a preset error value, determine that training of the initial neural network is completed.
10. The apparatus according to claim 8, wherein the apparatus is further configured to:
in response to determining that training of the initial neural network is not completed, adjust the relevant parameters of the initial neural network, select unused samples from the sample set, and continue to perform the training steps using the adjusted initial neural network as the initial neural network.
11. The apparatus according to claim 10, wherein the training unit is further configured to:
in response to the error between the position indicated by the position information and the position indicated by the sample annotation information corresponding to the sample image being greater than or equal to the preset error value, determine that training of the initial neural network is not completed.
12. The apparatus according to any one of claims 8-11, wherein the image to be detected contains a pigeon iris image, the sample images contain pigeon iris images, and the sample annotation information indicates the position of the pigeon iris image in the corresponding sample image.
13. A server, comprising:
one or more processors; and
a storage device storing one or more programs thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
14. A computer-readable medium storing a computer program thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810906602.1A CN109190502A (en) | 2018-08-10 | 2018-08-10 | Method and apparatus for generating location information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109190502A true CN109190502A (en) | 2019-01-11 |
Family
ID=64920857
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111079526A (en) * | 2019-11-07 | 2020-04-28 | 中央财经大学 | Carrier pigeon genetic relationship analysis method, device and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105303185A (en) * | 2015-11-27 | 2016-02-03 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Iris positioning method and device |
CN106504233A (en) * | 2016-10-18 | 2017-03-15 | Electric Power Research Institute of State Grid Shandong Electric Power Company | Faster R-CNN-based power component recognition method and system for UAV inspection images |
CN106650575A (en) * | 2016-09-19 | 2017-05-10 | Beijing Xiaomi Mobile Software Co., Ltd. | Face detection method and device |
US20180068198A1 (en) * | 2016-09-06 | 2018-03-08 | Carnegie Mellon University | Methods and Software for Detecting Objects in an Image Using Contextual Multiscale Fast Region-Based Convolutional Neural Network |
Non-Patent Citations (1)
Title |
---|
Wang Lin et al.: "Application of the Faster R-CNN model in vehicle detection", Journal of Computer Applications (《计算机应用》) * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20190111 |