CN109377519A - Target tracking method, device, target tracking equipment and storage medium - Google Patents
- Publication number
- CN109377519A CN109377519A CN201811144949.3A CN201811144949A CN109377519A CN 109377519 A CN109377519 A CN 109377519A CN 201811144949 A CN201811144949 A CN 201811144949A CN 109377519 A CN109377519 A CN 109377519A
- Authority
- CN
- China
- Prior art keywords
- image
- determined
- facial image
- original image
- original
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/292—Multi-camera tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/242—Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/243—Aligning, centring, orientation detection or correction of the image by compensating for image skew or non-uniform image deformations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/179—Human faces, e.g. facial parts, sketches or expressions metadata assisted face recognition
Abstract
The embodiments of the present application disclose a target tracking method, apparatus, target tracking device, and storage medium. The method includes: obtaining original image information, the original image information comprising at least two original images; when a target object is recognized in a first original image, cropping from the first original image a to-be-determined facial image associated with the target object; querying a second original image in the original image information and, if the target object is recognized in it, identifying whether the to-be-determined facial image is also present in the second original image and, if so, outputting the to-be-determined facial image. The scheme makes reasonable use of the originally acquired image information, improves the target tracking scheme, and raises the efficiency of acquiring tracking information.
Description
Technical field
The embodiments of the present application relate to computer technology, and in particular to a target tracking method, apparatus, target tracking device, and storage medium.
Background technique
With the rapid development of video surveillance and network transmission technology, cameras are now commonly installed on streets, at intersections, in stations, and on important buildings in cities at all levels. The main current method of tracking a target person is to analyze the footage captured by these cameras manually in order to identify the target object.
Existing target tracking schemes only track the target object itself; other useful information in the footage is not put to reasonable use, so the tracking information obtained is incomplete. Improvement is needed.
Summary of the invention
The present application provides a target tracking method, apparatus, target tracking device, and storage medium that make reasonable use of the originally acquired image information, improve the target tracking scheme, and raise the efficiency of acquiring tracking information.
In a first aspect, an embodiment of the present application provides a target tracking method, comprising:
obtaining original image information, the original image information comprising at least two original images;
when a target object is recognized in a first original image, cropping from the first original image a to-be-determined facial image associated with the target object;
querying a second original image in the original image information and, if the target object is recognized in it, identifying whether the to-be-determined facial image is present in the second original image and, if so, outputting the to-be-determined facial image.
In a second aspect, an embodiment of the present application further provides a target tracking apparatus, comprising:
an image acquisition module for obtaining original image information, the original image information comprising at least two original images;
an associated-object determining module for, when a target object is recognized in a first original image, cropping from the first original image a to-be-determined facial image associated with the target object;
an information feedback module for querying a second original image in the original image information and, if the target object is recognized in it, identifying whether the to-be-determined facial image is present in the second original image and, if so, outputting the to-be-determined facial image.
In a third aspect, an embodiment of the present application further provides a target tracking device, comprising a processor and a memory storing a computer program runnable on the processor, wherein the processor, when executing the computer program, implements the target tracking method described in the embodiments of the present application.
In a fourth aspect, an embodiment of the present application further provides a storage medium containing target-tracking executable instructions which, when executed by a processor of a target tracking device, perform the target tracking method of the embodiments of the present application.
In the present solution, original image information comprising at least two original images is obtained; when a target object is recognized in a first original image, a to-be-determined facial image associated with the target object is cropped from the first original image; a second original image in the original image information is queried and, if the target object is recognized in it, it is identified whether the to-be-determined facial image is present in the second original image and, if so, the to-be-determined facial image is output. The scheme makes reasonable use of the originally acquired image information, improves the target tracking scheme, and raises the efficiency of acquiring tracking information.
Detailed description of the invention
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is a flowchart of a target tracking method provided by an embodiment of the present application;
Fig. 2 is a flowchart of another target tracking method provided by an embodiment of the present application;
Fig. 3 is a flowchart of another target tracking method provided by an embodiment of the present application;
Fig. 4 is a flowchart of another target tracking method provided by an embodiment of the present application;
Fig. 5 is a structural block diagram of a target tracking apparatus provided by an embodiment of the present application;
Fig. 6 is a structural schematic diagram of a target tracking device provided by an embodiment of the present application.
Specific embodiments
The application is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described here serve to explain the application, not to restrict it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the application rather than the entire structure.
Fig. 1 is a flowchart of a target tracking method provided by an embodiment of the present application. The method is applicable to discovering associates who have some contact with a target object. It can be executed by the target tracking device provided by the embodiments of the present application, whose target tracking apparatus may be implemented in software and/or hardware. As shown in Fig. 1, the concrete scheme of the present embodiment is as follows:
Step S101: obtain original image information, the original image information comprising at least two original images.
Wherein, the original image information may be composed of images captured by one or more surveillance cameras installed at different positions (for example at crossings, shopping malls, subway entrances, or bus stops). The original image information contains the captured facial images and comprises at least two original images. Illustratively, it may be a set of original images arranged in chronological order.
Step S102: when a target object is recognized in a first original image, crop from the first original image the to-be-determined facial image associated with the target object.
Wherein, the target object may be a target that has been determined to need tracking, or one whose tracking has finished. The first original image is any image in the original image information. If the target object is identified in the first original image, the to-be-determined facial image associated with the target object is cropped from the first original image accordingly. The person corresponding to a to-be-determined facial image is a target for whom it is not yet certain whether tracking is needed, i.e., whether that person is to be tracked and fed back is still in a to-be-determined state.
In one embodiment, the to-be-determined facial images associated with the target object may be the other facial images, apart from the target object, identified in the first original image. For example, if the target object is identified in the first original image and the same image also contains other facial images (say, three facial images), then those three facial images are determined to be the to-be-determined facial images associated with the target object.
Step S103: query a second original image in the original image information; if the target object is recognized in it, identify whether the to-be-determined facial image is present in the second original image, and, if so, output the to-be-determined facial image.
Wherein, the second original image may be one or more images, for example other original images acquired by acquisition devices other than the one that acquired the first original image. If the target object is recognized, it is further identified whether the to-be-determined facial image is present in the second original image and, if so, the to-be-determined facial image is output. In one embodiment, the facial images present in the second image are compared with the previously determined to-be-determined facial images, and a to-be-determined facial image is output if the comparison is consistent.
As shown above, by identifying and tracking the associated facial images in the original image information in which the target object appears, and by outputting feedback for persons who appear together with the target object two or more times, the scheme makes reasonable use of the originally acquired image information, improves the target tracking scheme, and raises the efficiency of acquiring tracking information.
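The flow of steps S101 to S103 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: `detect_target` and `detect_faces` are hypothetical stand-ins for real recognition models, and each image is reduced to a labeled record.

```python
# Minimal sketch of the tracking flow: crop to-be-determined faces from a
# first image containing the target, then output any face that re-appears
# with the target in a later image.

def detect_target(image):
    """Pretend recognizer: returns True if the target object appears."""
    return "target" in image["people"]

def detect_faces(image):
    """Pretend face detector: returns the non-target face labels seen."""
    return [p for p in image["people"] if p != "target"]

def track(original_images):
    candidates = set()   # to-be-determined faces seen once with the target
    confirmed = []       # faces seen with the target at least twice
    for image in original_images:
        if not detect_target(image):
            continue
        faces = set(detect_faces(image))
        confirmed.extend(faces & candidates)  # re-appeared with the target
        candidates |= faces
    return confirmed

# Face "A" appears with the target in both images, face "B" only once,
# so only "A" is output as an associate.
images = [
    {"people": ["target", "A", "B"]},
    {"people": ["target", "A", "C"]},
]
print(track(images))  # ['A']
```

The two-sighting rule here mirrors the feedback condition described above: a face is only fed back once it has co-occurred with the target object more than once.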
Fig. 2 is a flowchart of another target tracking method provided by an embodiment of the present application. Optionally, the original image information further includes the acquisition time and acquisition place of each original image; correspondingly, outputting the to-be-determined facial image includes outputting the to-be-determined facial image together with the acquisition time and acquisition place of the second original image. As shown in Fig. 2, the technical solution is as follows:
Step S201: obtain original image information, the original image information comprising at least two original images and the acquisition time and acquisition place of each original image.
During the acquisition of an original image, its acquisition time and acquisition place are recorded accordingly. The acquisition place may be the position of the surveillance camera (for example a latitude/longitude mark or a crossing position mark), and the acquisition time is the point in time at which the original image was captured (for example 3:14:38). Besides the several original images themselves, the original image information thus contains the acquisition time and acquisition place of each original image for subsequent feedback.
Step S202: when a target object is recognized in a first original image, crop from the first original image the to-be-determined facial image associated with the target object.
Step S203: query a second original image in the original image information; if the target object is recognized in it, identify whether the to-be-determined facial image is present in the second original image, and, if so, output the to-be-determined facial image together with the acquisition time and acquisition place of the second original image.
In one embodiment, besides outputting the to-be-determined facial image to the querying personnel, the acquisition time and acquisition place associated with the to-be-determined facial image are fed back together with it.
It can be seen from the above that tracking personnel can promptly lock onto the to-be-determined target associated with the target object, and track effectively according to the acquisition time and acquisition place fed back.
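The feedback record of step S203 can be sketched as a small data structure. The class and field names (`OriginalImage`, `capture_time`, `capture_place`) are illustrative assumptions, not terminology from the patent.

```python
# Sketch of the feedback in this embodiment: when a to-be-determined face
# is found in the second original image, the output bundles the face with
# that image's acquisition time and place.
from dataclasses import dataclass

@dataclass
class OriginalImage:
    image_id: str
    capture_time: str    # e.g. "15:14:38", as recorded at capture
    capture_place: str   # e.g. a crossing mark or latitude/longitude label
    faces: tuple

def feedback(second_image, pending_face):
    """Return the output record if the pending face is in the second image."""
    if pending_face in second_image.faces:
        return {
            "face": pending_face,
            "time": second_image.capture_time,
            "place": second_image.capture_place,
        }
    return None

img = OriginalImage("cam07-0042", "15:14:38", "north crossing", ("A", "C"))
print(feedback(img, "A"))
# {'face': 'A', 'time': '15:14:38', 'place': 'north crossing'}
```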
Fig. 3 is a flowchart of another target tracking method provided by an embodiment of the present application. Optionally, after cropping the to-be-determined facial image associated with the target object, the method further includes: performing image rectification on the to-be-determined facial image to obtain a rectified facial image, and saving it. As shown in Fig. 3, the technical solution is as follows:
Step S301: obtain original image information, the original image information comprising at least two original images and the acquisition time and acquisition place of each original image.
Step S302: when a target object is recognized in a first original image, crop from the first original image the to-be-determined facial image associated with the target object.
Step S303: perform image rectification on the to-be-determined facial image to obtain a rectified facial image, and save it.
In one embodiment, whether the cropped to-be-determined facial image is a frontal face image is determined by judging whether the face in the image is symmetric. Specifically, taking the eyes and the mouth as an example, the eyes and the mouth are first located and the lines between the eyes and the mouth are obtained. If the line between the eyes is horizontal and the triangle formed by the eyes and the mouth lines is symmetric, the current to-be-determined facial image is determined to be a frontal face image; otherwise it is a non-frontal face image and image rectification is performed.
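The frontal-face test described above (horizontal eye line, symmetric eyes-mouth triangle) can be sketched as a plain geometric check. The tolerance values are illustrative assumptions; a real system would tune them empirically.

```python
# Sketch of the frontal-face test: the eye line should be (nearly)
# horizontal, and the eyes-mouth triangle (nearly) symmetric, i.e. the
# mouth roughly equidistant from both eyes.
import math

def is_frontal(left_eye, right_eye, mouth, tilt_tol=5.0, sym_tol=0.1):
    (lx, ly), (rx, ry) = left_eye, right_eye
    # eye-line angle from the horizontal, in degrees
    tilt = abs(math.degrees(math.atan2(ry - ly, rx - lx)))
    # symmetry: relative difference of the mouth's distance to each eye
    d_left = math.dist(mouth, left_eye)
    d_right = math.dist(mouth, right_eye)
    symmetric = abs(d_left - d_right) / max(d_left, d_right) <= sym_tol
    return tilt <= tilt_tol and symmetric

print(is_frontal((30, 40), (70, 40), (50, 80)))  # True  (level, symmetric)
print(is_frontal((30, 30), (70, 55), (58, 80)))  # False (eye line tilted)
```

A face failing this check would then be passed to the rectification step described next.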
In one embodiment, feature points on the to-be-determined facial image are located and at least two facial feature point coordinates are generated. The feature points may be chosen as easily identified points of the located facial features, for example the eyeballs or the tip of the nose; the location may be realized with an existing model-based location method. A face deflection angle is then computed based on the at least two facial feature point coordinates, and the facial image is reversely rotated according to the face deflection angle. When a face is in the standard posture, the line between any two feature points has a fixed directionality; for example, the line between the tip of the nose and the philtrum should be vertical, so the angle between a feature-point line and the horizontal or vertical direction lies in a fixed range (the nose-philtrum line makes an angle of approximately 90° with the horizontal). Accordingly, statistical principles can be used to find, from the distribution of the facial features, their average angles in the standard posture. Given the coordinates of two feature points on a facial image, the line between them is determined; the angle between this line and the horizontal or vertical direction can thus be found, and comparing it with the average angle yields the face deflection angle. Computing the face deflection angle requires the position coordinates of at least two facial feature points, but more than two may also be used. For example, with the position coordinates of three facial feature points, the three points are connected into a triangle; statistical principles give the standard triangle formed by these three feature points in the standard posture, and computing the angle between the two triangles yields the face deflection angle. After the face deflection angle is determined, the face is rotated in reverse by an angle equal to the face deflection angle, giving the rotated facial image. The reversely rotated facial image is then repaired to obtain the rectified facial image: exploiting facial symmetry, the side of the image with poorer brightness and contrast can be repaired by mirroring. Specifically, brightness and contrast are identified in the reversely rotated image, and the facial-feature image of the side with higher brightness and contrast (for example an eye image) is symmetry-transformed to correct the image of the side with poorer brightness and contrast, finally yielding the rectified facial image.
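The deflection-angle computation and reverse rotation can be illustrated with plain coordinate geometry. This is a dependency-free sketch under the assumption that the standard posture places the eye line horizontally; a real implementation would rotate the pixel array (for example with an affine warp), whereas here only feature-point coordinates are rotated.

```python
# Sketch of the rectification step: estimate the face deflection angle from
# two feature-point coordinates (here the eye centers), then reversely
# rotate by that angle about their midpoint.
import math

def deflection_angle(left_eye, right_eye):
    """Angle (radians) of the eye line relative to the horizontal."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    return math.atan2(ry - ly, rx - lx)

def reverse_rotate(point, center, angle):
    """Rotate a point about `center` by -angle (the reverse rotation)."""
    x, y = point[0] - center[0], point[1] - center[1]
    c, s = math.cos(-angle), math.sin(-angle)
    return (center[0] + c * x - s * y, center[1] + s * x + c * y)

left, right = (30.0, 40.0), (70.0, 80.0)  # eye line tilted 45 degrees
angle = deflection_angle(left, right)
center = ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)
new_left = reverse_rotate(left, center, angle)
new_right = reverse_rotate(right, center, angle)
# after the reverse rotation the eye line is horizontal again
print(round(new_left[1], 6) == round(new_right[1], 6))  # True
```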
Step S304: query a second original image in the original image information; if the target object is recognized in it, identify whether the to-be-determined facial image is present in the second original image, and, if so, output the to-be-determined facial image together with the acquisition time and acquisition place of the second original image.
It can be seen from the above that rectifying the cropped to-be-determined facial image associated with the target object yields a rectified facial image that is convenient for subsequent identification and comparison, provides a convenient precondition for subsequent query identification, and improves the efficiency of query identification and the accuracy of the fed-back results.
Fig. 4 is a flowchart of another target tracking method provided by an embodiment of the present application. Optionally, after obtaining the original image information, the method further includes: determining the facial image features and figure-and-appearance features of the target object, and performing image recognition in the original image information according to the facial image features and the figure-and-appearance features. As shown in Fig. 4, the technical solution is as follows:
Step S401: obtain original image information, the original image information comprising at least two original images and the acquisition time and acquisition place of each original image.
Step S402: determine the facial image features and figure-and-appearance features of the target object, and perform image recognition in the original image information according to the facial image features and the figure-and-appearance features.
Wherein, feature extraction is performed on the target object to obtain its figure-and-appearance features. These differ from the facial image features and include, for example, the target object's height features, clothing features, and limb features.
In one embodiment, the figure-and-appearance features of the target object may be determined by performing feature extraction on the target object with a preset trained model, the figure-and-appearance features comprising at least two kinds, and the preset trained model having been obtained by jointly training on samples of different figure-and-appearance features. In another embodiment, they may instead be determined by performing feature extraction on the target object and matching against a preset template library, wherein the preset template library contains multiple attribute values obtained by training together with the corresponding template feature vectors. Specifically, feature extraction on the target object yields a corresponding target feature vector; the Euclidean distances between the target feature vector and the multiple template feature vectors are determined, and the attribute value corresponding to the template feature vector that satisfies a preset condition is determined to be a figure-and-appearance feature of the target object.
Step S403: when a target object is recognized in a first original image, crop from the first original image the to-be-determined facial image associated with the target object.
Wherein, the identification performed on the first original image includes identification of the figure-and-appearance features of the target object: feature extraction is performed on the figure-and-appearance features of the human figures present in the original image and compared with the figure-and-appearance features of the target object, and, if the comparison is consistent, the target object is determined to have been recognized.
Step S404: perform image rectification on the to-be-determined facial image to obtain a rectified facial image, and save it.
Step S405: query a second original image in the original image information; if the target object is recognized in it, identify whether the to-be-determined facial image is present in the second original image, and, if so, output the to-be-determined facial image together with the acquisition time and acquisition place of the second original image.
It can be seen from the above that, in determining the target object, not only is the conventional facial-image identification method used but the figure-and-appearance features of the target object are also brought into the identification, so that the target object can be recognized even in original images in which no facial image of the target object was captured, enlarging the search range for the target object.
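The template-library matching described for step S402 can be sketched as a nearest-neighbor search under Euclidean distance, with a distance threshold standing in for the "preset condition". The template vectors and attribute labels below are made-up illustrative data, not values from the patent.

```python
# Sketch of figure-and-appearance matching: extract a target feature
# vector, compute its Euclidean distance to each stored template vector,
# and report the attribute value of the closest template within a
# threshold.
import math

TEMPLATE_DB = {
    "tall / dark coat": (1.80, 0.9, 0.2),
    "short / backpack": (1.55, 0.1, 0.8),
}

def match_figure_feature(target_vec, threshold=0.5):
    best_attr, best_dist = None, float("inf")
    for attr, tmpl_vec in TEMPLATE_DB.items():
        dist = math.dist(target_vec, tmpl_vec)
        if dist < best_dist:
            best_attr, best_dist = attr, dist
    return best_attr if best_dist <= threshold else None

print(match_figure_feature((1.78, 0.85, 0.25)))  # 'tall / dark coat'
print(match_figure_feature((0.2, 0.2, 0.2)))     # None (no template close)
```

Returning `None` when no template is within the threshold corresponds to the case where no attribute value satisfies the preset condition.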
Fig. 5 is a structural block diagram of a target tracking apparatus provided by an embodiment of the present application. The apparatus executes the target tracking method provided by the above embodiments, and has the corresponding functional modules and beneficial effects of executing the method. As shown in Fig. 5, the apparatus specifically includes an image acquisition module 101, an associated-object determining module 102, and an information feedback module 103, wherein:
the image acquisition module 101 is used for obtaining original image information, the original image information comprising at least two original images.
The associated-object determining module 102 is used for, when a target object is recognized in a first original image, cropping from the first original image the to-be-determined facial image associated with the target object.
The information feedback module 103 is used for querying a second original image in the original image information and, if the target object is recognized in it, identifying whether the to-be-determined facial image is present in the second original image and, if so, outputting the to-be-determined facial image.
In a possible embodiment, the original image information further includes the acquisition time and acquisition place of each original image, and the information feedback module 103 is further used for:
outputting the to-be-determined facial image together with the acquisition time and acquisition place of the second original image.
In a possible embodiment, the associated-object determining module 102 is further used for:
before the to-be-determined facial image associated with the target object is cropped from the first original image, determining the facial image at the smallest image distance from the target object to be the to-be-determined facial image associated with the target object.
In a possible embodiment, the associated-object determining module 102 is further used for:
after the to-be-determined facial image associated with the target object is cropped, performing image rectification on the to-be-determined facial image to obtain a rectified facial image and saving it.
In a possible embodiment, the associated-object determining module 102 is specifically used for:
judging whether the cropped to-be-determined facial image is a frontal face image and, if not, performing image rectification on the to-be-determined facial image to obtain a rectified facial image.
In a possible embodiment, the associated object determining module 102 is specifically configured to:
locate feature points on the to-be-determined facial image to generate at least two facial feature point coordinates;
calculate a face deflection angle based on the at least two facial feature point coordinates, and reversely rotate the to-be-determined facial image according to the face deflection angle;
perform repair processing on the reversely rotated facial image to obtain a rectified facial image.
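The deflection-angle step can be illustrated with two eye landmarks: the angle of the line through the eyes gives the in-plane face deflection, and rotating by its negative levels the face. This is a simplified sketch; real landmark detection and the image-level rotation and repair are outside its scope, and all coordinates are hypothetical:

```python
import math

def deflection_angle(left_eye, right_eye):
    """In-plane face deflection angle, in radians, from two eye landmarks."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.atan2(dy, dx)

def rotate_point(pt, angle, origin=(0.0, 0.0)):
    """Rotate a point counter-clockwise around `origin` by `angle` radians."""
    ox, oy = origin
    px, py = pt[0] - ox, pt[1] - oy
    c, s = math.cos(angle), math.sin(angle)
    return (ox + c * px - s * py, oy + s * px + c * py)

# A face tilted 30 degrees: the right eye sits above and to the right.
left_eye = (0.0, 0.0)
right_eye = (math.cos(math.radians(30)), math.sin(math.radians(30)))

angle = deflection_angle(left_eye, right_eye)
print(round(math.degrees(angle), 1))  # -> 30.0

# Reverse rotation by -angle brings the eyes back onto a horizontal line.
leveled = rotate_point(right_eye, -angle, origin=left_eye)
# leveled is (1.0, 0.0) up to floating-point error: the face is now upright.
```

The same rotation, applied to every pixel (or via an affine warp), is what "reversely rotating the to-be-determined facial image" amounts to.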
In a possible embodiment, the associated object determining module 102 is further configured to:
after the original image information is acquired, determine facial image features and body appearance features of the target object, and perform image recognition in the original image information according to the facial image features and the body appearance features.
On the basis of the above embodiments, this embodiment provides a target tracking device. Fig. 6 is a structural schematic diagram of a target tracking device provided by an embodiment of the present application. As shown in Fig. 6, the target tracking device includes: a memory 201, a processor (Central Processing Unit, CPU) 202, a peripheral interface 203, a camera 205, a power management chip 208, an input/output (I/O) subsystem 209, a touch screen 212, a Wi-Fi module 213, other input/control devices 210, and an external port 204. These components communicate through one or more communication buses or signal lines 207.
It should be understood that the illustrated target tracking device is only one example of a target tracking device, and the target tracking device may have more or fewer components than shown in the drawings, may combine two or more components, or may have a different configuration of components. The various components shown in the drawings may be implemented in hardware, software, or a combination of hardware and software, including one or more signal-processing and/or application-specific integrated circuits.
The target tracking device provided in this embodiment for target tracking is described in detail below.
Memory 201: the memory 201 can be accessed by the CPU 202, the peripheral interface 203, and so on. The memory 201 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
Peripheral interface 203: the peripheral interface 203 may connect the input and output peripherals of the device to the CPU 202 and the memory 201.
I/O subsystem 209: the I/O subsystem 209 may connect the input/output peripherals of the device, such as the touch screen 212 and the other input/control devices 210, to the peripheral interface 203. The I/O subsystem 209 may include a display controller 2091 and one or more input controllers 2092 for controlling the other input/control devices 210. The one or more input controllers 2092 receive electrical signals from, or send electrical signals to, the other input/control devices 210, which may include physical buttons (push buttons, rocker buttons, etc.), slide switches, joysticks, and click wheels. It is worth noting that an input controller 2092 may be connected to any of the following: a keyboard, an infrared port, a USB interface, or a pointing device such as a mouse.
Touch screen 212: the touch screen 212 is the input interface and output interface between the user terminal and the user, and displays visual output to the user; the visual output may include graphics, text, icons, video, and the like.
The display controller 2091 in the I/O subsystem 209 receives electrical signals from, or sends electrical signals to, the touch screen 212. The touch screen 212 detects contact on the touch screen, and the display controller 2091 converts the detected contact into interaction with user interface objects displayed on the touch screen 212, thereby realizing human-computer interaction. The user interface objects displayed on the touch screen 212 may be icons of running games, icons for connecting to corresponding networks, and the like. It is worth noting that the device may also include an optical mouse, which is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by the touch screen.
Power management chip 208: the power management chip 208 supplies power to, and performs power management for, the hardware connected to the CPU 202, the I/O subsystem, and the peripheral interface.
The target tracking apparatus and target tracking device provided in the above embodiments can execute the target tracking method provided by any embodiment of the present application, and possess the functional modules and beneficial effects corresponding to executing the method. For technical details not described in detail in the above embodiments, reference may be made to the target tracking method provided by any embodiment of the present application.
An embodiment of the present application further provides a storage medium containing target-tracking machine-executable instructions which, when executed by a processor of a target tracking device, are used to execute a target tracking method, the method including:
acquiring original image information, the original image information including at least two original images;
when it is recognized that a target object exists in a first original image, cropping, from the first original image, a to-be-determined facial image associated with the target object;
querying a second original image in the original image information; if it is recognized that the target object exists, identifying whether the to-be-determined facial image exists in the second original image, and if so, outputting the to-be-determined facial image.
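As a whole, the method restated above amounts to: remember the face cropped at the first sighting of the target, then report that face whenever the target reappears. A toy sketch (the frame records, object IDs, times, and places are all invented; a real system would feed these from the recognition step):

```python
# Hypothetical per-frame detection records: each frame maps recognized
# object IDs to the ID of the face crop associated with that object.
frames = [
    {"time": "09:00", "place": "gate A", "objects": {"target": "face_17"}},
    {"time": "09:05", "place": "lobby",  "objects": {"other": "face_03"}},
    {"time": "09:12", "place": "gate B", "objects": {"target": "face_17"}},
]

def track(frames, target_id="target"):
    """Crop the face at the first sighting; output it (with acquisition time
    and collection location) each time the same face reappears later."""
    pending_face = None
    hits = []
    for frame in frames:
        face = frame["objects"].get(target_id)
        if face is None:
            continue
        if pending_face is None:
            pending_face = face          # first original image: crop the face
        elif face == pending_face:       # second original image: face confirmed
            hits.append((face, frame["time"], frame["place"]))
    return hits

print(track(frames))  # -> [('face_17', '09:12', 'gate B')]
```

A target seen only once produces no output, matching the scheme's focus on persons whose number of appearances reaches two or more.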
In a possible embodiment, the original image information further includes an acquisition time and a collection location of each original image; correspondingly, the outputting the to-be-determined facial image includes:
outputting the to-be-determined facial image and the acquisition time and collection location of the second original image.
In a possible embodiment, before cropping, from the first original image, the to-be-determined facial image associated with the target object, the method further includes:
determining the facial image whose image distance to the target object is smallest as the to-be-determined facial image associated with the target object.
In a possible embodiment, after cropping the to-be-determined facial image associated with the target object, the method further includes:
performing image rectification on the to-be-determined facial image to obtain a rectified facial image, and saving the rectified facial image.
In a possible embodiment, the performing image rectification on the to-be-determined facial image to obtain a rectified facial image and saving it includes:
judging whether the cropped to-be-determined facial image is a frontal facial image, and if not, performing image rectification on the to-be-determined facial image to obtain the rectified facial image.
In a possible embodiment, the performing image rectification on the to-be-determined facial image to obtain a rectified facial image includes:
locating feature points on the to-be-determined facial image to generate at least two facial feature point coordinates;
calculating a face deflection angle based on the at least two facial feature point coordinates, and reversely rotating the to-be-determined facial image according to the face deflection angle;
performing repair processing on the reversely rotated facial image to obtain the rectified facial image.
In a possible embodiment, after the acquiring original image information, the method further includes:
determining facial image features and body appearance features of the target object, and performing image recognition in the original image information according to the facial image features and the body appearance features.
Storage medium --- any of various types of memory devices or storage devices. The term "storage medium" is intended to include: an installation medium, such as a CD-ROM, a floppy disk, or a tape device; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory, such as flash memory or magnetic media (e.g., a hard disk or optical storage); registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a different, second computer system connected to the first computer system through a network (such as the Internet). The second computer system may provide the program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (e.g., in different computer systems connected by a network). The storage medium may store program instructions (e.g., implemented as computer programs) executable by one or more processors.
Of course, for the storage medium containing computer-executable instructions provided by the embodiment of the present application, the computer-executable instructions are not limited to the target tracking method operations described above, and may also perform related operations in the target tracking method provided by any embodiment of the present application.
Note that the above are only preferred embodiments of the present application and the technical principles applied. Those skilled in the art will appreciate that the present application is not limited to the specific embodiments described herein, and that various obvious changes, readjustments, and substitutions can be made by those skilled in the art without departing from the protection scope of the present application. Therefore, although the present application has been described in further detail through the above embodiments, the present application is not limited to the above embodiments, and may also include more other equivalent embodiments without departing from the concept of the present application; the scope of the present application is determined by the scope of the appended claims.
Claims (10)
1. A target tracking method, characterized by comprising:
acquiring original image information, the original image information comprising at least two original images;
when it is recognized that a target object exists in a first original image, cropping, from the first original image, a to-be-determined facial image associated with the target object;
querying a second original image in the original image information; if it is recognized that the target object exists, identifying whether the to-be-determined facial image exists in the second original image, and if so, outputting the to-be-determined facial image.
2. The method according to claim 1, characterized in that the original image information further comprises an acquisition time and a collection location of each original image; correspondingly, the outputting the to-be-determined facial image comprises:
outputting the to-be-determined facial image and the acquisition time and collection location of the second original image.
3. The method according to claim 1, characterized in that, before cropping, from the first original image, the to-be-determined facial image associated with the target object, the method further comprises:
determining the facial image whose image distance to the target object is smallest as the to-be-determined facial image associated with the target object.
4. The method according to any one of claims 1-3, characterized in that, after cropping the to-be-determined facial image associated with the target object, the method further comprises:
performing image rectification on the to-be-determined facial image to obtain a rectified facial image, and saving the rectified facial image.
5. The method according to claim 4, characterized in that the performing image rectification on the to-be-determined facial image to obtain a rectified facial image and saving it comprises:
judging whether the cropped to-be-determined facial image is a frontal facial image, and if not, performing image rectification on the to-be-determined facial image to obtain the rectified facial image.
6. The method according to claim 4, characterized in that the performing image rectification on the to-be-determined facial image to obtain a rectified facial image comprises:
locating feature points on the to-be-determined facial image to generate at least two facial feature point coordinates;
calculating a face deflection angle based on the at least two facial feature point coordinates, and reversely rotating the to-be-determined facial image according to the face deflection angle;
performing repair processing on the reversely rotated facial image to obtain the rectified facial image.
7. The method according to claim 1, characterized in that, after the acquiring original image information, the method further comprises:
determining facial image features and body appearance features of the target object, and performing image recognition in the original image information according to the facial image features and the body appearance features.
8. A target tracking apparatus, characterized by comprising:
an image acquisition module, configured to acquire original image information, the original image information comprising at least two original images;
an associated object determining module, configured to, when it is recognized that a target object exists in a first original image, crop, from the first original image, a to-be-determined facial image associated with the target object;
an information feedback module, configured to query a second original image in the original image information and, if it is recognized that the target object exists, identify whether the to-be-determined facial image exists in the second original image, and if so, output the to-be-determined facial image.
9. A target tracking device, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the target tracking method according to any one of claims 1-7.
10. A storage medium containing target-tracking machine-executable instructions, characterized in that the target-tracking machine-executable instructions, when executed by a processor of a target tracking device, are used to execute the target tracking method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811144949.3A CN109377519A (en) | 2018-09-29 | 2018-09-29 | Target tracking method, device, target tracking equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811144949.3A CN109377519A (en) | 2018-09-29 | 2018-09-29 | Target tracking method, device, target tracking equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109377519A true CN109377519A (en) | 2019-02-22 |
Family
ID=65402498
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811144949.3A Pending CN109377519A (en) | 2018-09-29 | 2018-09-29 | Target tracking method, device, target tracking equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109377519A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112866611A (en) * | 2020-12-31 | 2021-05-28 | 上海新住信机电集成有限公司 | Intelligent building monitoring system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103605965A (en) * | 2013-11-25 | 2014-02-26 | 苏州大学 | Multi-pose face recognition method and device |
WO2014190494A1 (en) * | 2013-05-28 | 2014-12-04 | Thomson Licensing | Method and device for facial recognition |
US9432631B2 (en) * | 2011-04-04 | 2016-08-30 | Polaris Wireless, Inc. | Surveillance system |
CN105933650A (en) * | 2016-04-25 | 2016-09-07 | 北京旷视科技有限公司 | Video monitoring system and method |
CN106791706A (en) * | 2017-01-24 | 2017-05-31 | 上海木爷机器人技术有限公司 | Object lock method and system |
CN107480246A (en) * | 2017-08-10 | 2017-12-15 | 北京中航安通科技有限公司 | A kind of recognition methods of associate people and device |
CN108229335A (en) * | 2017-12-12 | 2018-06-29 | 深圳市商汤科技有限公司 | It is associated with face identification method and device, electronic equipment, storage medium, program |
Non-Patent Citations (3)
Title |
---|
SATTA, RICCARDO: "Appearance Descriptors for Person Re-identification: a Comprehensive Review", eprint arXiv *
WU, FAN: "Research and Application of Multi-View Target Tracking in Intelligent Surveillance ***", China Master's Theses Full-Text Database, Information Science and Technology *
QI, LI: "Public Security Big Data Technology and Applications", Shanghai Scientific and Technical Publishers, 30 December 2017 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10606380B2 (en) | Display control apparatus, display control method, and display control program | |
CN106846403B (en) | Method and device for positioning hand in three-dimensional space and intelligent equipment | |
TWI678099B (en) | Video processing method and device and storage medium | |
US20120259638A1 (en) | Apparatus and method for determining relevance of input speech | |
CN109325456A (en) | Target identification method, device, target identification equipment and storage medium | |
CN110045823B (en) | Motion guidance method and device based on motion capture | |
WO2020103526A1 (en) | Photographing method and device, storage medium and terminal device | |
WO2017113668A1 (en) | Method and device for controlling terminal according to eye movement | |
WO2017045258A1 (en) | Photographing prompting method, device and apparatus, and nonvolatile computer storage medium | |
US20150062010A1 (en) | Pointing-direction detecting device and its method, program and computer readable-medium | |
EP3628380B1 (en) | Method for controlling virtual objects, computer readable storage medium and electronic device | |
EP3477593B1 (en) | Hand detection and tracking method and device | |
JP2004094288A (en) | Instructed position detecting device and autonomous robot | |
TW201120681A (en) | Method and system for operating electric apparatus | |
EP2991027A1 (en) | Image processing program, image processing method and information terminal | |
US9836130B2 (en) | Operation input device, operation input method, and program | |
WO2019011073A1 (en) | Human face live detection method and related product | |
US20200242800A1 (en) | Determination apparatus and method for gaze angle | |
JP2012238293A (en) | Input device | |
CN109377518A (en) | Target tracking method, device, target tracking equipment and storage medium | |
TW201941026A (en) | Gesture patch-based remote control operation method and gesture patch-based remote control apparatus | |
WO2018076720A1 (en) | One-hand operation method and control system | |
CN109377519A (en) | Target tracking method, device, target tracking equipment and storage medium | |
CN104063041A (en) | Information processing method and electronic equipment | |
CN109284722A (en) | Image processing method, device, face recognition device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190222 |