CN109784199A - Companion analysis method and related product - Google Patents
- Publication number: CN109784199A (application CN201811572443.2A)
- Authority
- CN
- China
- Prior art keywords
- user
- companion
- target
- camera
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the present application provide a companion analysis method and a related product. The method includes: capturing images within a preset time period by a camera having a first camera identifier to obtain multiple first images, and capturing images within the preset time period by each camera of multiple cameras having second camera identifiers to obtain multiple second images, where each camera of the multiple cameras captures at least one second image; determining at least one first companion user of a target user from the multiple first images, and determining at least one second companion user of the target user from the multiple second images; and determining at least one target companion user of the target user according to the at least one first companion user and the at least one second companion user. Accuracy when determining companion users can therefore be improved.
Description
Technical field
This application relates to the technical field of image processing, and in particular to a companion analysis method and a related product.
Background
With the continuous development of urban construction, urban populations are gradually increasing. As the population grows, more security risks arise in communities. For example, when a user walks outdoors, there are often other users accompanying the user. An accompanying user may be someone the user knows, but may also be a criminal intending to rob or steal, and such criminal offences seriously threaten the user's personal and property safety. In existing schemes, when the companion users of a user are analyzed from acquired images, the sample is not comprehensive, so accuracy in determining companion users is low.
Summary of the invention
The embodiments of the present application provide a companion analysis method and a related product, which can improve accuracy when determining companion users.
A first aspect of the embodiments of the present application provides a companion analysis method, the method including:
capturing images within a preset time period by a camera having a first camera identifier to obtain multiple first images, and capturing images within the preset time period by each camera of multiple cameras having second camera identifiers to obtain multiple second images, where each camera of the multiple cameras captures at least one second image, the multiple cameras having second camera identifiers are child-node cameras of the camera having the first camera identifier, and the first images and the second images include a target user;

determining at least one first companion user of the target user from the multiple first images, and determining at least one second companion user of the target user from the multiple second images;

determining at least one target companion user of the target user according to the at least one first companion user and the at least one second companion user.
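The three steps of the first aspect can be sketched as follows. The embodiment does not fix how the first and second companion sets are combined into the target companion set, so the intersection used here is an assumption:

```python
def target_companions(first_companions, second_companions):
    """Combine the companion users found via the first camera's images with
    those found via the second cameras' images.

    Assumption: a target companion is a user present in BOTH sets; the
    source only states the result is determined 'according to' both sets.
    """
    return set(first_companions) & set(second_companions)


# Usage: u2 and u3 are seen accompanying the target by both camera groups.
both = target_companions({"u1", "u2", "u3"}, {"u2", "u3", "u4"})
```

Using both camera groups is what widens the sample compared with a single camera, which is the improvement the summary claims.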
Optionally, with reference to the first aspect of the embodiments of the present application, in a first possible implementation of the first aspect, the determining at least one first companion user of the target user from the multiple first images includes:

determining, from the multiple first images, a first movement trajectory of the target user, and determining, from the multiple first images, action information and a second movement trajectory of each reference companion user among multiple reference companion users within a preset range of the target user;

determining, according to the first movement trajectory and the second movement trajectory, a maximum distance value and a minimum distance value between each reference companion user and the target user, together with the number of occurrences of the maximum distance value and the number of occurrences of the minimum distance value;

determining a reference value of each reference companion user from the maximum distance value, the minimum distance value, the number of occurrences of the maximum distance value and the number of occurrences of the minimum distance value;

obtaining action information of the target user, and obtaining the action information of each reference companion user;

determining a companion correction value of each reference companion user from the action information of the target user and the action information of each reference companion user;

determining the at least one first companion user of the target user according to the reference value and the companion correction value of each reference companion user.
Optionally, with reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the action information includes face orientation, head action and foot action, and the determining the companion correction value of each reference companion user from the action information of the target user and the action information of each reference companion user includes:

determining, from the face orientation of each reference companion user and the face orientation of the target user, an angle between the face orientation of each reference companion user and the face orientation of the target user, to obtain multiple target angles;

obtaining a similarity between the head action of each reference companion user and a preset head action, to obtain multiple first similarities, and obtaining a similarity between the foot action of each reference companion user and the foot action of the target user, to obtain multiple second similarities;

obtaining a first weight for the multiple target angles, a second weight for the multiple first similarities and a third weight for the multiple second similarities, where the first weight, the second weight and the third weight sum to 1;

performing a weighted operation on the multiple target angles, the multiple first similarities and the multiple second similarities using the first weight, the second weight and the third weight, to obtain the companion correction value of each reference companion user.
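The weighted operation above can be sketched as follows. Mapping the target angle onto a [0, 1] score (so that a smaller angle contributes more) is an assumption; the embodiment only requires that the three weights sum to 1:

```python
def companion_correction(angle_deg, head_sim, foot_sim, w1, w2, w3):
    """Companion correction value for one reference companion user.

    angle_deg: angle between the reference user's and the target user's
    face orientations; head_sim / foot_sim: the first and second
    similarities. The angle-to-score mapping below is assumed, not
    specified by the source.
    """
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9  # weights must sum to 1
    angle_score = 1.0 - min(angle_deg, 180.0) / 180.0
    return w1 * angle_score + w2 * head_sim + w3 * foot_sim


# Usage: a 36-degree angle, head similarity 0.9, foot similarity 0.8.
value = companion_correction(36.0, 0.9, 0.8, 0.5, 0.3, 0.2)
```

The correction value then adjusts the distance-based reference value when the first companion users are selected.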
A second aspect of the embodiments of the present application provides a companion analysis apparatus, the apparatus including an acquisition unit, a first determination unit and a second determination unit, where:

the acquisition unit is configured to capture images within a preset time period by a camera having a first camera identifier to obtain multiple first images, and to capture images within the preset time period by each camera of multiple cameras having second camera identifiers to obtain multiple second images, where each camera of the multiple cameras captures at least one second image, the multiple cameras having second camera identifiers are child-node cameras of the camera having the first camera identifier, and the first images and the second images include a target user;

the first determination unit is configured to determine at least one first companion user of the target user from the multiple first images, and to determine at least one second companion user of the target user from the multiple second images;

the second determination unit is configured to determine at least one target companion user of the target user according to the at least one first companion user and the at least one second companion user.
Optionally, with reference to the second aspect of the embodiments of the present application, in a first possible implementation of the second aspect, in determining the at least one first companion user of the target user from the multiple first images, the first determination unit is specifically configured to:

determine, from the multiple first images, a first movement trajectory of the target user, and determine, from the multiple first images, action information and a second movement trajectory of each reference companion user among multiple reference companion users within a preset range of the target user;

determine, according to the first movement trajectory and the second movement trajectory, a maximum distance value and a minimum distance value between each reference companion user and the target user, together with the number of occurrences of the maximum distance value and the number of occurrences of the minimum distance value;

determine a reference value of each reference companion user from the maximum distance value, the minimum distance value, the number of occurrences of the maximum distance value and the number of occurrences of the minimum distance value;

obtain action information of the target user, and obtain the action information of each reference companion user;

determine a companion correction value of each reference companion user from the action information of the target user and the action information of each reference companion user;

determine the at least one first companion user of the target user according to the reference value and the companion correction value of each reference companion user.
With reference to the first possible implementation of the second aspect of the embodiments of the present application, in a second possible implementation of the second aspect, the action information includes face orientation, head action and foot action, and in determining the companion correction value of each reference companion user from the action information of the target user and the action information of each reference companion user, the first determination unit is specifically configured to:

determine, from the face orientation of each reference companion user and the face orientation of the target user, an angle between the face orientation of each reference companion user and the face orientation of the target user, to obtain multiple target angles;

obtain a similarity between the head action of each reference companion user and a preset head action, to obtain multiple first similarities, and obtain a similarity between the foot action of each reference companion user and the foot action of the target user, to obtain multiple second similarities;

obtain a first weight for the multiple target angles, a second weight for the multiple first similarities and a third weight for the multiple second similarities, where the first weight, the second weight and the third weight sum to 1;

perform a weighted operation on the multiple target angles, the multiple first similarities and the multiple second similarities using the first weight, the second weight and the third weight, to obtain the companion correction value of each reference companion user.
A third aspect of the embodiments of the present application provides a terminal, including a processor, an input device, an output device and a memory, where the processor, the input device, the output device and the memory are connected to one another, the memory is configured to store a computer program, the computer program includes program instructions, and the processor is configured to invoke the program instructions to execute the steps described in the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data interchange, and the computer program causes a computer to execute some or all of the steps described in the first aspect of the embodiments of the present application.
A fifth aspect of the embodiments of the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
Implementing the embodiments of the present application yields at least the following beneficial effects:

According to the embodiments of the present application, images are captured within a preset time period by a camera having a first camera identifier to obtain multiple first images, and images are captured within the preset time period by each camera of multiple cameras having second camera identifiers to obtain multiple second images, where each camera of the multiple cameras captures at least one second image, the multiple cameras having second camera identifiers are child-node cameras of the camera having the first camera identifier, and the first images and the second images include a target user; at least one first companion user of the target user is determined from the multiple first images, and at least one second companion user of the target user is determined from the multiple second images; and at least one target companion user of the target user is determined according to the at least one first companion user and the at least one second companion user. Compared with existing schemes, in which the analysis sample is not comprehensive, the present scheme first collects multiple first images by the camera having the first camera identifier and, at the same time, collects multiple second images by the cameras having second camera identifiers, then analyzes the first images and the second images to obtain the at least one target companion user. The multiple first images and multiple second images improve the comprehensiveness of the sample, and thus improve, to a certain extent, the accuracy in determining the target companion user.
Brief description of the drawings

In order to describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.

Fig. 1 is a schematic structural diagram of a companion analysis system provided by an embodiment of the present application;
Fig. 2A is a schematic flowchart of a companion analysis method provided by an embodiment of the present application;
Fig. 2B is a schematic diagram of feature points provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of another companion analysis method provided by an embodiment of the present application;
Fig. 4 is a schematic flowchart of another companion analysis method provided by an embodiment of the present application;
Fig. 5 is a schematic flowchart of another companion analysis method provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a terminal provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a companion analysis apparatus provided by an embodiment of the present application.
Detailed description of embodiments

The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present application. Apparently, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.

The terms "first", "second" and the like in the description, claims and accompanying drawings of the present application are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device that contains a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product or device.

Reference to "an embodiment" in this application means that a particular feature, structure or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of this phrase at various places in the description do not necessarily all refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.

The electronic device involved in the embodiments of the present application may include various handheld devices with wireless communication functions, in-vehicle devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like. For convenience of description, the devices mentioned above are collectively referred to as electronic devices.
To better understand the companion analysis method provided by the embodiments of the present application, the companion analysis system applying the method is first briefly introduced. As shown in Fig. 1, the companion analysis system includes a camera 101 having a first camera identifier, cameras 102 having second camera identifiers, and an analysis apparatus 103. Within a preset time period, images are captured by the camera 101 having the first camera identifier to obtain multiple first images; at the same time, images are captured within the preset time period by each camera of the multiple cameras 102 having second camera identifiers to obtain multiple second images, where each camera of the multiple cameras captures at least one second image (each camera may also capture multiple second images). The cameras 102 having second camera identifiers are child-node cameras of the camera 101 having the first camera identifier, and the first images and the second images include a target user. The camera 101 having the first camera identifier sends the multiple first images to the analysis apparatus 103, and the multiple cameras 102 having second camera identifiers send the multiple second images to the analysis apparatus 103. The analysis apparatus 103 determines at least one first companion user of the target user from the multiple first images, determines at least one second companion user of the target user from the multiple second images, and determines at least one target companion user of the target user according to the at least one first companion user and the at least one second companion user. Compared with existing schemes, in which the analysis sample is not comprehensive, the present scheme first collects multiple first images by the camera having the first camera identifier and, at the same time, collects multiple second images by the cameras having second camera identifiers, then analyzes the first images and the second images to obtain the at least one target companion user. The multiple first images and multiple second images improve the comprehensiveness of the sample, and thus improve, to a certain extent, the accuracy in determining the target companion user.
Referring to Fig. 2A, Fig. 2A is a schematic flowchart of a companion analysis method provided by an embodiment of the present application. As shown in Fig. 2A, the companion analysis method includes steps 201-203, as follows:

201. Capture images within a preset time period by a camera having a first camera identifier to obtain multiple first images, and capture images within the preset time period by each camera of multiple cameras having second camera identifiers to obtain multiple second images, where each camera of the multiple cameras captures at least one second image, the multiple cameras having second camera identifiers are child-node cameras of the camera having the first camera identifier, and the first images and the second images include a target user.
The preset time period may be the period from when the camera having the first camera identifier first captures the target user until the target user leaves the acquisition range of that camera. The preset time period may also be set from empirical values or historical data. When the first images are captured, they are captured at preset time intervals, and the preset time interval may likewise be set from empirical values or historical data.
Optionally, a child-node camera may be understood as follows: it is a second camera relative to the camera having the first camera identifier, making it possible to comprehensively capture images of the target user and companion users. For example, when the target user enters a residential community, the camera having the first camera identifier may be the camera installed at the community gate, and the cameras having second camera identifiers may be cameras installed at the other different orientations (left side, right side, and so on); the cameras having second camera identifiers may then be represented as child-node cameras. They differ in position from the camera having the first camera identifier, and the division is a logical one.
Optionally, each camera among the cameras having second camera identifiers may capture one second image or multiple second images; alternatively, some cameras may capture one second image while other cameras capture multiple second images, among other arrangements, which are not specifically limited here.
Optionally, before the first images are captured, the first camera continuously captures images and analyzes them; when it is determined that a captured image contains a face image of the target user, the acquisition of the multiple first images is then carried out.
A possible method for analyzing the captured images and determining that an image contains a face image of the target user includes steps A1-A3, as follows:
A1. Perform face recognition on the captured images to obtain multiple face images.
Optionally, when a face image is partially occluded, it may be recognized by the following method, which includes steps A10-A19, as follows:
A10. Repair the target face image according to the symmetry principle of the face, to obtain a first face image and a target repair coefficient, where the target repair coefficient states the degree of completeness of the repair of the face image.

Here, the target face image is a face image, extracted from a captured image, that includes only part of a face.
A11. Perform feature extraction on the first face image to obtain a first face feature set.

A12. Perform feature extraction on the target face image to obtain a second face feature set.

A13. Search a database according to the first face feature set, to obtain face images of multiple objects that successfully match the first face feature set.

A14. Match the second face feature set against the feature sets of the face images of the multiple objects, to obtain multiple first matching values.

A15. Obtain body characteristic data of each object among the multiple objects, to obtain multiple sets of body characteristic data.

A16. Match the body characteristic data corresponding to the target face against each set of body characteristic data among the multiple sets, to obtain multiple second matching values.

A17. Determine, according to a preset mapping relationship between repair coefficients and weights, a first weight corresponding to the target repair coefficient, and determine a second weight according to the first weight.

A18. Weight the multiple first matching values and the multiple second matching values according to the first weight and the second weight, to obtain multiple object matching values.

A19. Select the maximum value from the multiple object matching values, and take the object corresponding to the maximum value as the complete face image corresponding to the target face image.
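Steps A17-A19 can be sketched as follows. The mapping table from repair coefficients to first weights is purely illustrative, since the embodiment leaves the preset mapping to user setting or system default:

```python
# Hypothetical mapping: (minimum repair coefficient, first weight).
# A more completely repaired face gives its face-matching value more weight.
REPAIR_WEIGHT_MAP = [(0.9, 0.8), (0.7, 0.6), (0.5, 0.4), (0.0, 0.2)]


def pick_object(repair_coeff, face_matches, body_matches):
    """Return the index of the best-matching object (steps A17-A19).

    face_matches: first matching values (repaired face vs. each object);
    body_matches: second matching values (body characteristics).
    """
    w1 = next(w for lo, w in REPAIR_WEIGHT_MAP if repair_coeff >= lo)
    w2 = 1.0 - w1  # first and second weights sum to 1
    scores = [w1 * f + w2 * b for f, b in zip(face_matches, body_matches)]
    return max(range(len(scores)), key=scores.__getitem__)
```

With a high repair coefficient the face-matching values dominate; with a low one the body characteristics decide, which mirrors the intent of the preset mapping.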
Optionally, mirror transformation may be performed on the target face image according to the symmetry principle of the face, and after the mirror transformation, face repair may be performed on the processed target face image based on a generative adversarial network model, to obtain the first face image and the target repair coefficient. The target repair coefficient may be the ratio of the number of pixels in the repaired face regions to the total number of pixels of the entire face. The generative adversarial network model may include components such as a discriminator and a semantic regularization network, which are not limited here.
Optionally, the method for performing feature extraction on the first face image may include at least one of the following: an LBP (Local Binary Patterns) feature extraction algorithm, an HOG (Histogram of Oriented Gradients) feature extraction algorithm, a LoG (Laplacian of Gaussian) feature extraction algorithm, and the like, which are not limited here.
The preset mapping relationship between repair coefficients and weights may associate each preset repair coefficient with a weight, where the weights of the preset repair coefficients sum to 1, and the weights may be set by the user or by system default. Specifically, the first weight corresponding to the target repair coefficient is determined according to the preset mapping relationship between repair coefficients and weights, and the second weight is determined according to the first weight. The second weight may be the weight corresponding to the second matching values, and the first weight and the second weight sum to 1. The first weight is applied to each of the multiple first matching values and the second weight to each of the multiple second matching values, yielding multiple object matching values corresponding to the multiple objects; the object corresponding to the maximum matching value among the multiple matching values is selected as the complete face image corresponding to the target face image.
In this example, an incomplete face image is repaired, the repaired face image is used for matching to obtain the face images of multiple objects, and a comparison of body characteristics then determines the complete face image corresponding to the target face image. By repairing the face and screening the matched images after repair to obtain the final complete face image, accuracy of face image acquisition can be improved to a certain extent.
A2. Compare the multiple face images with the face image of the target user in the database, to obtain the similarity between each face image and the face image of the target user.
When the multiple face images are compared with the face image of the target user, each image may be split into multiple sub-images, which are then compared simultaneously to obtain the similarity of each sub-image; the mean of the sub-image similarities is taken as the similarity between the face image and the face image of the target user. After the similarity is determined, the similarities of certain special sub-images may additionally be examined: when the similarity of a special sub-image is lower than a preset value, the face image is directly determined to be dissimilar to the face image of the target user. A special sub-image may be a sub-image that includes the mouth, eyes or nose, and the preset value is set from empirical values or historical data.
A3. If a face image whose similarity is greater than a preset similarity value exists, determine that a face image of the target user exists in the image.
In this example, in determining through the above method whether a face image of the target user exists in an image, recognizing occluded face images and comparing them block by block can improve, to a certain degree, the accuracy of distinguishing the face image of the target user, and can also improve the efficiency of that distinction.
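The sub-image comparison in step A2 can be sketched as follows; the mean and the veto rule for special sub-images follow the description above, but the concrete threshold is an assumed placeholder for the empirically set preset value:

```python
def face_similarity(sub_sims, special_sims, threshold=0.4):
    """Similarity between a face image and the target user's face image.

    sub_sims: similarities of all sub-image pairs; special_sims:
    similarities for the sub-images covering the mouth, eyes or nose.
    If any special sub-image falls below the preset value, the faces
    are judged dissimilar directly; otherwise the mean sub-image
    similarity is returned. The default threshold is illustrative.
    """
    if any(s < threshold for s in special_sims):
        return 0.0  # direct dissimilarity verdict
    return sum(sub_sims) / len(sub_sims)
```

Checking the special sub-images first lets an obviously mismatched mouth or eye region short-circuit the comparison, which is the efficiency gain the passage above refers to.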
202. Determine at least one first companion user of the target user from the multiple first images, and determine at least one second companion user of the target user from the multiple second images.
Optionally, a possible method for determining at least one companion user of the target user from the multiple first images includes steps B1-B6, as follows:
B1. Determine, from the multiple first images, a first movement trajectory of the target user, and determine, from the multiple first images, action information and a second movement trajectory of each reference companion user among multiple reference companion users within a preset range of the target user.
Optionally, a possible method for determining the first movement trajectory of the target user according to the multiple first images includes steps B10-B13, as follows:
B10. Determine, from the multiple first images, the location point of the target user in each first image, to obtain multiple location points, where at least one path exists between the multiple location points.
B11. Obtain an initial movement speed of the target user, obtain the movement speed of the target user at each location point, and obtain the age and facial expression of the target user according to the face image of the target user.

Here, the initial movement speed may be understood as the movement speed the target user has when the target user is first detected.
Optionally, a possible method for obtaining the facial expression of the target user according to the face image of the target user includes steps B110-B112, as follows:
B110. Extract the image at a feature location of the face image.

Here, a feature location may include the mouth, among others.
B111. Analyze the image at each feature location to obtain an expression element;
Wherein, taking the mouth as an example, the feature location comprises the corners of the mouth, the lips, and so on. If the lips are opened wide, the expression element may be "surprised"; if the corners of the mouth are recessed and the lips are closed, the expression element is "smile"; if the lips are slightly parted and the corners of the mouth are slightly recessed, the expression element is "laugh"; and so on. "Opened wide" can be understood as an opening degree greater than the lips' maximum opening diameter, and "slightly parted" as an opening degree of roughly 20% of the lips' maximum opening diameter.
B112. Obtain the facial expression of the target user according to the expression element.
Optionally, the facial expression is determined from a relationship between facial expressions and expression elements. For example, if the expression element is "laugh", the facial expression is a laughing face; if the expression element is "smile", the facial expression is a smile; if the expression element is "surprised", the facial expression is surprise. The user's facial expression may also include "tired", and so on.
Optionally, one possible method of obtaining the target user's age from the facial image is as follows: the wrinkle ratio of the user's face is discriminated from the facial image, and the age of the target user is then determined from a mapping relationship between wrinkle ratio and age; the age of the target user may also be determined from the skin pigment of the face in the facial image. The mapping relationship between wrinkle ratio and age is obtained from historical data.
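A lookup over such a wrinkle-ratio-to-age mapping might look like the sketch below. The bracket boundaries and threshold values are invented placeholders; the embodiment only says the mapping comes from historical data:

```python
# Hypothetical mapping from wrinkle ratio to an age bracket; each entry is
# (upper bound of the wrinkle ratio, age bracket). The numbers are invented.
WRINKLE_AGE_BRACKETS = [(0.02, "5-10"), (0.05, "10-20"), (0.15, "20-40")]

def estimate_age_bracket(wrinkle_ratio: float) -> str:
    """Return the first bracket whose upper bound covers the wrinkle ratio."""
    for upper, bracket in WRINKLE_AGE_BRACKETS:
        if wrinkle_ratio <= upper:
            return bracket
    return "40+"  # ratios above every bound fall in the oldest bracket
```
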
B12. According to the initial movement speed of the target user, the movement speed at each location point, the age, and the facial expression, determine a target path from the at least one path between each pair of adjacent location points among the plurality of location points, obtaining a plurality of target paths;
Wherein, one method of determining the target path according to the initial movement speed, the movement speed at each location point, the age, and the facial expression includes steps B121-B123, as follows:
B121. Determine the travel distance of the user between two location points according to the movement speed and the time interval;
Wherein, the time interval is the interval at which the first images are acquired.
B122. Match the travel distance against the length of each path, and take as reference paths those paths for which the absolute value of the difference L between the path length and the travel distance is less than a preset length;
Wherein, the length of each path refers to the length of each of the at least one path.
B123. Determine the target path according to a preset mapping relationship between age, facial expression, and path.
Optionally, the mapping relationship between age, facial expression, and path is as shown in Table 1:
Table 1
| Age | Facial expression | Path |
| 5-10 years old | Tired | B < L ≤ C |
| 10-20 years old | Tired | A < L ≤ B |
| 20-40 years old | Tired | C < L ≤ D |
| Over 40 years old | Tired | D < L ≤ E |
As shown in Table 1, when the facial expression is "tired", the mapping relationship between age, facial expression, and path is as given, where A, B, C, and D are length division points below the preset length and E is the preset length. The reference path whose difference L falls within the corresponding interval is taken as the target path.
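Steps B121-B123 can be sketched together as follows. The function names and the candidate representation are assumptions; the travel-distance computation, the preset-length filter, and the interval lookup from Table 1 follow the text:

```python
def reference_paths(speed, dt, path_lengths, preset_len):
    """B121-B122 sketch: travel distance = movement speed * time interval;
    keep the paths whose length differs from the travel distance by less
    than the preset length. Returns (length, difference L) pairs."""
    travel = speed * dt
    return [(length, abs(length - travel))
            for length in path_lengths
            if abs(length - travel) < preset_len]

def target_path(candidates, lo, hi):
    """B123 sketch: among the (length, L) reference-path candidates, pick
    the one whose difference L falls in the (lo, hi] interval given by
    Table 1 for the user's age and facial expression."""
    for length, diff in candidates:
        if lo < diff <= hi:
            return length
    return None
```
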
B13. Connect the plurality of target paths to obtain the first activity trajectory of the target user.
Wherein, when connecting the plurality of target paths, they are connected in chronological order to obtain the activity trajectory diagram of the target user; the chronological order can be understood as the order in which the images were acquired.
In this example, the target path between two location points is determined from the facial expression, the age, and the movement speed, which reflect the user's state of motion more specifically; accuracy in determining the path is therefore improved.
Optionally, the second activity trajectory of a reference colleague user may be obtained with reference to the method for determining the first activity trajectory.
B2. According to the first activity trajectory and the second activity trajectory, determine, for each reference colleague user, the maximum distance value and the minimum distance value between that reference colleague user and the target user, together with the number of occurrences of the maximum distance value and the number of occurrences of the minimum distance value;
Wherein, the maximum distance value and the minimum distance value are taken between the first activity trajectory and the second activity trajectory at identical instants.
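Step B2 can be sketched by sampling both trajectories at the same instants and taking point-wise distances. The list-of-coordinate representation is an assumption:

```python
import math

def distance_extremes(traj_a, traj_b):
    """B2 sketch: traj_a and traj_b are lists of (x, y) points sampled at
    the same instants. Returns the maximum distance value, the minimum
    distance value, and how many times each occurs."""
    dists = [math.dist(p, q) for p, q in zip(traj_a, traj_b)]
    d_max, d_min = max(dists), min(dists)
    return d_max, d_min, dists.count(d_max), dists.count(d_min)
```
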
B3. Determine a reference numerical value for each reference colleague user from the maximum distance value, the minimum distance value, the number of occurrences of the maximum distance value, and the number of occurrences of the minimum distance value;
Optionally, one possible method of determining the reference numerical value of a reference colleague user includes steps B30-B31, as follows:
B30. Obtain the distance difference between the maximum distance value and the minimum distance value, and obtain the number difference between the number of occurrences of the maximum distance value and the number of occurrences of the minimum distance value;
Wherein, the number difference between the two counts may be a positive integer, a negative integer, or 0; the minuend is the number of occurrences of the maximum distance value and the subtrahend is the number of occurrences of the minimum distance value.
B31. Determine the reference numerical value according to the number difference and the distance difference;
Optionally, the reference numerical value may be determined from a preset mapping relationship between number difference, distance difference, and numerical value. The reference numerical value is used to reflect the probability that the reference colleague user is a colleague user of the target user. The mapping relationship can be obtained by training a neural network model, where training may include forward training and reverse training. The neural network model may comprise N layers of neural network. During training, sample data is input to the first layer of the N-layer network; a first operation result is obtained after the first layer performs a forward operation; the first operation result is then input to the second layer for a forward operation to obtain a second result; and so on, until the (N-1)-th result is input to the N-th layer for a forward operation to obtain the N-th operation result, on which reverse training is then performed. Forward training and reverse training are repeated in this way until training of the neural network model is complete. The sample data consists of number differences, distance differences, and colleague probabilities.
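The layer-by-layer forward operation described above can be sketched generically: the output of each layer becomes the input of the next until the N-th operation result is produced. The stand-in callable layers below are an illustration, not the embodiment's actual network:

```python
def forward(layers, sample):
    """Feed a sample through N layers in order: the output of layer i is
    the input of layer i + 1, yielding the N-th operation result on which
    the reverse (backward) pass would then be run."""
    out = sample
    for layer in layers:
        out = layer(out)
    return out

# Three toy "layers" standing in for the network's forward operations.
toy_layers = [lambda x: 2 * x, lambda x: x + 1, lambda x: x * x]
```

For instance, `forward(toy_layers, 3)` chains the three operations as ((3 * 2) + 1) squared.
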
B4. Obtain the action information of the target user, and obtain the action information of each reference colleague user;
Optionally, the method for obtaining the action message of target user can be with are as follows: extracts characteristic point from multiple first images
Location parameter, the action message of target user is determined according to the location parameter of characteristic point.Wherein, characteristic point can be with are as follows: hand
Portion's characteristic point and foot's characteristic point, hand-characteristic point may include the characteristic point of arm, and foot's characteristic point can be the spy of shank
The characteristic point of sign point and thigh, as shown in Figure 2 B.Action message may include that face orientation, headwork, foot action, hand are dynamic
Make etc., headwork may include normal header movement, the inclined head in left side, the inclined head in right side, wherein the inclined head in left side can be understood as head
The angle of portion's middle line and vertical direction is greater than 10 degree, and on the left side of face orientation, the inclined head in right side can be understood as head upright
The angle of middle line and vertical direction is greater than 10 degree, and on the right of face orientation, normal header movement is understood that be located at left side
Headwork between inclined head and the inclined head in right side in divergent angle ranges, i.e. range of 10 degree of the left side between 10 degree of right side.
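The head-action classification above reduces to a threshold on the midline angle plus the side of the face orientation. A minimal sketch, with the string labels as illustrative names:

```python
def head_action(angle_deg: float, side: str) -> str:
    """Classify the head action from the angle between the head's midline
    and the vertical direction, and the side ("left" or "right") relative
    to the face orientation. Angles of 10 degrees or less are normal."""
    if angle_deg > 10:
        return "left tilt" if side == "left" else "right tilt"
    return "normal"
```
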
B5. Determine a colleague correction value for each reference colleague user from the action information of the target user and the action information of each reference colleague user;
Optionally, where the action information includes face orientation, head action, and foot action, one possible method of determining the colleague correction value includes steps B50-B53, as follows:
B50. From the face orientation of each reference colleague user and the face orientation of the target user, determine the angle between the face orientation of each reference colleague user and the face orientation of the target user, obtaining a plurality of target angles;
Wherein, the angle between the face orientation of a reference colleague user and the face orientation of the target user can be understood as follows: the face is abstracted as a plane, and the direction of the normal to that plane is the face orientation; the angle between the face orientation of the reference colleague user and the face orientation of the target user is therefore the angle between the two normals.
B51. Obtain the similarity between the head action of each reference colleague user and the head action of the target user, obtaining a plurality of first similarities, and obtain the similarity between the foot action of each reference colleague user and the foot action of the target user, obtaining a plurality of second similarities;
Wherein, the similarity of head actions can be determined from the deviation angle and deviation direction: the closer the deviation angles and deviation directions, the greater the similarity; the further apart they are, the smaller the similarity. The similarity between foot actions may be the similarity between paces, where a pace can be understood as the span of each step while walking, i.e., the distance of each step and which foot is stepping: the smaller the difference between spans, the greater the similarity, and the larger the difference, the smaller the similarity; if the same foot is stepping, the similarity is greater, and if a different foot is stepping, the similarity is smaller.
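One way to encode the pace comparison above is sketched below. The text fixes only the monotonic behaviour (smaller span difference and same stepping foot mean higher similarity); the exact formula and the 0.2 bonus are invented for illustration:

```python
def pace_similarity(span_a, span_b, foot_a, foot_b):
    """Illustrative pace similarity in [0, 1]: a smaller span difference
    gives a higher similarity, and stepping with the same foot raises it
    further (capped at 1.0)."""
    sim = 1.0 / (1.0 + abs(span_a - span_b))
    if foot_a == foot_b:
        sim = min(1.0, sim + 0.2)
    return sim
```
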
B52. Obtain a first weight for the plurality of target angles, a second weight for the plurality of first similarities, and a third weight for the plurality of second similarities, where the first weight, the second weight, and the third weight sum to 1;
Wherein, the first weight, the second weight, and the third weight may be set from empirical values or from historical data.
B53. Perform a weighted operation on the plurality of target angles, the plurality of first similarities, and the plurality of second similarities using the first weight, the second weight, and the third weight, to obtain the colleague correction value of each reference colleague user.
Wherein, the correction value may be a numerical value greater than 1 or a numerical value less than 1.
In this example, the colleague correction value is determined from the face orientation, the head action, and the foot action. The face orientation reflects the association between users, while the head action and foot action reflect the users' walking data; determining the correction value from these parameters therefore improves the accuracy with which the correction value is obtained.
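The weighted operation of step B53 might be sketched as below. Mapping the mean target angle onto [0, 1] (1 meaning identical orientation) is an illustrative choice; the text only requires the three weights to sum to 1:

```python
def colleague_correction(angles, head_sims, foot_sims, w1, w2, w3):
    """B53 sketch: weighted combination of the target angles (degrees),
    the first similarities (head actions), and the second similarities
    (foot actions)."""
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9  # B52: the three weights sum to 1
    mean = lambda xs: sum(xs) / len(xs)
    angle_score = 1.0 - mean(angles) / 180.0  # 0 degrees -> score 1.0
    return w1 * angle_score + w2 * mean(head_sims) + w3 * mean(foot_sims)
```
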
B6. Determine at least one first colleague user of the target user according to the reference numerical value of each reference colleague user and the colleague correction value of each reference colleague user.
Optionally, the reference numerical value is multiplied by the colleague correction value to obtain a target numerical value. If the target numerical value is greater than a preset numerical value, the reference colleague user corresponding to that target numerical value is determined to be a first colleague user. The preset numerical value may be set from an empirical value or from historical data.
In this example, the reference numerical value of each reference colleague user is determined from the activity trajectories, and the first colleague users are then determined from the reference colleague users according to their colleague correction values. Determining the first colleague users from both the activity trajectories and the reference correction values therefore improves, to a certain extent, the accuracy with which the first colleague users are determined.
Optionally, the method of determining the second colleague users is the same as the method of determining the first colleague users, and is not repeated here.
203. Determine at least one target colleague user of the target user according to the at least one first colleague user and the at least one second colleague user.
Optionally, among the at least one first colleague user and the at least one second colleague user, users having the same facial image are taken as target colleague users. If no user with the same facial image exists, the user with the highest target numerical value among the first colleague users and the user with the highest target numerical value among the second colleague users are taken as target colleague users.
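The selection rule of step 203 can be sketched directly, assuming users matched by facial image share an identifier and that each candidate's target numerical value is available:

```python
def target_colleagues(first, second, target_value):
    """Step 203 sketch: users appearing (by facial image) in both the
    first and second colleague sets become target colleague users;
    otherwise take the user with the highest target numerical value
    from each set. target_value maps user id -> target numerical value."""
    common = first & second
    if common:
        return common
    return {max(first, key=target_value.get),
            max(second, key=target_value.get)}
```
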
In one possible example, after the at least one target colleague user is determined, the danger classes of the target colleague users can be determined, and information can be pushed to the target user according to the determination result. Warning information can therefore be sent to the target user when the danger class is relatively high, so as to alert the target user; after receiving the warning information, the target user can take corresponding action, which can reduce the probability of a hazardous event occurring. This method specifically includes steps C1-C4, as follows:
C1. Obtain the user information of each target colleague user among the at least one target colleague user;
C2. Determine the user relationships between the at least one target colleague user through a preset user-relationship graph according to the user information of each target colleague user;
Wherein, the preset user-relationship graph contains the user-relationship information of colleague users, from which the users associated with a target colleague user can be determined, for example the partner information and friend information of a target colleague user. The partner information can be understood as follows: where a target colleague user has a record of criminal behaviour, it is the user information of the other users who committed the crime together with that target colleague user.
C3. Determine the danger class of the at least one target colleague user through the user relationships;
Optionally, if a dangerous user and an associated user of that dangerous user are both determined from the at least one target colleague user, the danger class of the target colleague users is determined to be the fourth danger class. If a dangerous user is determined from the at least one target colleague user, the danger class of the target colleague users is one level below the highest danger class. If no dangerous user exists among the at least one target colleague user but an associated user of the target user exists, the danger class is two levels below the highest danger class. If neither a dangerous user nor an associated user of the target user exists among the at least one target colleague user, the danger class is three levels below the highest danger class, and so on. The danger classes are divided in this way: the lower the danger class, the smaller the danger; the higher the danger class, the greater the danger.
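The grading rule of step C3 can be sketched as a small decision function, under the reading that the fourth class is the highest:

```python
def danger_class(has_dangerous, has_associated, highest=4):
    """C3 sketch: the fourth (highest) class when a dangerous user and an
    associated user are both present; one level lower with only a
    dangerous user; two lower with only an associated user of the target
    user; three lower with neither."""
    if has_dangerous and has_associated:
        return highest
    if has_dangerous:
        return highest - 1
    if has_associated:
        return highest - 2
    return highest - 3
```
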
C4. If a first colleague user exists among the at least one target colleague user, issue warning information to the target user, where the first colleague user is a target colleague user whose danger class is higher than a first preset danger class.
Wherein, the first preset danger class is the danger class corresponding to the case in which no dangerous user exists among the at least one target colleague user but an associated user of the target user does exist.
In one possible example, the colleague analysis method further includes the following, which specifically includes steps D1-D3, as follows:
D1. If a third colleague user exists among the at least one target colleague user, obtain the activity region of the third colleague user within a preset time interval, and obtain the activity region of the target user within the preset time interval, where the third colleague user is a colleague user among the at least one target colleague user whose danger class is lower than a second preset danger class, and the second preset danger class is lower than the first preset danger class;
Wherein, the second preset danger class may be the danger class corresponding to the case in which, among the at least one target colleague user, no dangerous user exists and no associated user of the target user exists. The activity region can be understood as the region the user has passed through.
D2. If the similarity between the activity region of the third colleague user within the preset time interval and the activity region of the target user within the preset time interval is greater than a preset similarity, obtain the interest information of the third colleague user, and obtain the interest information of the target user;
Wherein, the similarity between activity regions can be understood as follows: among all the regions the users passed through, the greater the number of identical regions, the greater the similarity, and the smaller the number of identical regions, the smaller the similarity. The preset time interval and the preset similarity may be set from empirical values or from historical data. The interest information can be understood as the user's hobbies, for example sports, daily activities, reading, playing games, and so on.
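The region-similarity notion above can be sketched as a shared-region ratio. A Jaccard-style ratio is one plausible reading of the text, not a quoted formula:

```python
def region_similarity(regions_a, regions_b):
    """D2 sketch: the more regions the two users have in common out of
    all the regions they passed through, the greater the similarity."""
    a, b = set(regions_a), set(regions_b)
    return len(a & b) / len(a | b) if a | b else 0.0
```
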
D3. If the interest information of the third colleague user is identical to the interest information of the target user, push the user information of the third colleague user to the target user.
In this example, a third colleague user is determined among the target colleague users; if the hobbies of the third colleague user are identical to the hobbies of the target user, the third colleague user is pushed to the target user. Friend recommendations can therefore be made for the target user intelligently from among the target colleague users, broadening to a certain degree the ways in which the target user can make friends, and thereby improving the user experience.
Referring to Fig. 3, Fig. 3 is a flow diagram of another colleague analysis method provided by an embodiment of the present application. As shown in Fig. 3, the colleague analysis method may include steps 301-308, as follows:
301. Acquire images within a preset time period through a camera having a first camera identifier to obtain a plurality of first images, and acquire images within the preset time period through each camera among a plurality of cameras having second camera identifiers to obtain a plurality of second images;
Wherein, each camera among the plurality of cameras acquires at least one second image, the plurality of cameras having second camera identifiers are child-node cameras of the camera having the first camera identifier, and the first images and the second images contain the target user.
302. Determine, from the plurality of first images, the first activity trajectory of the target user, and determine, from the plurality of first images, the action information and second activity trajectory of each reference colleague user among the plurality of reference colleague users within the preset range of the target user;
303. According to the first activity trajectory and the second activity trajectory, determine, for each reference colleague user, the maximum distance value and the minimum distance value between that reference colleague user and the target user, together with the numbers of occurrences of the maximum distance value and the minimum distance value;
304. Determine the reference numerical value of each reference colleague user from the maximum distance value, the minimum distance value, the number of occurrences of the maximum distance value, and the number of occurrences of the minimum distance value;
305. Obtain the action information of the target user, and obtain the action information of each reference colleague user;
306. Determine the colleague correction value of each reference colleague user from the action information of the target user and the action information of each reference colleague user;
307. Determine at least one first colleague user of the target user according to the reference numerical value and the colleague correction value of each reference colleague user, and determine at least one second colleague user of the target user from the plurality of second images;
308. Determine at least one target colleague user of the target user according to the at least one first colleague user and the at least one second colleague user.
In this example, the reference numerical value of each reference colleague user is determined from the activity trajectories, and the first colleague users are then determined from the reference colleague users according to their colleague correction values. Determining the first colleague users from both the activity trajectories and the reference correction values therefore improves, to a certain extent, the accuracy with which the first colleague users are determined.
Referring to Fig. 4, Fig. 4 is a flow diagram of another colleague analysis method provided by an embodiment of the present application. As shown in Fig. 4, the colleague analysis method may include steps 401-407, as follows:
401. Acquire images within a preset time period through a camera having a first camera identifier to obtain a plurality of first images, and acquire images within the preset time period through each camera among a plurality of cameras having second camera identifiers to obtain a plurality of second images;
Wherein, each camera among the plurality of cameras acquires at least one second image, the plurality of cameras having second camera identifiers are child-node cameras of the camera having the first camera identifier, and the first images and the second images contain the target user.
402. Determine at least one first colleague user of the target user from the plurality of first images, and determine at least one second colleague user of the target user from the plurality of second images;
403. Determine at least one target colleague user of the target user according to the at least one first colleague user and the at least one second colleague user;
404. Obtain the user information of each target colleague user among the at least one target colleague user;
405. Determine the user relationships between the at least one target colleague user through the preset user-relationship graph according to the user information of each target colleague user;
406. Determine the danger class of the at least one target colleague user through the user relationships;
407. If a first colleague user exists among the at least one target colleague user, issue warning information to the target user, where the first colleague user is a target colleague user whose danger class is higher than the first preset danger class.
In this example, after the at least one target colleague user is determined, the danger class of the target colleague users can be determined, and information can be pushed to the target user according to the determination result. Warning information can therefore be sent to the target user when the danger class is relatively high, so as to alert the target user; after receiving the warning information, the target user can take corresponding action, which can reduce the probability of a hazardous event occurring.
Referring to Fig. 5, Fig. 5 is a flow diagram of another colleague analysis method provided by an embodiment of the present application. As shown in Fig. 5, the colleague analysis method includes steps 501-506, as follows:
501. Acquire images within a preset time period through a camera having a first camera identifier to obtain a plurality of first images, and acquire images within the preset time period through each camera among a plurality of cameras having second camera identifiers to obtain a plurality of second images;
Wherein, each camera among the plurality of cameras acquires at least one second image, the plurality of cameras having second camera identifiers are child-node cameras of the camera having the first camera identifier, and the first images and the second images contain the target user.
502. Determine at least one first colleague user of the target user from the plurality of first images, and determine at least one second colleague user of the target user from the plurality of second images;
503. Determine at least one target colleague user of the target user according to the at least one first colleague user and the at least one second colleague user;
504. If a third colleague user exists among the at least one target colleague user, obtain the activity region of the third colleague user within the preset time interval, and obtain the activity region of the target user within the preset time interval;
Wherein, the third colleague user is a colleague user among the at least one target colleague user whose danger class is lower than the second preset danger class, and the second preset danger class is lower than the first preset danger class.
505. If the similarity between the activity region of the third colleague user within the preset time interval and the activity region of the target user within the preset time interval is greater than the preset similarity, obtain the interest information of the third colleague user, and obtain the interest information of the target user;
506. If the interest information of the third colleague user is identical to the interest information of the target user, push the user information of the third colleague user to the target user.
In this example, a third colleague user is determined among the target colleague users; if the hobbies of the third colleague user are identical to the hobbies of the target user, the third colleague user is pushed to the target user. Friend recommendations can therefore be made for the target user intelligently from among the target colleague users, broadening to a certain degree the ways in which the target user can make friends, and thereby improving the user experience.
Consistent with the above embodiments, referring to Fig. 6, Fig. 6 is a schematic structural diagram of a terminal provided by an embodiment of the present application. As shown, the terminal includes a processor, an input device, an output device, and a memory, which are connected to one another. The memory is used to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions; the program includes instructions for performing the following steps:
acquiring images within a preset time period through a camera having a first camera identifier to obtain a plurality of first images, and acquiring images within the preset time period through each camera among a plurality of cameras having second camera identifiers to obtain a plurality of second images, where each camera among the plurality of cameras acquires at least one second image, the plurality of cameras having second camera identifiers are child-node cameras of the camera having the first camera identifier, and the first images and the second images contain the target user;
determining at least one first colleague user of the target user from the plurality of first images, and determining at least one second colleague user of the target user from the plurality of second images;
determining at least one target colleague user of the target user according to the at least one first colleague user and the at least one second colleague user.
In this example, images are acquired within a preset time period through the camera having the first camera identifier to obtain a plurality of first images, and through each camera among the plurality of cameras having second camera identifiers within the preset time period to obtain a plurality of second images; at least one first colleague user of the target user is determined from the plurality of first images, and at least one second colleague user of the target user is determined from the plurality of second images; and at least one target colleague user of the target user is determined according to the at least one first colleague user and the at least one second colleague user. Compared with existing schemes, in which the samples analysed are less comprehensive, this scheme first collects a plurality of first images through the camera having the first camera identifier while collecting a plurality of second images through the cameras having the second camera identifiers, and analyses the first images and the second images to obtain the at least one target colleague user. The plurality of first images and the plurality of second images improve the comprehensiveness of the samples, thereby improving, to a certain extent, the accuracy with which the target colleague users are determined.
The above describes the scheme of the embodiments of the present application mainly from the perspective of the method execution process. It will be appreciated that, in order to realise the above functions, the terminal comprises corresponding hardware structures and/or software modules for performing each function. Those skilled in the art should readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments presented herein, the present application can be realised in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to realise the described functions for each particular application, but such realisation should not be regarded as going beyond the scope of the present application.
The embodiments of the present application may divide the terminal into functional units according to the above method examples; for example, each function may be assigned its own functional unit, or two or more functions may be integrated into one processing unit. The integrated unit may be realised in the form of hardware or in the form of a software functional unit. It should be noted that the division into units in the embodiments of the present application is schematic and is merely a logical functional division; other division manners are possible in actual realisation.
Consistent with the above, referring to Fig. 7, Fig. 7 is a schematic structural diagram of a colleague analysis apparatus provided by an embodiment of the present application. The apparatus includes an acquisition unit 701, a first determination unit 702, and a second determination unit 703, wherein:
The acquisition unit 701, for being carried out within a preset period of time by the camera with the first camera identification
Acquisition, obtains multiple first images, and passes through multiple camera shootings with second camera mark in the preset time period
Each camera in head is acquired, and obtains multiple second images, wherein each camera in the multiple camera is adopted
Collect at least second image, the multiple camera with second camera mark is described with the first camera shooting leader
The child node camera of the camera of knowledge includes target user in the first image, second image;
First determination unit 702, for determining the target user at least by multiple described first images
One first colleague user, and by multiple described second images determine the target user at least one second colleague
User;
Second determination unit 703, for according at least one described first colleague user and it is described at least one the
Two colleague users determine at least one target colleague user of the target user.
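The text does not spell out how the two candidate sets are combined. As one plausible reading, a user counts as a target companion only if it appears both among the first companion users (parent camera) and the second companion users (child-node cameras); a minimal Python sketch of that assumed intersection rule:

```python
def target_companions(first_companions, second_companions):
    """Combine companions found via the parent camera with those found
    via its child-node cameras. The intersection is one plausible
    combination rule (an assumption, not fixed by the text)."""
    return set(first_companions) & set(second_companions)

# Users seen as companions by both camera tiers survive the combination.
result = target_companions({"u1", "u2", "u3"}, {"u2", "u3", "u4"})
```

A union or a weighted vote would be equally consistent with the wording; the intersection simply favors precision over recall.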
Optionally, in the aspect of determining the at least one first companion user of the target user from the multiple first images, the first determination unit 702 is specifically configured to:

determine a first activity trajectory of the target user from the multiple first images, and determine, from the multiple first images, motion information and a second activity trajectory of each of multiple reference companion users within a preset range of the target user;

determine, according to the first activity trajectory and the second activity trajectory, a maximum distance value and a minimum distance value between each reference companion user and the target user, as well as the number of occurrences of the maximum distance value and the number of occurrences of the minimum distance value;

determine a reference value of each reference companion user from the maximum distance value, the minimum distance value, the number of occurrences of the maximum distance value, and the number of occurrences of the minimum distance value;

obtain motion information of the target user, and obtain the motion information of each reference companion user;

determine a companion correction value of each reference companion user from the motion information of the target user and the motion information of each reference companion user;

determine the at least one first companion user of the target user according to the reference value and the companion correction value of each reference companion user.
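The reference value above is derived from four trajectory statistics: the extreme inter-user distances and how often each extreme occurs. The text does not fix a formula, so the scoring rule below is an illustrative assumption; `distance_stats` also assumes the two trajectories are sampled at the same frames:

```python
import math

def distance_stats(traj_a, traj_b):
    """Per-frame Euclidean distances between two frame-aligned
    trajectories, plus the extreme values and their occurrence counts."""
    dists = [math.dist(p, q) for p, q in zip(traj_a, traj_b)]
    d_max, d_min = max(dists), min(dists)
    return d_max, d_min, dists.count(d_max), dists.count(d_min)

def reference_value(d_max, d_min, n_max, n_min):
    """One plausible scoring rule (hypothetical): reward small,
    frequently occurring minimum distances and mildly penalise
    large, frequently occurring maximum distances."""
    return n_min / (1.0 + d_min) - 0.01 * n_max * d_max
```

Any monotone function of the four statistics would serve; the point is that frequent closeness raises the score and frequent separation lowers it.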
Optionally, the motion information includes face orientation, head action and foot action. In the aspect of determining the companion correction value of each reference companion user from the motion information of the target user and the motion information of each reference companion user, the first determination unit 702 is specifically configured to:

determine, from the face orientation of each reference companion user and the face orientation of the target user, the angle between the face orientation of each reference companion user and the face orientation of the target user, obtaining multiple target angles;

obtain the similarity between the head action of each reference companion user and a preset head action, obtaining multiple first similarities, and obtain the similarity between the foot action of each reference companion user and the foot action of the target user, obtaining multiple second similarities;

obtain a first weight for the multiple target angles, a second weight for the multiple first similarities, and a third weight for the multiple second similarities, the sum of the first weight, the second weight and the third weight being 1;

perform a weighted operation on the multiple target angles, the multiple first similarities and the multiple second similarities using the first weight, the second weight and the third weight, to obtain the companion correction value of each reference companion user.
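The weighted operation can be sketched as follows. Mapping angles into agreement scores in [0, 1] (a small angle between face orientations should raise, not lower, the correction value) is an assumed normalisation that the text does not fix:

```python
def companion_correction(angles, head_sims, foot_sims, w1, w2, w3):
    """Weighted combination of face-orientation angles (degrees) and
    head/foot action similarities; w1 + w2 + w3 must equal 1.
    Angles are first converted to agreement scores in [0, 1]."""
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9
    angle_score = sum(1.0 - a / 180.0 for a in angles) / len(angles)
    head_score = sum(head_sims) / len(head_sims)
    foot_score = sum(foot_sims) / len(foot_sims)
    return w1 * angle_score + w2 * head_score + w3 * foot_score
```

With perfectly aligned faces (angle 0) and identical actions (similarity 1), the correction value reaches its maximum of 1 regardless of the weight split.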
Optionally, the companion analysis apparatus is further specifically configured to:

obtain user information of each target companion user in the at least one target companion user;

determine, from the user information of each target companion user and a preset user relationship graph, the user relationships among the at least one target companion user;

determine a danger level of the at least one target companion user from the user relationships;

if a first companion user exists among the at least one target companion user, issue warning information to the target user, the first companion user being a target companion user whose danger level is higher than a first preset danger level.
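A minimal sketch of the warning step, assuming danger levels are comparable integers; `FIRST_DANGER_THRESHOLD` is a hypothetical stand-in for the first preset danger level:

```python
FIRST_DANGER_THRESHOLD = 3  # hypothetical value of the first preset danger level

def warn_if_dangerous(companion_levels, threshold=FIRST_DANGER_THRESHOLD):
    """Return one warning message per target companion user whose
    danger level exceeds the first preset danger level."""
    return [f"warning: companion {uid} has danger level {lvl}"
            for uid, lvl in companion_levels.items() if lvl > threshold]
```

The messages would then be delivered to the target user through whatever notification channel the terminal provides.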
Optionally, the companion analysis apparatus is further specifically configured to:

if a third companion user exists among the at least one target companion user, obtain an activity area of the third companion user within a preset time interval, and obtain an activity area of the target user within the preset time interval, the third companion user being a companion user whose danger level among the at least one target companion user is lower than a second preset danger level, the second preset danger level being lower than the first preset danger level;

if the similarity between the activity area of the third companion user within the preset time interval and the activity area of the target user within the preset time interval is greater than a preset similarity, obtain interest information of the third companion user, and obtain interest information of the target user;

if the interest information of the third companion user is identical to the interest information of the target user, push the user information of the third companion user to the target user.
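The area-similarity check and interest match can be sketched as below. The Jaccard measure over visited grid cells and the dictionary layout (`areas`, `interests`, `profile`) are illustrative assumptions, not details from the text:

```python
def maybe_push(target, third, sim_threshold=0.5):
    """target/third are dicts with 'areas' (set of visited grid cells)
    and 'interests' (set of tags). Return the third companion user's
    profile for pushing to the target user when the activity areas
    overlap enough (Jaccard similarity, an assumed measure) and the
    interest information is identical; otherwise return None."""
    a, b = target["areas"], third["areas"]
    jaccard = len(a & b) / len(a | b) if a | b else 0.0
    if jaccard > sim_threshold and target["interests"] == third["interests"]:
        return third.get("profile")
    return None
```

Any set-overlap measure would do in place of Jaccard; the essential behavior is that low-danger companions who frequent the same areas and share interests are recommended to the target user.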
An embodiment of the present application further provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps of any companion analysis method described in the above method embodiments.
An embodiment of the present application further provides a computer program product. The computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program causes a computer to perform some or all of the steps of any companion analysis method described in the above method embodiments.
It should be noted that, for brevity, each of the foregoing method embodiments is described as a series of action combinations. However, those skilled in the art should understand that the present application is not limited by the described action sequence, because according to the present application some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the above embodiments, the description of each embodiment has its own emphasis. For parts not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a logical function division, and in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
A person of ordinary skill in the art may understand that all or part of the steps of the methods of the above embodiments may be completed by a program instructing related hardware. The program may be stored in a computer-readable memory, and the memory may include a flash disk, a read-only memory, a random access memory, a magnetic disk, an optical disc, or the like.
The embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, a person skilled in the art may make changes to the specific implementations and application scope according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.
Claims (10)
1. A companion analysis method, characterized in that the method comprises:
performing acquisition within a preset time period through a camera with a first camera identifier to obtain multiple first images, and performing acquisition within the preset time period through each of multiple cameras with second camera identifiers to obtain multiple second images, wherein each of the multiple cameras collects at least one second image, the multiple cameras with the second camera identifiers are child-node cameras of the camera with the first camera identifier, and the first images and the second images include a target user;
determining at least one first companion user of the target user from the multiple first images, and determining at least one second companion user of the target user from the multiple second images;
determining at least one target companion user of the target user according to the at least one first companion user and the at least one second companion user.
2. The method according to claim 1, characterized in that the determining at least one first companion user of the target user from the multiple first images comprises:
determining a first activity trajectory of the target user from the multiple first images, and determining, from the multiple first images, motion information and a second activity trajectory of each of multiple reference companion users within a preset range of the target user;
determining, according to the first activity trajectory and the second activity trajectory, a maximum distance value and a minimum distance value between each reference companion user and the target user, as well as the number of occurrences of the maximum distance value and the number of occurrences of the minimum distance value;
determining a reference value of each reference companion user from the maximum distance value, the minimum distance value, the number of occurrences of the maximum distance value, and the number of occurrences of the minimum distance value;
obtaining motion information of the target user, and obtaining the motion information of each reference companion user;
determining a companion correction value of each reference companion user from the motion information of the target user and the motion information of each reference companion user;
determining the at least one first companion user of the target user according to the reference value and the companion correction value of each reference companion user.
3. The method according to claim 2, characterized in that the motion information includes face orientation, head action and foot action, and the determining the companion correction value of each reference companion user from the motion information of the target user and the motion information of each reference companion user comprises:
determining, from the face orientation of each reference companion user and the face orientation of the target user, the angle between the face orientation of each reference companion user and the face orientation of the target user, obtaining multiple target angles;
obtaining the similarity between the head action of each reference companion user and a preset head action, obtaining multiple first similarities, and obtaining the similarity between the foot action of each reference companion user and the foot action of the target user, obtaining multiple second similarities;
obtaining a first weight for the multiple target angles, a second weight for the multiple first similarities, and a third weight for the multiple second similarities, a sum of the first weight, the second weight and the third weight being 1;
performing a weighted operation on the multiple target angles, the multiple first similarities and the multiple second similarities using the first weight, the second weight and the third weight, to obtain the companion correction value of each reference companion user.
4. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
obtaining user information of each target companion user in the at least one target companion user;
determining, from the user information of each target companion user and a preset user relationship graph, user relationships among the at least one target companion user;
determining a danger level of the at least one target companion user from the user relationships;
if a first companion user exists among the at least one target companion user, issuing warning information to the target user, the first companion user being a target companion user whose danger level is higher than a first preset danger level.
5. The method according to claim 4, characterized in that the method further comprises:
if a third companion user exists among the at least one target companion user, obtaining an activity area of the third companion user within a preset time interval, and obtaining an activity area of the target user within the preset time interval, the third companion user being a companion user whose danger level among the at least one target companion user is lower than a second preset danger level, the second preset danger level being lower than the first preset danger level;
if the similarity between the activity area of the third companion user within the preset time interval and the activity area of the target user within the preset time interval is greater than a preset similarity, obtaining interest information of the third companion user, and obtaining interest information of the target user;
if the interest information of the third companion user is identical to the interest information of the target user, pushing the user information of the third companion user to the target user.
6. A companion analysis apparatus, characterized in that the apparatus comprises an acquisition unit, a first determination unit and a second determination unit, wherein:
the acquisition unit is configured to perform acquisition within a preset time period through a camera with a first camera identifier to obtain multiple first images, and to perform acquisition within the preset time period through each of multiple cameras with second camera identifiers to obtain multiple second images, wherein each of the multiple cameras collects at least one second image, the multiple cameras with the second camera identifiers are child-node cameras of the camera with the first camera identifier, and the first images and the second images include a target user;
the first determination unit is configured to determine at least one first companion user of the target user from the multiple first images, and to determine at least one second companion user of the target user from the multiple second images;
the second determination unit is configured to determine at least one target companion user of the target user according to the at least one first companion user and the at least one second companion user.
7. The apparatus according to claim 6, characterized in that, in the aspect of determining the at least one first companion user of the target user from the multiple first images, the first determination unit is specifically configured to:
determine a first activity trajectory of the target user from the multiple first images, and determine, from the multiple first images, motion information and a second activity trajectory of each of multiple reference companion users within a preset range of the target user;
determine, according to the first activity trajectory and the second activity trajectory, a maximum distance value and a minimum distance value between each reference companion user and the target user, as well as the number of occurrences of the maximum distance value and the number of occurrences of the minimum distance value;
determine a reference value of each reference companion user from the maximum distance value, the minimum distance value, the number of occurrences of the maximum distance value, and the number of occurrences of the minimum distance value;
obtain motion information of the target user, and obtain the motion information of each reference companion user;
determine a companion correction value of each reference companion user from the motion information of the target user and the motion information of each reference companion user;
determine the at least one first companion user of the target user according to the reference value and the companion correction value of each reference companion user.
8. The apparatus according to claim 7, characterized in that the motion information includes face orientation, head action and foot action, and in the aspect of determining the companion correction value of each reference companion user from the motion information of the target user and the motion information of each reference companion user, the first determination unit is specifically configured to:
determine, from the face orientation of each reference companion user and the face orientation of the target user, the angle between the face orientation of each reference companion user and the face orientation of the target user, obtaining multiple target angles;
obtain the similarity between the head action of each reference companion user and a preset head action, obtaining multiple first similarities, and obtain the similarity between the foot action of each reference companion user and the foot action of the target user, obtaining multiple second similarities;
obtain a first weight for the multiple target angles, a second weight for the multiple first similarities, and a third weight for the multiple second similarities, a sum of the first weight, the second weight and the third weight being 1;
perform a weighted operation on the multiple target angles, the multiple first similarities and the multiple second similarities using the first weight, the second weight and the third weight, to obtain the companion correction value of each reference companion user.
9. A terminal, characterized in that the terminal comprises a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, wherein the memory is configured to store a computer program, the computer program includes program instructions, and the processor is configured to invoke the program instructions to perform the method according to any one of claims 1-5.
10. A computer-readable storage medium, characterized in that the computer storage medium stores a computer program, the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to perform the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811572443.2A CN109784199B (en) | 2018-12-21 | 2018-12-21 | Peer-to-peer analysis method and related product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109784199A true CN109784199A (en) | 2019-05-21 |
CN109784199B CN109784199B (en) | 2020-11-24 |
Family
ID=66497957
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811572443.2A Active CN109784199B (en) | 2018-12-21 | 2018-12-21 | Peer-to-peer analysis method and related product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109784199B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070092245A1 (en) * | 2005-10-20 | 2007-04-26 | Honeywell International Inc. | Face detection and tracking in a wide field of view |
CN102254169A (en) * | 2011-08-23 | 2011-11-23 | 东北大学秦皇岛分校 | Multi-camera-based face recognition method and multi-camera-based face recognition system |
CN103839049A (en) * | 2014-02-26 | 2014-06-04 | 中国计量学院 | Double-person interactive behavior recognizing and active role determining method |
CN105931429A (en) * | 2016-07-18 | 2016-09-07 | 四川君逸数码科技股份有限公司 | Intelligent nighttime approach recognition and alarming method and device |
CN106203260A (en) * | 2016-06-27 | 2016-12-07 | 南京邮电大学 | Pedestrian's recognition and tracking method based on multiple-camera monitoring network |
CN107016322A (en) * | 2016-01-28 | 2017-08-04 | 浙江宇视科技有限公司 | A kind of method and device of trailing personnel analysis |
CN107153824A (en) * | 2017-05-22 | 2017-09-12 | 中国人民解放军国防科学技术大学 | Across video pedestrian recognition methods again based on figure cluster |
CN107481154A (en) * | 2017-07-17 | 2017-12-15 | 广州特道信息科技有限公司 | The analysis method and device of social networks interpersonal relationships |
CN108280435A (en) * | 2018-01-25 | 2018-07-13 | 盛视科技股份有限公司 | A kind of passenger's abnormal behaviour recognition methods based on human body attitude estimation |
CN108629791A (en) * | 2017-03-17 | 2018-10-09 | 北京旷视科技有限公司 | Pedestrian tracting method and device and across camera pedestrian tracting method and device |
CN108830974A (en) * | 2017-04-27 | 2018-11-16 | 胡渐佳 | Identify inlet/outlet electronics end system |
CN108924507A (en) * | 2018-08-02 | 2018-11-30 | 高新兴科技集团股份有限公司 | A kind of personnel's system of path generator and method based on multi-cam scene |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110532930A (en) * | 2019-08-23 | 2019-12-03 | 深圳市驱动新媒体有限公司 | A kind of passenger flow management method and device and equipment |
CN110532929A (en) * | 2019-08-23 | 2019-12-03 | 深圳市驱动新媒体有限公司 | A kind of same pedestrian's analysis method and device and equipment |
CN110532931A (en) * | 2019-08-23 | 2019-12-03 | 深圳市驱动新媒体有限公司 | A kind of passenger flow analysing method and apparatus and equipment |
CN110532930B (en) * | 2019-08-23 | 2023-08-29 | 深圳市驱动新媒体有限公司 | Passenger flow management method, device and equipment |
CN110837512A (en) * | 2019-11-15 | 2020-02-25 | 北京市商汤科技开发有限公司 | Visitor information management method and device, electronic equipment and storage medium |
CN111104915A (en) * | 2019-12-23 | 2020-05-05 | 云粒智慧科技有限公司 | Method, device, equipment and medium for peer analysis |
CN111104915B (en) * | 2019-12-23 | 2023-05-16 | 云粒智慧科技有限公司 | Method, device, equipment and medium for peer analysis |
CN111191601A (en) * | 2019-12-31 | 2020-05-22 | 深圳云天励飞技术有限公司 | Method, device, server and storage medium for identifying peer users |
CN111488835A (en) * | 2020-04-13 | 2020-08-04 | 北京爱笔科技有限公司 | Method and device for identifying fellow persons |
CN111488835B (en) * | 2020-04-13 | 2023-10-10 | 北京爱笔科技有限公司 | Identification method and device for staff |
CN112016443A (en) * | 2020-08-26 | 2020-12-01 | 深圳市商汤科技有限公司 | Method and device for identifying same lines, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109784199B (en) | 2020-11-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||