CN110427908A - Person detection method, apparatus, and computer-readable storage medium - Google Patents
- Publication number: CN110427908A
- Application number: CN201910733765.9A
- Authority
- CN
- China
- Prior art keywords
- box
- person
- face
- face box
- overlap
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Abstract
An embodiment of the present invention provides a person detection method, apparatus, and computer-readable storage medium. The method includes: detecting persons in an image to obtain person boxes; detecting faces in the image to obtain face boxes; obtaining a matching result according to the degree of overlap between the face box and the person box; and, when the matching result is failure, expanding the face box to obtain an expanded box. In this way, when people in the image are densely packed and a person's body is occluded so that only the face is exposed, the box is expanded from the recognized face, so that the target person can still be accurately recognized.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a person detection method, apparatus, and computer-readable storage medium.
Background
With the continuous development of the internet and artificial intelligence, more and more fields involve automated computation and analysis, and security monitoring is one of the most important scenarios.
In security monitoring, common person detection relies solely on the detector's output. When crowd density is too high, or when people severely occlude one another, the detector is prone to missed and false detections. An improved scheme is to use face detection: persons whose bodies are occluded over a large area are located by their faces, which alleviates the cases the person detector handles poorly.
Existing video person detection techniques run the detection task independently on each input frame to obtain the locations of the person boxes in the current picture. When people are very dense and targets severely occlude one another, many persons' bodies are hidden, and the detector can hardly detect such targets.
Summary of the invention
Embodiments of the present invention provide a person detection method, apparatus, and computer-readable storage medium, to solve one or more technical problems in the prior art.
In a first aspect, an embodiment of the invention provides a person detection method, comprising:
detecting persons in an image to obtain person boxes;
detecting faces in the image to obtain face boxes;
obtaining a matching result according to the degree of overlap between the face box and the person box;
when the matching result is failure, expanding the face box to obtain an expanded box.
In one embodiment, when the matching result is success, a matched box is obtained from the person box and the face box.
In one embodiment, obtaining the matching result according to the degree of overlap between the face box and the person box includes:
calculating the degree of overlap between the face box and the person box according to their coordinate information;
obtaining the matching result according to the degree of overlap.
In one embodiment, calculating the degree of overlap between the face box and the person box according to their coordinate information includes:
calculating the area of the intersection of the face box and the person box according to their coordinate information;
calculating the area of the face box;
calculating the ratio of the intersection area to the face-box area to obtain the degree of overlap.
In one embodiment, obtaining the matching result according to the degree of overlap includes:
comparing the degree of overlap with a threshold;
when the degree of overlap is not less than the threshold, the matching result is success;
when the degree of overlap is less than the threshold, the matching result is failure.
In one embodiment, when the matching result is that multiple face boxes match one person box, the method further includes:
separately calculating the distance between each of the multiple face boxes and the person box;
obtaining the matching result according to the distances.
In one embodiment, separately calculating the distance between each of the multiple face boxes and the person box includes:
determining the center line of each face box and the center line of the person box according to their coordinate information;
separately calculating the distance between the center line of each face box and the center line of the person box, to obtain the distance between each face box and the person box.
In one embodiment, obtaining the matching result according to the distances includes: determining the face box closest to the person box as the face box that matches the person box.
In a second aspect, an embodiment of the invention provides a person detection apparatus, comprising:
a first detection module, configured to detect persons in an image to obtain person boxes;
a second detection module, configured to detect faces in the image to obtain face boxes;
a matching module, configured to obtain a matching result according to the degree of overlap between the face box and the person box;
an expansion module, configured to expand the face box to obtain an expanded box when the matching result is failure.
In one embodiment, the apparatus further includes:
a matched-box synthesis module, configured to obtain a matched box from the person box and the face box when the matching result is success.
In one embodiment, the matching module includes:
an overlap calculation submodule, configured to calculate the degree of overlap between the face box and the person box according to their coordinate information;
a matching execution submodule, configured to obtain the matching result according to the degree of overlap.
In one embodiment, the overlap calculation submodule includes:
an intersection-area calculation unit, configured to calculate the area of the intersection of the face box and the person box according to their coordinate information;
a face-box-area calculation unit, configured to calculate the area of the face box;
an overlap calculation execution unit, configured to calculate the ratio of the intersection area to the face-box area to obtain the degree of overlap.
In one embodiment, the matching execution submodule includes:
a comparison unit, configured to compare the degree of overlap with a threshold;
a judgment unit, configured to judge the matching result as success when the degree of overlap is not less than the threshold;
the judgment unit is further configured to judge the matching result as failure when the degree of overlap is less than the threshold.
In one embodiment, the apparatus further includes:
a distance calculation module, configured to separately calculate the distance between each of the multiple face boxes and the person box;
the matching execution submodule is further configured to obtain the matching result according to the distances.
In one embodiment, the distance calculation module includes:
a center-line determination submodule, configured to determine the center line of each face box and the center line of the person box according to their coordinate information;
a distance calculation execution submodule, configured to separately calculate the distance between the center line of each face box and the center line of the person box, to obtain the distance between each face box and the person box.
In a third aspect, an embodiment of the invention provides a person detection apparatus whose functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions above.
In one possible design, the apparatus includes a processor and a memory; the memory is used to store a program that enables the apparatus to execute the person detection method above, and the processor is configured to execute the program stored in the memory. The apparatus may further include a communication interface for communicating with other devices or communication networks.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium for storing the computer software instructions used by the person detection apparatus, including the program involved in executing the person detection method above.
One of the technical solutions above has the following advantage or beneficial effect: when people in the image are densely packed and only the face is exposed because the body is occluded, the box is expanded from the recognized face, so that the target person can be accurately recognized.
The above summary is provided for illustration only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present invention will be readily apparent by reference to the drawings and the following detailed description.
Brief description of the drawings
In the drawings, unless otherwise specified, the same reference numeral denotes the same or similar part or element throughout the several views. The drawings are not necessarily drawn to scale. It should be understood that the drawings depict only some embodiments disclosed in accordance with the present invention and should not be regarded as limiting its scope.
Fig. 1 shows a flowchart of a person detection method according to an embodiment of the present invention.
Fig. 2 shows a flowchart of a person detection method according to an embodiment of the present invention.
Fig. 3 shows a flowchart of a person detection method according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of matching a face box with a person box.
Fig. 5 shows a flowchart of a person detection method according to an embodiment of the present invention.
Fig. 6 shows a flowchart of a person detection method according to an embodiment of the present invention.
Fig. 7 shows a flowchart of a person detection method according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of matching multiple face boxes with one person box.
Fig. 9 shows a structural block diagram of a person detection apparatus according to an embodiment of the present invention.
Fig. 10 shows a structural block diagram of a person detection apparatus according to an embodiment of the present invention.
Fig. 11 shows a structural block diagram of a person detection apparatus according to an embodiment of the present invention.
Fig. 12 shows a structural block diagram of a person detection apparatus according to an embodiment of the present invention.
Fig. 13 shows a structural block diagram of a person detection apparatus according to an embodiment of the present invention.
Detailed description of embodiments
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature rather than restrictive.
Fig. 1 shows a flowchart of a person detection method according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
S101: detecting persons in an image to obtain person boxes.
The image may be a picture frame extracted from a video. The video can be captured in different scenarios, such as video captured by an autonomous vehicle or video captured in a smart retail store. After the image is obtained, a person detector can be used to detect and locate the persons in the image. Locating may include selecting a person in the image with a rectangular box, taking that box as the person box, and determining the position information of the person box. The position information may include coordinate information, a width value, and a height value. For example, the position of the person box can be determined from the coordinates of its upper-left corner together with its width and height; alternatively, it can be determined from the coordinates of its four corners.
A person in the image may include parts such as the face and the body. The image is input to the person detector, which performs detection and outputs an image annotated with person boxes. The person detector may be, for example, Mobilenet-SSD (a single-shot multi-object detector based on efficient convolutional neural networks for mobile vision applications) or Shufflenet-SSD (a single-shot multi-object detector based on extremely efficient convolutional neural networks for mobile devices).
In one embodiment, the output of the person detector also includes a confidence value for each person box. In this step, if the person detector outputs multiple person boxes, the person boxes whose confidence is higher than a preset value can be filtered out and kept.
S102: detecting faces in the image to obtain face boxes.
After the image is obtained, a face detector can be used to detect and locate the faces in the image. Locating may include selecting a face in the image with a rectangular box and taking that box as the face box, then determining the position information of the face box. For example, the position of the face box can be determined from the coordinates of its upper-left corner together with its width and height; alternatively, it can be determined from the coordinates of its four corners.
The face detector may be, for example, PyramidBox (a deep-learning face detection model) or DSFD (Dual Shot Face Detector).
In one embodiment, the output of the face detector also includes a confidence value for each face box. In this step, the face boxes whose confidence is higher than a preset value can be filtered out and kept.
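The confidence filtering described for both detectors can be sketched as follows; the (xmin, ymin, w, h) box format, the score values, and the 0.5 threshold are illustrative assumptions, not values specified in this description:

```python
def filter_boxes(boxes, scores, min_conf=0.5):
    """Keep only the detector boxes whose confidence exceeds a preset value.

    boxes:  list of (xmin, ymin, w, h) tuples
    scores: list of confidence values, one per box
    """
    return [box for box, score in zip(boxes, scores) if score > min_conf]

# Example: two person boxes, one below the assumed threshold of 0.5
people = filter_boxes([(10, 20, 50, 120), (200, 30, 40, 110)], [0.91, 0.32])
```

The same helper can be applied unchanged to the face detector's output, since both detectors emit boxes with per-box confidences.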
S103: obtaining a matching result according to the degree of overlap between the face box and the person box.
Whether a face box and a person box belong to the same person can be judged by their degree of overlap. The degree of overlap can be the proportion of the face box occupied by the intersection between the face box and the person box. For example, a ratio of 0 indicates that the face box and the person box have no intersection, and a ratio of 100% indicates that the face box lies completely within the person box. A ratio threshold can therefore be preset, for example 50%. If the degree of overlap is greater than the threshold, the match succeeds and the face box and the person box belong to the same person. If the degree of overlap is less than the threshold, the match fails and the face box and the person box do not belong to the same person.
S104: when the matching result is failure, expanding the face box to obtain an expanded box.
A face box may fail to match any person box when, for example, a person's body is occluded and only the face is exposed, so that person's face box cannot be matched to the person box of any other person. In this situation, the face box can be expanded with the size of a person box as a reference. For example, the left and right edges of the face box are expanded outward toward both sides, and the lower edge is expanded downward. Expanding the left and right edges can double the distance from each edge to the vertical center line of the face box. Expanding the lower edge can take the distance between the lower and upper edges of the face box as the expansion base and move the lower edge down to five times the base. The expanded face box then serves as the detected person.
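A minimal sketch of this expansion rule, assuming top-left (xmin, ymin) coordinates with y growing downward, and reading "five times the expansion base" as making the box height five times the face height (both readings are assumptions about the description above):

```python
def expand_face_box(xmin, ymin, w, h):
    """Expand an unmatched face box into a person-sized box.

    Left/right edges: the distance from each edge to the vertical
    center line is doubled, so the width becomes 2*w about the same center.
    Lower edge: moved down so the box height becomes 5*h.
    Coordinates are (xmin, ymin) = top-left, with y growing downward.
    """
    cx = xmin + w / 2.0      # vertical center line of the face box
    new_w = 2 * w
    new_h = 5 * h
    return (cx - new_w / 2.0, ymin, new_w, new_h)
```

For a 40x60 face box at (100, 50), this yields an 80x300 box at (80, 50): the width doubles about the original center line and the box grows downward, approximating the occluded body.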
In one embodiment, the method further includes: when the matching result is success, obtaining a matched box from the person box and the face box.
A successful matching result indicates that the person box and the face box belong to the same person, so the two can be merged into a matched box. The total number of detected persons can then be counted from the number of matched boxes and the number of expanded boxes.
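The overall flow of S101-S104 together with this counting step can be sketched as follows. The box format, the 0.5 threshold, and the expansion factors are assumptions carried over from the sketches above; ties where several faces overlap one person box are not resolved here (the description resolves them later by center-line distance):

```python
def match_and_count(person_boxes, face_boxes, threshold=0.5):
    """Match face boxes to person boxes by overlap degree, expand the
    unmatched faces, and count the detected persons.

    Boxes are (xmin, ymin, w, h) tuples with y growing downward.
    Returns (matched_pairs, expanded_boxes, total_count).
    """
    def overlap(face, person):
        # intersection area divided by the face-box area
        fx, fy, fw, fh = face
        px, py, pw, ph = person
        iw = max(0.0, min(fx + fw, px + pw) - max(fx, px))
        ih = max(0.0, min(fy + fh, py + ph) - max(fy, py))
        return (iw * ih) / (fw * fh) if fw * fh > 0 else 0.0

    matched, expanded = [], []
    for face in face_boxes:
        hits = [p for p in person_boxes if overlap(face, p) >= threshold]
        if hits:
            matched.append((face, hits[0]))   # merge into a matched box
        else:
            x, y, w, h = face                 # expand: width x2, height x5
            expanded.append((x + w / 2.0 - w, y, 2 * w, 5 * h))
    return matched, expanded, len(matched) + len(expanded)
```

The total count is simply the number of matched boxes plus the number of expanded boxes, as stated above.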
As shown in Fig. 2, in one embodiment, step S103 includes:
S1031: calculating the degree of overlap between the face box and the person box according to their coordinate information.
S1032: obtaining the matching result according to the degree of overlap.
The coordinate information of the person box and the face box can be obtained from the outputs of the person detector and the face detector, respectively.
The coordinate information of the person box can be expressed as (xmin_human, ymin_human, w_human, h_human), where xmin_human and ymin_human denote the abscissa and ordinate of the upper-left corner of the person box, and w_human and h_human denote the width and height of the person box.
The coordinate information of the face box can be expressed as (xmin_face, ymin_face, w_face, h_face), where xmin_face and ymin_face denote the abscissa and ordinate of the upper-left corner of the face box, and w_face and h_face denote the width and height of the face box.
From the coordinate information of the face box and the person box, the position and size of the region corresponding to each box can be computed separately. From those positions and sizes the degree of overlap of the two boxes can be calculated, so as to judge by matching whether the face box and the person box belong to the same person.
As shown in Fig. 3, in one embodiment, step S1031 includes:
S10311: calculating the area of the intersection of the face box and the person box according to their coordinate information.
S10312: calculating the area of the face box.
S10313: calculating the ratio of the intersection area to the face-box area to obtain the degree of overlap.
From the coordinate information of the face box and the person box, the coordinates of the four vertices of each box can be calculated separately, so as to judge whether the two boxes intersect. If they are judged to intersect, the width and height of the intersection are calculated from the coordinates.
Fig. 4 is a schematic diagram of a face box and a person box. For example, the coordinates of the four vertices of the face box can be expressed as (x1, y1), (x2, y1), (x1, y2), (x2, y2), and the coordinates of the four vertices of the person box as (x'1, y'1), (x'2, y'1), (x'1, y'2), (x'2, y'2). When the boxes are judged to intersect as in Fig. 4, the coordinates of the four vertices of the intersection can be expressed as (x'1, y'1), (x2, y'1), (x'1, y2), (x2, y2). The width of the intersection can be expressed as |x2 - x'1| and its height as |y'1 - y2|. The product of the intersection's width and height gives the intersection area s1, and the product of the face box's width and height gives the face-box area s2. Computing s1/s2 yields the degree of overlap.
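The overlap computation above can be sketched as follows, assuming boxes given as (xmin, ymin, w, h) tuples with y growing downward. Note that the ratio is taken against the face-box area only, not against the union as in the standard IoU measure:

```python
def overlap_degree(face, person):
    """Degree of overlap: intersection area divided by the face-box area.

    Boxes are (xmin, ymin, w, h) tuples; returns 0.0 when the boxes
    do not intersect or the face box is degenerate.
    """
    fx1, fy1, fw, fh = face
    px1, py1, pw, ph = person
    # intersection rectangle from the max/min of the box edges
    ix1 = max(fx1, px1)
    iy1 = max(fy1, py1)
    ix2 = min(fx1 + fw, px1 + pw)
    iy2 = min(fy1 + fh, py1 + ph)
    iw = max(0.0, ix2 - ix1)
    ih = max(0.0, iy2 - iy1)
    s1 = iw * ih           # intersection area
    s2 = fw * fh           # face-box area
    return s1 / s2 if s2 > 0 else 0.0
```

Normalizing by the face-box area rather than the union means a small face fully inside a large person box still scores 1.0, which is exactly the case this matching step needs to recognize.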
As shown in Fig. 5, in one embodiment, step S1032 includes:
S10321: comparing the degree of overlap with a threshold.
S10322: when the degree of overlap is not less than the threshold, the matching result is success.
S10323: when the degree of overlap is less than the threshold, the matching result is failure.
An overlap threshold is preset, and the degree of overlap is compared with it. If the degree of overlap is not less than the threshold, the face box and the person box belong to the same person. If the degree of overlap is less than the threshold, the face box and the person box do not belong to the same person.
As shown in Fig. 6, in one embodiment, when the matching result is that multiple face boxes match one person box, the method further includes:
S601: separately calculating the distance between each of the multiple face boxes and the person box.
S602: obtaining the matching result according to the distances.
As shown in Fig. 7, in one embodiment, step S601 includes:
S6011: determining the center line of each face box and the center line of the person box according to their coordinate information.
S6012: separately calculating the distance between the center line of each face box and the center line of the person box, to obtain the distance between each face box and the person box.
In one embodiment, step S602 further comprises: determining the face box closest to the person box as the face box that matches the person box.
Fig. 8 is a schematic diagram of the case where two face boxes match one person box. During person recognition, the abscissa of the person box's center line and the abscissas of the two face boxes' center lines are calculated first. For example, the abscissa of the person box's center line can be expressed as (x'1 + x'2)/2, that of the first face box's center line as (x1 + x2)/2, and that of the second face box's center line as (x3 + x4)/2. Then the distance l1 between the person box's center line and the first face box's center line, and the distance l2 between the person box's center line and the second face box's center line, are calculated. Comparing their magnitudes: if |l1| < |l2|, the first face box is matched with the person box; otherwise, if |l1| > |l2|, the second face box is matched with the person box.
Fig. 9 shows a structural block diagram of a person detection apparatus according to an embodiment of the present invention. As shown in Fig. 9, the apparatus includes:
a first detection module 901, configured to detect persons in an image to obtain person boxes;
a second detection module 902, configured to detect faces in the image to obtain face boxes;
a matching module 903, configured to obtain a matching result according to the degree of overlap between the face box and the person box;
an expansion module 904, configured to expand the face box to obtain an expanded box when the matching result is failure.
In one embodiment, the apparatus further includes: a matched-box synthesis module, configured to obtain a matched box from the person box and the face box when the matching result is success.
As shown in Fig. 10, in one embodiment, the matching module 903 includes:
an overlap calculation submodule 9031, configured to calculate the degree of overlap between the face box and the person box according to their coordinate information;
a matching execution submodule 9032, configured to obtain the matching result according to the degree of overlap.
As shown in Fig. 11, in one embodiment, the overlap calculation submodule 9031 includes:
an intersection-area calculation unit 90311, configured to calculate the area of the intersection of the face box and the person box according to their coordinate information;
a face-box-area calculation unit 90312, configured to calculate the area of the face box;
an overlap calculation execution unit 90313, configured to calculate the ratio of the intersection area to the face-box area to obtain the degree of overlap.
As shown in Fig. 12, in one embodiment, the matching execution submodule 9032 includes:
a comparison unit 90321, configured to compare the degree of overlap with a threshold;
a judgment unit 90322, configured to judge the matching result as success when the degree of overlap is not less than the threshold;
the judgment unit 90322 is further configured to judge the matching result as failure when the degree of overlap is less than the threshold.
In one embodiment, the apparatus further includes:
a distance calculation module, configured to separately calculate the distance between each of the multiple face boxes and the person box;
the matching execution submodule 9032 is further configured to obtain the matching result according to the distances.
In one embodiment, the distance calculation module includes:
a center-line determination submodule, configured to determine the center line of each face box and the center line of the person box according to their coordinate information;
a distance calculation execution submodule, configured to separately calculate the distance between the center line of each face box and the center line of the person box, to obtain the distance between each face box and the person box.
The matching execution submodule 9032 is further configured to determine the face box closest to the person box as the face box that matches the person box.
Fig. 13 shows a structural block diagram of a person detection apparatus according to an embodiment of the present invention. As shown in Fig. 13, the apparatus includes a memory 1310 and a processor 1320; the memory 1310 stores a computer program that can run on the processor 1320. When the processor 1320 executes the computer program, the person detection method of the above embodiments is implemented. There may be one or more memories 1310 and one or more processors 1320.
The apparatus further includes:
a communication interface 1330, configured to communicate with external devices for data exchange.
The memory 1310 may include a high-speed RAM memory, and may also include a non-volatile memory, such as at least one magnetic disk memory.
If the memory 1310, the processor 1320, and the communication interface 1330 are implemented independently, they can be interconnected by a bus to communicate with one another. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in Fig. 13, but this does not mean that there is only one bus or only one type of bus.
Optionally, in a specific implementation, if the memory 1310, the processor 1320, and the communication interface 1330 are integrated on one chip, they can communicate with one another through internal interfaces.
An embodiment of the invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements any of the methods of the above embodiments.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples", and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. Moreover, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided there is no mutual contradiction, those skilled in the art may combine the features of different embodiments or examples described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality of" means two or more, unless otherwise expressly and specifically limited.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially concurrent manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus, or device and execute the instructions). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium could even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example, by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that each part of the present invention may be implemented by hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one, or a combination, of the following technologies known in the art: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those of ordinary skill in the art will understand that all or part of the steps carried by the method of the above embodiments may be completed by instructing relevant hardware through a program, and the program may be stored in a computer-readable storage medium; when executed, the program includes one of the steps of the method embodiment or a combination thereof.
In addition, the functional units in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and is sold or used as an independent product, it may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can readily conceive of various changes or substitutions within the technical scope disclosed by the present invention, and these should all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (17)
1. A person detection method, comprising:
detecting a person in an image to obtain a person frame;
detecting a face in the image to obtain a face frame;
obtaining a matching result according to a degree of overlap between the face frame and the person frame;
and in a case where the matching result is a failure, extending the face frame to obtain an expanded frame.
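As an illustrative, non-limiting sketch of the extension step of claim 1, the face frame may be enlarged toward the body region when no person frame is matched. The frame representation `(x1, y1, x2, y2)` and the widening and downward-extension ratios below are assumed values chosen for illustration; the claims do not prescribe them:

```python
def expand_face_frame(face, img_w, img_h, w_ratio=1.0, h_ratio=4.0):
    """Extend a face frame (x1, y1, x2, y2) into an expanded frame that
    approximates the person region. w_ratio and h_ratio are assumptions."""
    x1, y1, x2, y2 = face
    fw, fh = x2 - x1, y2 - y1
    # Widen symmetrically by w_ratio of the face width, clipped to the image ...
    ex1 = max(0, x1 - fw * w_ratio / 2)
    ex2 = min(img_w, x2 + fw * w_ratio / 2)
    # ... and extend downward by h_ratio of the face height toward the body.
    ey2 = min(img_h, y2 + fh * h_ratio)
    return (ex1, y1, ex2, ey2)
```

In this sketch, a 10-by-10 face at (10, 10) in a 100-by-100 image expands to a 20-wide, 50-tall frame anchored at the top of the face.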
2. The method according to claim 1, further comprising:
in a case where the matching result is a success, obtaining a matching frame according to the person frame and the face frame.
3. The method according to claim 1, wherein the obtaining a matching result according to a degree of overlap between the face frame and the person frame comprises:
calculating the degree of overlap between the face frame and the person frame according to coordinate information of the face frame and coordinate information of the person frame;
and obtaining the matching result according to the degree of overlap.
4. The method according to claim 3, wherein the calculating the degree of overlap between the face frame and the person frame according to coordinate information of the face frame and coordinate information of the person frame comprises:
calculating an area of an intersection of the face frame and the person frame according to the coordinate information of the face frame and the coordinate information of the person frame;
calculating an area of the face frame;
and calculating a ratio of the area of the intersection to the area of the face frame to obtain the degree of overlap.
5. The method according to claim 3, wherein the obtaining the matching result according to the degree of overlap comprises:
comparing the degree of overlap with a threshold;
in a case where the degree of overlap is not less than the threshold, the matching result is a success;
and in a case where the degree of overlap is less than the threshold, the matching result is a failure.
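The computations of claims 4 and 5 can be sketched as follows; frames are taken as `(x1, y1, x2, y2)` tuples and the threshold value is an assumption for illustration, since the claims only speak of coordinate information and a threshold:

```python
def overlap_degree(face, person):
    """Ratio of the face/person intersection area to the face frame area
    (claims 3-4). Frames are (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    fx1, fy1, fx2, fy2 = face
    px1, py1, px2, py2 = person
    # Intersection rectangle; width and height clamp to 0 when disjoint.
    iw = max(0.0, min(fx2, px2) - max(fx1, px1))
    ih = max(0.0, min(fy2, py2) - max(fy1, py1))
    face_area = (fx2 - fx1) * (fy2 - fy1)
    return iw * ih / face_area

def match(face, person, threshold=0.8):
    """Claim 5: success when the degree of overlap reaches the threshold.
    The value 0.8 is an assumed example, not taken from the claims."""
    return overlap_degree(face, person) >= threshold
```

Because the denominator is the face area rather than the union area, a face frame lying fully inside its person frame yields a degree of overlap of 1.0, so a fixed threshold separates faces that belong to a detected body from faces whose body is occluded.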
6. The method according to claim 3, wherein, in a case where the matching result is that a plurality of face frames are successfully matched with one person frame, the method further comprises:
calculating a distance between each of the plurality of face frames and the person frame;
and obtaining the matching result according to the distances.
7. The method according to claim 6, wherein the calculating a distance between each of the plurality of face frames and the person frame comprises:
determining a center line of each face frame and a center line of the person frame according to coordinate information of each face frame and coordinate information of the person frame;
and calculating a distance between the center line of each face frame and the center line of the person frame to obtain the distance between each face frame and the person frame.
8. The method according to claim 6, wherein the obtaining the matching result according to the distances comprises:
determining the face frame closest to the person frame as the face frame successfully matched with the person frame.
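The tie-breaking of claims 6 to 8 can be sketched as follows; the claims do not state the orientation of the center line, so the vertical center line x = (x1 + x2) / 2 is assumed here:

```python
def center_line_x(frame):
    # Assumed vertical center line of a (x1, y1, x2, y2) frame.
    x1, _, x2, _ = frame
    return (x1 + x2) / 2.0

def nearest_face_frame(face_frames, person_frame):
    """Among several face frames matched to one person frame, keep the one
    whose center line is closest to the person frame's center line."""
    pc = center_line_x(person_frame)
    return min(face_frames, key=lambda f: abs(center_line_x(f) - pc))
```

Using the horizontal offset between center lines reflects the typical geometry of claims 6-8: the true face tends to sit on the vertical axis of its body, while neighboring faces are laterally displaced.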
9. A person detection device, comprising:
a first detection module configured to detect a person in an image to obtain a person frame;
a second detection module configured to detect a face in the image to obtain a face frame;
a matching module configured to obtain a matching result according to a degree of overlap between the face frame and the person frame;
and an expansion module configured to, in a case where the matching result is a failure, extend the face frame to obtain an expanded frame.
10. The device according to claim 9, further comprising:
a matching frame synthesis module configured to, in a case where the matching result is a success, obtain a matching frame according to the person frame and the face frame.
11. The device according to claim 9, wherein the matching module comprises:
an overlap calculation submodule configured to calculate the degree of overlap between the face frame and the person frame according to coordinate information of the face frame and coordinate information of the person frame;
and a matching execution submodule configured to obtain the matching result according to the degree of overlap.
12. The device according to claim 11, wherein the overlap calculation submodule comprises:
an intersection area calculation unit configured to calculate an area of an intersection of the face frame and the person frame according to the coordinate information of the face frame and the coordinate information of the person frame;
a face frame area calculation unit configured to calculate an area of the face frame;
and an overlap calculation execution unit configured to calculate a ratio of the area of the intersection to the area of the face frame to obtain the degree of overlap.
13. The device according to claim 11, wherein the matching execution submodule comprises:
a comparison unit configured to compare the degree of overlap with a threshold;
and a judgment unit configured to judge the matching result as a success in a case where the degree of overlap is not less than the threshold;
wherein the judgment unit is further configured to judge the matching result as a failure in a case where the degree of overlap is less than the threshold.
14. The device according to claim 11, further comprising:
a distance calculation module configured to calculate a distance between each of a plurality of face frames and the person frame;
wherein the matching execution submodule is further configured to obtain the matching result according to the distances.
15. The device according to claim 14, wherein the distance calculation module comprises:
a center line determination submodule configured to determine a center line of each face frame and a center line of the person frame according to coordinate information of each face frame and coordinate information of the person frame;
and a distance calculation execution submodule configured to calculate a distance between the center line of each face frame and the center line of the person frame to obtain the distance between each face frame and the person frame.
16. A person detection device, comprising:
one or more processors;
and a storage device for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1 to 8.
17. A computer-readable storage medium storing a computer program, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910733765.9A CN110427908A (en) | 2019-08-08 | 2019-08-08 | A kind of method, apparatus and computer readable storage medium of person detecting |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110427908A true CN110427908A (en) | 2019-11-08 |
Family
ID=68415229
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106447695A (en) * | 2016-09-23 | 2017-02-22 | 广州视源电子科技股份有限公司 | Same object determining method and device in multi-object tracking |
CN107145816A (en) * | 2017-02-24 | 2017-09-08 | 北京悉见科技有限公司 | Object identifying tracking and device |
CN109740516A (en) * | 2018-12-29 | 2019-05-10 | 深圳市商汤科技有限公司 | A kind of user identification method, device, electronic equipment and storage medium |
Non-Patent Citations (3)
Title |
---|
Zhang Zhong, "Research on Pedestrian Detection Algorithms in Surveillance Video", CNKI *
Wang Kainan, "Research on Person Re-identification Technology in Surveillance Video", CNKI *
Chen Qi, "Event Detection in Surveillance Video of Complex Scenes", CNKI *
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7262659B2 (en) | 2019-09-18 | 2023-04-21 | ベイジン センスタイム テクノロジー ディベロップメント カンパニー リミテッド | Target object matching method and device, electronic device and storage medium |
JP2022542668A (en) * | 2019-09-18 | 2022-10-06 | ベイジン センスタイム テクノロジー ディベロップメント カンパニー リミテッド | Target object matching method and device, electronic device and storage medium |
CN111144215A (en) * | 2019-11-27 | 2020-05-12 | 北京迈格威科技有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN111144215B (en) * | 2019-11-27 | 2023-11-24 | 北京迈格威科技有限公司 | Image processing method, device, electronic equipment and storage medium |
CN111124862A (en) * | 2019-12-24 | 2020-05-08 | 北京安兔兔科技有限公司 | Intelligent equipment performance testing method and device and intelligent equipment |
CN111124862B (en) * | 2019-12-24 | 2024-01-30 | 北京安兔兔科技有限公司 | Intelligent device performance testing method and device and intelligent device |
WO2021164395A1 (en) * | 2020-02-18 | 2021-08-26 | 上海商汤临港智能科技有限公司 | Image processing method and apparatus, electronic device, and computer program product |
JP2022526347A (en) * | 2020-02-18 | 2022-05-24 | シャンハイ センスタイム リンガン インテリジェント テクノロジー カンパニー リミテッド | Image processing methods, equipment, electronic devices and computer program products |
JP7235892B2 (en) | 2020-02-18 | 2023-03-08 | シャンハイ センスタイム リンガン インテリジェント テクノロジー カンパニー リミテッド | Image processing method, apparatus, electronic equipment and computer program product |
CN111353473A (en) * | 2020-03-30 | 2020-06-30 | 浙江大华技术股份有限公司 | Face detection method and device, electronic equipment and storage medium |
CN111353473B (en) * | 2020-03-30 | 2023-04-14 | 浙江大华技术股份有限公司 | Face detection method and device, electronic equipment and storage medium |
CN111814612A (en) * | 2020-06-24 | 2020-10-23 | 浙江大华技术股份有限公司 | Target face detection method and related device thereof |
CN111950491A (en) * | 2020-08-19 | 2020-11-17 | 成都飞英思特科技有限公司 | Personnel density monitoring method and device and computer readable storage medium |
CN111950491B (en) * | 2020-08-19 | 2024-04-02 | 成都飞英思特科技有限公司 | Personnel density monitoring method and device and computer readable storage medium |
CN113196292A (en) * | 2020-12-29 | 2021-07-30 | 商汤国际私人有限公司 | Object detection method and device and electronic equipment |
TWI778652B (en) * | 2021-04-12 | 2022-09-21 | 新加坡商鴻運科股份有限公司 | Method for calculating overlap, electronic equipment and storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20191108 |