CN109886208A - Object detection method, apparatus, computer device, and storage medium
Abstract
The present disclosure relates to an object detection method, apparatus, computer device, and storage medium, belonging to the technical field of computer vision. The method includes: determining a feature map of a target image; determining multiple feature points in the feature map; for each feature point, determining multiple reference points, and determining at least one anchor box centered on each reference point; and, based on the determined anchor boxes, performing object detection on the feature map to obtain the position information and category of each object contained in the target image. With the present disclosure, missed detections can be reduced when detecting dense small objects.
Description
Technical field
The present disclosure relates to the technical field of computer vision, and in particular to an object detection method, apparatus, computer device, and storage medium.
Background
Object detection is a key problem in the field of computer vision. The goal of object detection is, first, to determine whether a picture to be examined contains an object to be detected and, if it does, to further determine the position and category of that object.
A method of object detection in the related art is as follows: first, multiple anchor boxes are determined, centered on the feature points of a feature map; then each anchor box is examined, and whenever an object is present in an anchor box, the position information and category of that object are output.
Because one anchor box can identify only one object, when the anchor boxes corresponding to a single feature point contain multiple objects, the detection regions covered by those anchor boxes overlap heavily, since their center points coincide. As a result, detection over these anchor boxes may find only the same object repeatedly, and the remaining objects are missed.
Summary of the invention
The present disclosure provides an object detection method, apparatus, computer device, and storage medium, which can solve the technical problem that existing object detection methods, when applied to the detection of dense small objects, often miss objects.
According to a first aspect of the embodiments of the present disclosure, an object detection method is provided, comprising:
determining a feature map of a target image;
determining multiple feature points in the feature map;
for each feature point, determining multiple reference points, and determining at least one anchor box centered on each reference point; and
based on the determined anchor boxes, performing object detection on the feature map to obtain the position information and category of each object contained in the target image.
Optionally, determining multiple reference points for each feature point and determining at least one anchor box centered on each reference point comprises:
for each feature point, determining at least one initial anchor box; and
determining multiple reference points within each initial anchor box, and determining at least one anchor box centered on each reference point.
Optionally, determining multiple reference points within each initial anchor box and determining at least one anchor box centered on each reference point comprises:
determining multiple uniformly distributed reference points within each initial anchor box, and, based on the reference points in each initial anchor box, dividing each initial anchor box into multiple anchor boxes, where the center point of each resulting anchor box is one of the reference points.
Optionally, determining multiple reference points for each feature point and determining at least one anchor box centered on each reference point comprises:
for each feature point, determining multiple reference points based on preset position information of the reference points relative to the feature point, and determining at least one anchor box centered on each reference point.
Optionally, determining the feature map of the target image comprises:
determining feature maps of the target image at multiple different scales.
Optionally, after performing object detection on the feature map based on the determined anchor boxes to obtain the position information and category of each object contained in the target image, the method further comprises:
displaying the target image, and, based on the position information and category of each object, adding a label for each object in the target image.
Optionally, performing object detection on the feature map based on the determined anchor boxes to obtain the position information and category of each object contained in the target image comprises:
inputting the feature-map region covered by each determined anchor box into detection models corresponding to different object categories, obtaining, for each anchor box, a detection result from each detection model; and
determining, based on the detection results of the different detection models for each anchor box, the position information and category of each object contained in the target image.
According to a second aspect of the embodiments of the present disclosure, an object detection apparatus is provided, comprising:
a determination unit configured to determine a feature map of a target image, determine multiple feature points in the feature map, and, for each feature point, determine multiple reference points and determine at least one anchor box centered on each reference point; and
a detection unit configured to perform object detection on the feature map based on the determined anchor boxes, obtaining the position information and category of each object contained in the target image.
Optionally, the determination unit is configured to:
for each feature point, determine at least one initial anchor box; and
determine multiple reference points within each initial anchor box, and determine at least one anchor box centered on each reference point.
Optionally, the determination unit is configured to:
determine multiple uniformly distributed reference points within each initial anchor box, and, based on the reference points in each initial anchor box, divide each initial anchor box into multiple anchor boxes, where the center point of each resulting anchor box is one of the reference points.
Optionally, the determination unit is configured to:
for each feature point, determine multiple reference points based on preset position information of the reference points relative to the feature point, and determine at least one anchor box centered on each reference point.
Optionally, the determination unit is configured to:
determine feature maps of the target image at multiple different scales.
Optionally, the apparatus further comprises:
a labeling unit configured to display the target image and, based on the position information and category of each object, add a label for each object in the target image.
Optionally, the detection unit is configured to:
input the feature-map region covered by each determined anchor box into detection models corresponding to different object categories, obtaining, for each anchor box, a detection result from each detection model; and
determine, based on the detection results of the different detection models for each anchor box, the position information and category of each object contained in the target image.
According to a third aspect of the embodiments of the present disclosure, a computer device is provided, comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
execute the method described in the first aspect of the embodiments of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided, characterized in that, when instructions in the storage medium are executed by a processor of a computer device, the computer device performs the method described in the first aspect of the embodiments of the present disclosure.
According to a fifth aspect of the embodiments of the present disclosure, an application program is provided, comprising one or more instructions executable by a processor of a server to perform the method described in the first aspect of the embodiments of the present disclosure.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
In the embodiments of the present disclosure, multiple reference points are first determined based on each feature point, and at least one anchor box is then generated centered on each reference point, so that each feature point corresponds to multiple non-concentric anchor boxes.
Compared with the technical solutions in the related art, because each feature point corresponds to multiple anchor boxes with different centers, anchor boxes at different positions are responsible for object detection in their respective regions, and the detection regions they cover overlap less. Consequently, when the method provided by the embodiments of the present disclosure is applied to the detection of dense small objects, fewer objects are missed.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings herein are incorporated into and form part of this specification; they illustrate embodiments consistent with the invention and, together with the specification, serve to explain the principles of the invention.
Fig. 1 is a flowchart of an object detection method according to an exemplary embodiment.
Fig. 2 is a block diagram of an object detection apparatus according to an exemplary embodiment.
Fig. 3 is a structural block diagram of a terminal according to an exemplary embodiment.
Fig. 4 is a structural block diagram of a computer device according to an exemplary embodiment.
Fig. 5 shows a feature map of a target image according to an exemplary embodiment.
Fig. 6 shows a feature map containing anchor boxes according to an exemplary embodiment.
Fig. 7 shows a feature map containing anchor boxes according to an exemplary embodiment.
Detailed description
Exemplary embodiments are described in detail here, and examples thereof are illustrated in the accompanying drawings. In the following description, when reference is made to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatuses and methods, consistent with some aspects of the invention, as detailed in the appended claims.
The embodiments of the present disclosure provide an object detection method, which can be implemented by a computer device. The computer device may be a mobile terminal such as a mobile phone, tablet computer, notebook, or monitoring device; a fixed terminal such as a desktop computer; or a server.
The method provided by the embodiments of the present disclosure can be applied in scenarios where object detection is performed on images, for example, in intelligent traffic systems, intelligent monitoring systems, military target detection, and medically navigated surgery. Moreover, the method is particularly suitable for scenarios in which images containing many small objects are detected and identified, such as face detection in large group photos, detection of dense crowds of people in public places, and density estimation of fish shoals.
Fig. 1 is a flowchart of an object detection method according to an exemplary embodiment. As shown in Fig. 1, the method is used in a computer device and includes the following steps.
In step 101, a feature map of the target image is determined.
Here, the target image is the image on which object detection is to be performed.
In an implementation, before the feature map is determined, the target image must first be obtained. The target image may be acquired in real time; this mode applies mainly to computer devices such as monitoring equipment, where the monitoring device captures surveillance video in real time and continually takes image frames from the video as target images. The target image may also be obtained by extracting an image document or video data previously stored in the computer device.
After the target image is obtained, it can be input into a neural network model to generate the feature map. The neural network model in this implementation may be a CNN (convolutional neural network) model or a VGG (Visual Geometry Group) model. Further, to reduce the amount of computation, the target image may first be scaled, and the scaled target image then input into the neural network model for object detection.
The neural network model contains multiple stages of convolutional layers. After the target image is input into the model, the model convolves it stage by stage, yielding a feature map from each stage in turn. One of these feature maps is selected and determined as the feature map of the target image.
In the case where image frames are continually taken from surveillance video as target images, each target image is input into the neural network model as soon as it is obtained, producing a feature map for each target image.
Optionally, to make the detection results more accurate, feature maps of the target image at different scales can be used for object detection, with feature maps of different scales each responsible for detecting objects of different sizes. The corresponding processing is as follows: feature maps of the target image at multiple different scales are determined.
In an implementation, the neural network model contains multiple stages of convolutional layers. After the target image is input into the model, it is convolved stage by stage, and the feature map of each stage can be obtained in turn. The feature maps of earlier convolutional layers have larger scales and are suited to detecting smaller objects, whereas the feature maps of later convolutional layers have smaller scales and are suited to detecting larger objects.
From the feature maps of the various convolutional layers, feature maps at multiple different scales are selected and determined as the feature maps of the target image, so that feature maps of different scales are each responsible for detecting objects of different sizes, thereby improving the accuracy of object detection.
A specific procedure may be as follows: the target image is first input into a VGG16 neural network model; then, using the SSD (single shot multibox detector) framework, the feature maps of the three layers conv3_3, conv4_3, and conv5_3 are extracted as the feature maps of the target image, improving the accuracy of object detection.
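As a rough illustration of the multi-scale relationship described above (not taken from the patent itself): in a standard VGG16, the layers conv3_3, conv4_3, and conv5_3 sit after two, three, and four max-pooling stages respectively, so their spatial resolutions are 1/4, 1/8, and 1/16 of the input. A minimal sketch under that assumption:

```python
def vgg16_feature_map_sizes(h, w):
    """Spatial sizes of the conv3_3, conv4_3, and conv5_3 feature maps
    of VGG16 for an input of size (h, w), assuming each preceding
    max-pooling layer halves the resolution (overall strides 4, 8, 16)."""
    return {
        "conv3_3": (h // 4, w // 4),    # after 2 pooling stages
        "conv4_3": (h // 8, w // 8),    # after 3 pooling stages
        "conv5_3": (h // 16, w // 16),  # after 4 pooling stages
    }

sizes = vgg16_feature_map_sizes(512, 512)
```

The earlier, larger feature maps are the ones suited to small objects, matching the scale assignment discussed above.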
In step 102, multiple feature points in the feature map are determined.
In step 103, for each feature point, multiple reference points are determined, and at least one anchor box is determined centered on each reference point.
Here, an anchor box may also be referred to as a pre-selection box or simply an anchor.
In an implementation, each feature point may correspond to n reference points, and each reference point to m anchor boxes. If the number of feature points is p, a total of p × n × m anchor boxes are determined in the feature map, and these anchor boxes divide the feature map into p × n × m feature-map regions.
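The counting in the paragraph above can be checked with a one-line product (an illustrative calculation; the example numbers are assumptions, not values from the patent):

```python
def total_anchor_boxes(p, n, m):
    """Total anchor boxes for p feature points, n reference points per
    feature point, and m anchor boxes per reference point."""
    return p * n * m

# e.g. a 64x64 feature map (p = 4096) with 4 reference points per
# feature point and 1 anchor box per reference point
count = total_anchor_boxes(64 * 64, 4, 1)
```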
In the embodiments of the present disclosure, multiple reference points are first determined based on each feature point, and at least one anchor box is then generated centered on each reference point, so that each feature point corresponds to multiple non-concentric anchor boxes.
Compared with the technical solutions in the related art, because each feature point corresponds to multiple anchor boxes with different centers, anchor boxes at different positions are responsible for object detection in their respective regions, and the detection regions they cover overlap less. Consequently, when the method provided by the embodiments of the present disclosure is applied to the detection of dense small objects, fewer objects are missed.
Optionally, anchor boxes can be obtained by first pre-generating initial anchor boxes and then dividing them. The corresponding processing is as follows: for each feature point, at least one initial anchor box is determined; multiple reference points are determined within each initial anchor box, and at least one anchor box is determined centered on each reference point.
In an implementation, for each feature point, at least one initial-anchor center point is first determined, and at least one initial anchor box is then generated centered on each such center point. When generating an initial anchor box, its scale information and ratio information must also be designed: the scale information characterizes the area of the initial anchor box, and the ratio information characterizes its aspect ratio (taking the horizontal extent of the initial anchor box as its length and the vertical extent as its width). Multiple initial anchor boxes can be generated based on the center points, scale information, and ratio information. For example, the center of a feature point may be determined as the center point of an initial anchor box whose area is set to 1 and whose aspect ratio is 1:1, as shown in Fig. 6.
After the initial anchor boxes are generated, multiple reference points must be selected within each of them. When selecting reference points, the coordinates of each reference point can be determined by taking one of the four vertices of the initial anchor box, or its center point, as the origin, with the horizontal direction as the x-axis and the vertical direction as the y-axis.
After the reference points are determined, at least one anchor box is determined centered on each of them, with a designed area and aspect ratio (taking the horizontal extent of the anchor box as its length and the vertical extent as its width).
Optionally, each initial anchor box can be uniformly divided into several anchor boxes. The corresponding processing is as follows: multiple uniformly distributed reference points are determined within each initial anchor box, and, based on these reference points, each initial anchor box is divided into multiple anchor boxes, the center point of each resulting anchor box being one of the reference points.
In an implementation, after an initial anchor box is generated, several reference points are uniformly determined within it, and one anchor box is then determined centered on each of them. If the number of reference points determined within an initial anchor box is k, the initial anchor box is divided into k anchor boxes of identical shape, each with an area equal to 1/k of the area of the initial anchor box.
For example, as shown in Fig. 6, the center of each feature point is determined as an initial-anchor center point, and one initial anchor box is generated centered on each such point. The scale of the initial anchor box is 1 and its aspect ratio is 1; that is, the initial anchor box corresponding to each feature point is a square of area 1. Within this initial anchor box, four reference points are uniformly selected. Taking the top-left corner of the initial anchor box as the origin, the horizontal direction as the x-axis (positive to the right), and the vertical direction as the y-axis (positive downward), the coordinates of the four reference points are (0.25, 0.25), (0.25, 0.75), (0.75, 0.25), and (0.75, 0.75). Centered on these four reference points, the initial anchor box is divided into four equal square anchor boxes, each of area 0.25.
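The uniform subdivision in the example above can be sketched as follows. This is a minimal illustration; the function name and the (x_min, y_min, width, height) box representation are assumptions, with the y-axis pointing down as in the text:

```python
def split_initial_anchor(x0, y0, size, k_per_side):
    """Split a square initial anchor box with top-left corner (x0, y0)
    and side length `size` into k_per_side**2 equal square anchor
    boxes. Returns (reference_point, box) pairs, where each box is
    (x_min, y_min, width, height) and the reference point is its
    center."""
    step = size / k_per_side
    result = []
    for row in range(k_per_side):
        for col in range(k_per_side):
            cx = x0 + (col + 0.5) * step  # reference point = sub-box center
            cy = y0 + (row + 0.5) * step
            result.append(((cx, cy),
                           (cx - step / 2, cy - step / 2, step, step)))
    return result

# The 2x2 split of a unit square reproduces the four reference points
# (0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75), with four
# sub-boxes of area 0.25 each.
sub_anchors = split_initial_anchor(0.0, 0.0, 1.0, 2)
```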
Optionally, multiple reference points can be determined directly, and at least one anchor box then determined centered on each of them. The corresponding processing is as follows: for each feature point, multiple reference points are determined based on preset position information of the reference points relative to the feature point, and at least one anchor box is determined centered on each reference point.
In an implementation, the positions of the reference points relative to the feature point can be preset, and the reference points then determined based on this position information.
The coordinates of the reference points can be determined by taking the center of each feature point as the origin, the horizontal direction as the x-axis (positive to the right), and the vertical direction as the y-axis (positive downward). For example, as shown in Fig. 7, if the coordinates of the reference points are determined as (0.25, -0.25), (-0.25, 0.25), (-0.25, -0.25), and (0.25, 0.25), the reference points surround the corresponding feature point, each at a horizontal distance of 0.25 and a vertical distance of 0.25 from it.
After the reference points are determined, the area information and ratio information of the anchor boxes are designed. For example, the anchor-box area may be preset to 0.25 and the aspect ratio to 1:1, so that each reference point corresponds to one anchor box, as shown in Fig. 7.
Anchor boxes with multiple different areas and aspect ratios can also be preset to increase the number of anchor boxes. For example, if the designed anchor-box areas are 1 and 2 and the aspect ratios are 1:2 and 2:1, each reference point corresponds to four anchor boxes: one of area 1 and ratio 1:2, one of area 1 and ratio 2:1, one of area 2 and ratio 1:2, and one of area 2 and ratio 2:1.
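The area and aspect-ratio enumeration just described can be sketched as follows. This is an illustrative helper, not code from the patent; aspect ratio is taken as length divided by width, i.e., horizontal over vertical extent, matching the definition given earlier in the text:

```python
import itertools
import math

def anchor_boxes_at(cx, cy, areas, aspect_ratios):
    """One anchor box per (area, aspect_ratio) pair, centered on the
    reference point (cx, cy). For ratio r = length / width,
    length = sqrt(area * r) and width = sqrt(area / r); boxes are
    returned as (x_min, y_min, length, width)."""
    boxes = []
    for area, ratio in itertools.product(areas, aspect_ratios):
        length = math.sqrt(area * ratio)
        width = math.sqrt(area / ratio)
        boxes.append((cx - length / 2, cy - width / 2, length, width))
    return boxes

# areas {1, 2} with ratios {1:2, 2:1} give four boxes per reference point
boxes = anchor_boxes_at(0.0, 0.0, [1.0, 2.0], [0.5, 2.0])
```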
In step 104, based on the determined anchor boxes, object detection is performed on the feature map to obtain the position information and category of each object contained in the target image.
In an implementation, the determined anchor boxes divide the feature map into multiple different feature-map regions, whose number equals the number of determined anchor boxes.
The feature-map region covered by each anchor box is examined in turn, and one detection result is obtained for each feature-map region. Each detection result includes the position information and category of any object that the feature-map region contains. All detection results are then aggregated and processed to finally obtain the position information and category of each object contained in the target image.
Optionally, the feature-map regions covered by the determined anchor boxes can be examined with different detection models. The corresponding processing is as follows: the feature-map region covered by each determined anchor box is input into detection models corresponding to different object categories, yielding, for each anchor box, a detection result from each detection model; based on the detection results of the different detection models for each anchor box, the position information and category of each object contained in the target image are determined.
Here, detection models of different categories are responsible for detecting objects of different categories; a detection model may be a classifier.
In an implementation, the feature-map regions of all determined anchor boxes are input, in turn, into the detection models of the various categories. Each detection model examines the feature-map region covered by each anchor box and produces one detection result per region; the result contains the position information of any object belonging to the category handled by that model, or empty position information if the region contains no such object. Each detection model then performs deduplication of position information across the detection results of all feature-map regions.
Finally, the position information and category of each object contained in the target image are obtained from the detection results of all the detection models.
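The deduplication step is not spelled out in the text; one common realization is greedy non-maximum suppression over same-category detections. A minimal sketch under that assumption (the function names and the detection-record layout are illustrative):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x_min, y_min, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / (aw * ah + bw * bh - inter)

def deduplicate(detections, iou_threshold=0.5):
    """Greedy duplicate removal: keep the highest-scoring detection and
    drop any later same-category detection that overlaps a kept one
    above the threshold (the classic non-maximum-suppression scheme)."""
    kept = []
    for det in sorted(detections, key=lambda d: -d["score"]):
        if all(d["category"] != det["category"]
               or iou(d["box"], det["box"]) < iou_threshold
               for d in kept):
            kept.append(det)
    return kept

dets = [
    {"box": (0, 0, 10, 10), "score": 0.9, "category": "face"},
    {"box": (1, 1, 10, 10), "score": 0.8, "category": "face"},  # overlaps first
    {"box": (50, 50, 10, 10), "score": 0.7, "category": "face"},
]
kept = deduplicate(dets)
```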
Optionally, after the position information and category of each object contained in the target image are determined, the positions and categories of the detected objects can be marked in the target image. The corresponding processing is as follows: the target image is displayed, and, based on the position information and category of each object, a label is added for each object in the target image.
In an implementation, in scenarios where the target image is displayed, some of the objects in the displayed target image can be labeled. The labels may mark both the position and the category of each object in the target image; when category labels are not needed, for example when labeling faces in a large group photo, only the positions of the objects may be marked.
Position labels may take the form of a rectangular box outlining the object in the target image; category labels may take the form of text next to the rectangular box indicating the category the object belongs to.
Taking as an example the application of marking offenders in an intelligent monitoring scenario: after each image frame of the surveillance video undergoes the above object detection processing, whenever a criminal suspect is detected in a frame, the suspect is outlined with a rectangular box in that frame, and the processed frame is then displayed.
Fig. 2 is a block diagram of an object detection apparatus according to an exemplary embodiment. Referring to Fig. 2, the apparatus includes a determination unit 201 and a detection unit 202.
The determination unit 201 is configured to determine a feature map of a target image, determine multiple feature points in the feature map, and, for each feature point, determine multiple reference points and determine at least one anchor box centered on each reference point.
The detection unit 202 is configured to perform object detection on the feature map based on the determined anchor boxes, obtaining the position information and category of each object contained in the target image.
Optionally, the determination unit 201 is configured to:
for each feature point, determine at least one initial anchor box; and
determine multiple reference points within each initial anchor box, and determine at least one anchor box centered on each reference point.
Optionally, the determination unit 201 is configured to:
determine a plurality of uniformly distributed reference points in each initial anchor, and, based on the reference points in each initial anchor, divide each initial anchor into a plurality of anchors, where the center point of each divided anchor is a reference point.
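The subdivision just described can be sketched as splitting an initial anchor into a uniform grid of sub-anchors, each centered on one reference point. The 2x2 grid below is an assumption for illustration; the patent only requires uniformly distributed reference points.

```python
# Sketch of the optional refinement: uniformly distribute reference points
# inside an initial anchor and split it into sub-anchors, each centered on
# one reference point. The grid size n is an assumed parameter.

def split_initial_anchor(x0, y0, x1, y1, n):
    """Split the initial anchor (x0, y0, x1, y1) into an n x n grid of
    sub-anchors; the center of each sub-anchor is one reference point."""
    w, h = (x1 - x0) / n, (y1 - y0) / n
    sub_anchors = []
    for r in range(n):
        for c in range(n):
            sx0, sy0 = x0 + c * w, y0 + r * h
            sub_anchors.append((sx0, sy0, sx0 + w, sy0 + h))
    return sub_anchors

subs = split_initial_anchor(0, 0, 32, 32, 2)
```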
Optionally, the determination unit 201 is configured to:
for each feature point, determine a plurality of reference points based on preset location information of the reference points relative to the feature point, and determine at least one anchor centered on each reference point.
Optionally, the determination unit 201 is configured to:
determine feature maps of the target image at a plurality of different scales.
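For the multi-scale variant, the feature-map sizes at each scale follow from the image size and a per-scale stride. The strides below are common backbone values assumed for illustration, not values taken from the patent.

```python
# Assumed-strides sketch: feature maps at several scales; finer maps
# (smaller stride) carry more feature points, which helps small objects.

def feature_map_sizes(image_w, image_h, strides=(8, 16, 32)):
    """Return the (w, h) of the feature map at each assumed stride."""
    return [(image_w // s, image_h // s) for s in strides]

sizes = feature_map_sizes(640, 480)
```

Anchor generation is then repeated per scale, with each scale's feature points and reference points mapped back to the image through that scale's stride.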
Optionally, the device further includes:
a marking unit 203, configured to display the target image and, based on the location information and class corresponding to each detection object, add a mark to each detection object in the target image.
Optionally, the detection unit 202 is configured to:
input the feature map region covered by each determined anchor into detection models corresponding to different detection object classes, to obtain the detection result of each anchor for each detection model;
determine, based on the detection results of each anchor for the different detection models, the location information and class corresponding to each detection object contained in the target image.
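The per-class detection step can be sketched as follows. The stand-in models, the confidence threshold, and the rule of keeping the highest-scoring class are assumptions introduced for this example; the patent does not specify how the per-model results are combined.

```python
# Hedged sketch: the feature-map region under each anchor is scored by one
# detection model per class; here the winning class (above an assumed
# threshold) is kept together with the anchor's location. The lambda models
# are placeholders, not the patent's trained networks.

def detect(anchors, region_of, models, threshold=0.5):
    """anchors: list of (cx, cy, w, h);
    region_of(anchor) -> the feature-map region covered by that anchor;
    models: {class_name: model}, model(region) -> confidence in [0, 1]."""
    results = []
    for anchor in anchors:
        region = region_of(anchor)
        # Run every class-specific detection model on this anchor's region.
        scores = {cls: m(region) for cls, m in models.items()}
        best_cls = max(scores, key=scores.get)
        if scores[best_cls] >= threshold:
            results.append((anchor, best_cls, scores[best_cls]))
    return results

models = {"face": lambda r: 0.9 if r == "face-like" else 0.1,
          "car": lambda r: 0.2}
out = detect([(8, 8, 16, 16)], lambda a: "face-like", models)
```

A production pipeline would typically follow this with non-maximum suppression over overlapping anchors, though that step is not part of the passage above.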
With regard to the device in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiment of the related method, and will not be elaborated here.
Fig. 3 is a structural block diagram of a terminal according to an exemplary embodiment. The terminal 300 may be a portable mobile terminal, such as a smart phone or a tablet computer. The terminal 300 may also be referred to by other names, such as user equipment or portable terminal.
In general, the terminal 300 includes a processor 301 and a memory 302.
The processor 301 may include one or more processing cores, such as a 4-core processor or a 9-core processor. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 301 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 302 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 302 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 302 is used to store at least one instruction, which is to be executed by the processor 301 to implement the object detection method provided in the present application.
In some embodiments, the terminal 300 optionally further includes a peripheral device interface 303 and at least one peripheral device. Specifically, the peripheral device includes at least one of a radio frequency circuit 304, a touch display screen 305, a camera component 306, an audio circuit 307, a positioning component 308, and a power supply 309.
The peripheral device interface 303 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 301 and the memory 302. In some embodiments, the processor 301, the memory 302, and the peripheral device interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302, and the peripheral device interface 303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 304 communicates with a communication network and other communication devices through electromagnetic signals. The radio frequency circuit 304 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 304 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 304 can communicate with other terminals through at least one wireless communication protocol. The wireless communication protocol includes but is not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 304 may also include NFC (Near Field Communication)-related circuits, which is not limited in this application.
The touch display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, videos, and any combination thereof. The touch display screen 305 also has the ability to collect touch signals on or above its surface. The touch signals may be input to the processor 301 as control signals for processing. The touch display screen 305 is used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one touch display screen 305, arranged on the front panel of the terminal 300; in other embodiments, there may be at least two touch display screens 305, respectively arranged on different surfaces of the terminal 300 or in a folded design; in still other embodiments, the touch display screen 305 may be a flexible display screen, arranged on a curved or folded surface of the terminal 300. The touch display screen 305 may even be given a non-rectangular irregular shape, namely a shaped screen. The touch display screen 305 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera component 306 is used to capture images or videos. Optionally, the camera component 306 includes a front camera and a rear camera. Generally, the front camera is used for video calls or selfies, and the rear camera is used for taking photos or videos. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, and a wide-angle camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions. In some embodiments, the camera component 306 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and may be used for light compensation at different color temperatures.
The audio circuit 307 is used to provide an audio interface between the user and the terminal 300. The audio circuit 307 may include a microphone and a speaker. The microphone is used to collect sound waves of the user and the environment, convert the sound waves into electrical signals, and input them to the processor 301 for processing, or input them to the radio frequency circuit 304 to realize voice communication. For the purposes of stereo collection or noise reduction, there may be multiple microphones, respectively arranged at different parts of the terminal 300. The microphone may also be an array microphone or an omnidirectional collection microphone. The speaker is used to convert electrical signals from the processor 301 or the radio frequency circuit 304 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans, for purposes such as ranging. In some embodiments, the audio circuit 307 may also include a headphone jack.
The positioning component 308 is used to locate the current geographic position of the terminal 300 to implement navigation or LBS (Location Based Service). The positioning component 308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the GLONASS system of Russia.
The power supply 309 is used to supply power to the various components in the terminal 300. The power supply 309 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 309 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is a battery charged through a wired line, and a wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charging technology.
In some embodiments, the terminal 300 further includes one or more sensors 310. The one or more sensors 310 include but are not limited to: an acceleration sensor 311, a gyroscope sensor 312, a pressure sensor 313, a fingerprint sensor 314, an optical sensor 315, and a proximity sensor 316.
The acceleration sensor 311 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 300. For example, the acceleration sensor 311 can be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 301 can control the touch display screen 305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 311. The acceleration sensor 311 can also be used to collect motion data for games or of the user.
The gyroscope sensor 312 can detect the body direction and rotation angle of the terminal 300, and can cooperate with the acceleration sensor 311 to collect the user's 3D actions on the terminal 300. According to the data collected by the gyroscope sensor 312, the processor 301 can implement the following functions: motion sensing (for example, changing the UI according to a tilt operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 313 may be arranged on a side frame of the terminal 300 and/or a lower layer of the touch display screen 305. When the pressure sensor 313 is arranged on the side frame of the terminal 300, it can detect the user's grip signal on the terminal 300, and left/right-hand recognition or quick operations can be performed according to the grip signal. When the pressure sensor 313 is arranged on the lower layer of the touch display screen 305, operability controls on the UI can be controlled according to the user's pressure operation on the touch display screen 305. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 314 is used to collect the user's fingerprint and identify the user according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 301 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 314 may be arranged on the front, back, or side of the terminal 300. When a physical button or a manufacturer logo is provided on the terminal 300, the fingerprint sensor 314 may be integrated with the physical button or the manufacturer logo.
The optical sensor 315 is used to collect ambient light intensity. In one embodiment, the processor 301 can control the display brightness of the touch display screen 305 according to the ambient light intensity collected by the optical sensor 315. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 305 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 305 is decreased. In another embodiment, the processor 301 can also dynamically adjust the shooting parameters of the camera component 306 according to the ambient light intensity collected by the optical sensor 315.
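The brightness-control behavior just described can be sketched as a simple mapping from ambient light to display brightness. The linear mapping, its bounds, and the lux scale are assumptions of this example; the passage above only states that brightness rises and falls with ambient light.

```python
# Assumed linear mapping: ambient light intensity -> brightness fraction.
# The max_lux scale and the [lo, hi] brightness bounds are illustrative.

def display_brightness(ambient_lux, max_lux=1000, lo=0.2, hi=1.0):
    """Return a brightness fraction in [lo, hi] that grows with ambient light."""
    frac = min(max(ambient_lux / max_lux, 0.0), 1.0)
    return lo + frac * (hi - lo)

b_dim = display_brightness(0)      # dark room -> minimum brightness
b_full = display_brightness(2000)  # bright sunlight -> maximum brightness
```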
The proximity sensor 316, also called a distance sensor, is generally arranged on the front of the terminal 300. The proximity sensor 316 is used to collect the distance between the user and the front of the terminal 300. In one embodiment, when the proximity sensor 316 detects that the distance between the user and the front of the terminal 300 gradually decreases, the processor 301 controls the touch display screen 305 to switch from a screen-on state to a screen-off state; when the proximity sensor 316 detects that the distance between the user and the front of the terminal 300 gradually increases, the processor 301 controls the touch display screen 305 to switch from the screen-off state to the screen-on state.
Those skilled in the art will understand that the structure shown in Fig. 3 does not constitute a limitation on the terminal 300, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement.
Fig. 4 is a structural schematic diagram of a computer device according to an exemplary embodiment. The computer device may be the server in the above embodiment. The computer device 400 may vary greatly in configuration or performance, and may include one or more processors (central processing units, CPUs) 401 and one or more memories 402, where the memory 402 stores at least one instruction, and the at least one instruction is loaded and executed by the processor 401 to implement the above object detection method.
In an embodiment of the present disclosure, a non-transitory computer-readable storage medium is further provided. When the instructions in the storage medium are executed by a processor of a computer device, the computer device is enabled to perform the above object detection method.
In an embodiment of the present disclosure, an application program is further provided, including one or more instructions that can be executed by a processor of a server to perform the above object detection method.
Those skilled in the art will readily conceive of other embodiments of the present invention after considering the specification and practicing the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the present invention that follow its general principles and include common knowledge or conventional technical means in the art not disclosed in the present disclosure. The specification and examples are to be regarded as exemplary only, and the true scope and spirit of the present invention are indicated by the following claims.
It should be understood that the present invention is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present invention is limited only by the appended claims.
Claims (10)
1. An object detection method, characterized by comprising:
determining a feature map of a target image;
determining a plurality of feature points in the feature map;
for each feature point, determining a plurality of reference points, and determining at least one anchor centered on each reference point;
performing object detection on the feature map based on the determined anchors, to obtain location information and a class corresponding to each detection object contained in the target image.
2. The method according to claim 1, characterized in that the determining, for each feature point, a plurality of reference points and determining at least one anchor centered on each reference point comprises:
for each feature point, determining at least one initial anchor;
determining a plurality of reference points in each initial anchor, and determining at least one anchor centered on each reference point.
3. The method according to claim 2, characterized in that the determining a plurality of reference points in each initial anchor and determining at least one anchor centered on each reference point comprises:
determining a plurality of uniformly distributed reference points in each initial anchor, and, based on the reference points in each initial anchor, dividing each initial anchor into a plurality of anchors, where the center point of each divided anchor is a reference point.
4. The method according to claim 1, characterized in that the determining, for each feature point, a plurality of reference points and determining at least one anchor centered on each reference point comprises:
for each feature point, determining a plurality of reference points based on preset location information of the reference points relative to the feature point, and determining at least one anchor centered on each reference point.
5. The method according to claim 1, characterized in that the determining a feature map of a target image comprises:
determining feature maps of the target image at a plurality of different scales.
6. The method according to claim 1, characterized in that after the performing object detection on the feature map based on the determined anchors to obtain the location information and class corresponding to each detection object contained in the target image, the method further comprises:
displaying the target image, and, based on the location information and class corresponding to each detection object, adding a mark to each detection object in the target image.
7. The method according to claim 1, characterized in that the performing object detection on the feature map based on the determined anchors to obtain the location information and class corresponding to each detection object contained in the target image comprises:
inputting the feature map region covered by each determined anchor into detection models corresponding to different detection object classes, to obtain the detection result of each anchor for each detection model;
determining, based on the detection results of each anchor for the different detection models, the location information and class corresponding to each detection object contained in the target image.
8. An object detection device, characterized by comprising:
a determination unit, configured to determine a feature map of a target image, determine a plurality of feature points in the feature map, and, for each feature point, determine a plurality of reference points and determine at least one anchor centered on each reference point;
a detection unit, configured to perform object detection on the feature map based on the determined anchors, to obtain location information and a class corresponding to each detection object contained in the target image.
9. A computer device, characterized by comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
perform the method according to any one of claims 1-7.
10. A non-transitory computer-readable storage medium, characterized in that, when the instructions in the storage medium are executed by a processor of a computer device, the computer device is enabled to perform the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910137428.3A CN109886208B (en) | 2019-02-25 | 2019-02-25 | Object detection method and device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109886208A true CN109886208A (en) | 2019-06-14 |
CN109886208B CN109886208B (en) | 2020-12-18 |
Family
ID=66929163
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110866478A (en) * | 2019-11-06 | 2020-03-06 | 支付宝(杭州)信息技术有限公司 | Method, device and equipment for identifying object in image |
CN111476306A (en) * | 2020-04-10 | 2020-07-31 | 腾讯科技(深圳)有限公司 | Object detection method, device, equipment and storage medium based on artificial intelligence |
CN112199987A (en) * | 2020-08-26 | 2021-01-08 | 北京贝思科技术有限公司 | Multi-algorithm combined configuration strategy method in single area, image processing device and electronic equipment |
CN113076955A (en) * | 2021-04-14 | 2021-07-06 | 上海云从企业发展有限公司 | Target detection method, system, computer equipment and machine readable medium |
CN114596706A (en) * | 2022-03-15 | 2022-06-07 | 阿波罗智联(北京)科技有限公司 | Detection method and device of roadside sensing system, electronic equipment and roadside equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106529527A (en) * | 2016-09-23 | 2017-03-22 | 北京市商汤科技开发有限公司 | Object detection method and device, data processing deice, and electronic equipment |
CN107316001A (en) * | 2017-05-31 | 2017-11-03 | 天津大学 | Small and intensive method for traffic sign detection in a kind of automatic Pilot scene |
CN108304808A (en) * | 2018-02-06 | 2018-07-20 | 广东顺德西安交通大学研究院 | A kind of monitor video method for checking object based on space time information Yu depth network |
CN108681718A (en) * | 2018-05-20 | 2018-10-19 | 北京工业大学 | A kind of accurate detection recognition method of unmanned plane low target |
Non-Patent Citations (3)
Title |
---|
LIU W et al.: "SSD: Single Shot MultiBox Detector", European Conference on Computer Vision |
WENG Xin: "Research on the Setting of Region Candidate Boxes in the SSD Object Detection Network", China Master's Theses Full-text Database, Information Science and Technology |
CHEN Kang: "Research on Object Detection Algorithms for Automobile Driving Scenes Based on Deep Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||