CN110443097A - Method and system for optimizing real-time video object extraction

Method and system for optimizing real-time video object extraction

Info

Publication number
CN110443097A
Authority
CN
China
Prior art keywords
target area
image frame
area
minimum
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810413067.6A
Other languages
Chinese (zh)
Inventor
刘畅
赵潇
韩雪
周一青
石晶林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongke Super Media Information Technology Co Ltd
Original Assignee
Beijing Zhongke Super Media Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongke Super Media Information Technology Co Ltd
Priority to CN201810413067.6A
Publication of CN110443097A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/255: Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present invention relates to a method for optimizing real-time video object extraction. The optimization method comprises: traversing and searching a current image frame for target regions of interest; obtaining the minimum bounding regular figure of each target region of interest; determining target regions whose minimum bounding regular figure has an area smaller than a preset first threshold to be noise and deleting them, thereby obtaining the denoised target regions; and, if a target region is found to be entirely missing from the current image frame, correcting the first threshold and re-executing the noise screening and deletion.

Description

Method and system for optimizing real-time video object extraction
Technical field
The present invention relates to the technical field of video processing, and in particular to a method and system for optimizing real-time video object extraction.
Background art
With the progress of science and technology, smart cities are developing rapidly and artificial intelligence has been applied across many industries. In the field of video processing, for example, the use of computer vision for image processing and target analysis has become a clear trend. This applies in particular to target recognition for video surveillance systems, which must operate accurately and stably even in complex environments while still guaranteeing real-time performance.
In the prior art, the main methods for target recognition in video are the optical flow method, the frame difference method and the background difference method. The optical flow method computes the optical flow field of the image frames and identifies regions with identical optical flow vectors as a single moving target; its computational cost is high, its real-time performance is poor, and the distribution of the optical flow field is easily affected by environmental factors. The frame difference method computes the moving target region of the current image frame from the difference between two frames in the video frame sequence; it is one of the most common moving-target detection and segmentation methods, but since it cannot detect all pixels inside a moving target region, it is only suitable for detecting simple motion. The background difference method obtains the foreground target from the difference between the current image frame and a background image frame; it is also a common target detection method, but when the background environment is complex it increases the difficulty of detecting and extracting targets.
Therefore, a real-time video object extraction method and system that offers good real-time performance, accurate recognition and suitability for complex environments is currently needed.
Summary of the invention
The present invention provides a method for optimizing real-time video object extraction, comprising:
step 1) traversing and searching a current image frame for target regions of interest;
step 2) obtaining the minimum bounding regular figure of each target region of interest;
step 3) determining target regions whose minimum bounding regular figure has an area smaller than a preset first threshold to be noise and deleting them, thereby obtaining the denoised target regions;
step 4) if a target region is found to be entirely missing from the current image frame, correcting the first threshold and re-executing step 3).
Preferably, step 4) further comprises:
step 41) comparing the current image frame with an adjacent image frame and analysing whether a target region is missing from the current image frame;
step 42) if a missing target region exists, identifying it and recording its location and size in the adjacent image frame;
step 43) adjusting the first threshold using the location and size obtained in step 42), and re-executing step 3) to obtain, at that position in the current image frame, a target region whose size meets the optimization criterion.
Preferably, step 41) further comprises: comparing the number of target regions in the current image frame with that in the adjacent image frame, and, if the number of target regions in the current image frame is smaller than the number in the adjacent image frame, determining that a target region is missing from the current image frame.
Preferably, step 43) further comprises:
step 431) reducing the first threshold, re-executing step 3), and obtaining the denoised target regions;
step 432) identifying the target region located, in the current image frame, at the position obtained in step 42) as a newly added target region, comparing the area of the newly added target region with the area obtained in step 42), and further adjusting the first threshold accordingly.
Preferably, step 432) further comprises:
if the area of the newly added target region is greater than the area obtained in step 42), the threshold th1 reset in step 431) is too small and needs to be increased;
if the area of the newly added target region is smaller than the area obtained in step 42), or the area of the newly added target region is zero, the threshold th1 reset in step 431) is too large and needs to be reduced again.
Preferably, the optimization method further comprises:
step 5) judging whether a target region in the current image frame is partially lost and, if so, restoring the lost part, which specifically comprises:
step 51) if the ratio of the area of a minimum bounding regular figure in the current image frame to the area of the corresponding minimum bounding regular figure in the adjacent image frame is smaller than a preset third threshold, determining that the target region of interest corresponding to that minimum bounding regular figure has mutated;
step 52) comparing all minimum bounding regular figures in the current image frame and recording the number of comparisons and the number of mutations;
step 53) if the ratio of the number of mutations to the number of comparisons is not less than a preset fourth threshold, determining that a target region of interest in the current image frame is partially lost;
step 54) assigning, to the current image frame, the location information of the target region in the adjacent image frame that corresponds to the lost minimum bounding regular figure.
Preferably, the optimization method further comprises:
step 6) obtaining the location information of all current minimum bounding regular figures;
step 7) judging whether the current minimum bounding regular figure overlaps other minimum bounding regular figures and, if so, merging the corresponding minimum bounding regular figures;
step 8) judging whether an unreasonable segmentation exists between adjacent minimum bounding regular figures and, if so, re-clustering the corresponding minimum bounding regular figures.
Preferably, step 8) further comprises: determining, as an unreasonable segmentation, any adjacent minimum bounding regular figure whose distance to the target position within the current minimum bounding regular figure is smaller than a preset second threshold.
Preferably, step 1) further comprises pre-processing the current image frame.
According to another aspect of the present invention, a system for optimizing real-time video object extraction is also provided, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when running the program, executes the steps described above.
Compared with the prior art, the present invention achieves the following advantageous effects: the method and system for optimizing real-time video object extraction provided by the present invention use the visual attention principle of the HVS to optimize the target regions of interest obtained with the background difference method; they not only effectively remove the noise and repeated clustering in the target regions of interest, but also avoid problems such as unreasonable segmentation and target loss. With the optimization method and system provided by the present invention, target extraction becomes more complete and more accurate while still meeting real-time requirements, making them especially suitable for application environments with complex backgrounds.
Brief description of the drawings
Fig. 1 is a flow chart of the method for optimizing real-time video object extraction provided by the present invention.
Fig. 2 is a schematic diagram of the method provided by the present invention for handling the complete loss of a target region.
Fig. 3 is a schematic diagram of the method provided by the present invention for handling the partial loss of a target region.
Detailed description of the embodiments
In order to make the purpose, technical solution and advantages of the present invention clearer, the method for optimizing real-time video object extraction provided in the embodiments of the present invention is further described below with reference to the accompanying drawings.
The human visual system (HVS) carries out a highly complex process, and its mechanism for information processing is commonly referred to as visual attention. In a cluttered external environment, human vision can quickly locate important target regions and analyse them carefully, while other regions receive only a coarse analysis or are even ignored; this mechanism embodies the HVS's visual characteristic of actively selecting the content of attention and focusing on it. Through extensive research, the inventors believe that applying this information-processing mechanism to the field of video processing can effectively improve the ability to recognise targets.
Under normal conditions, video processing tasks such as classification, tracking and behaviour understanding of targets in video only involve the moving target regions in the image frames. For the subsequent processing of the video it is therefore particularly important to correctly detect and segment the moving targets from the background, especially when the background environment is complex, which increases the difficulty of target detection and extraction. By analysing and comparing existing video object recognition methods and combining them with the characteristics of the video surveillance scene (the scene background is relatively fixed and the motion of the target regions is relatively simple), the inventors found that the background difference method is computationally simple and detects well, but in complex backgrounds, for example with intense illumination or an uneven background, foreground target detection suffers from false detections and repeated detections.
To solve the above problems, the inventors, after research, propose a real-time video processing method based on the HVS information-processing principle. The method analyses the regions of interest obtained with the background difference method, judges the erroneous parts of those regions according to their location information, and optimizes them to obtain more complete and accurate target regions.
The method for optimizing real-time video object extraction provided by the present invention optimizes the target regions of interest in the video frames that have been preliminarily determined by the background difference method.
Let Bn(i, j) be the background image and fn(i, j) the current frame image. The two images are differenced, and the detection rule is as follows:
if |fn(i, j) - Bn(i, j)| > T, the pixel (i, j) is assigned to Mn(i, j); otherwise it is assigned to Sn(i, j),
where T is the detection threshold, Sn(i, j) denotes the region determined to be background after differencing, and Mn(i, j) denotes the region determined after differencing to be the target region of interest undergoing motion change.
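For illustration only, the pixel-wise detection rule above can be sketched in Python with OpenCV; the function name, the grayscale-input assumption and the default value of T are assumptions of this sketch, not part of the original disclosure.

```python
import cv2


def background_difference(frame_n, background_n, T=30):
    """Pixel-wise background subtraction as described above.

    frame_n, background_n: grayscale images of equal size (uint8).
    T: detection threshold. Pixels with |f_n - B_n| > T are marked as the
    motion region of interest M_n; the rest belong to the background S_n.
    """
    diff = cv2.absdiff(frame_n, background_n)            # |f_n(i,j) - B_n(i,j)|
    _, mask = cv2.threshold(diff, T, 255, cv2.THRESH_BINARY)
    return mask  # 255 where the pixel belongs to M_n, 0 where it belongs to S_n
```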
Through many experiments the inventors found that, when the background of the video is complex, the regions of interest obtained with the background difference method as above contain partially mis-detected regions; there are also cases of repeated segmentation among the extracted regions of interest; and the regions of interest may be discontinuous, wrongly segmented or unreasonably clustered, which leaves a large amount of redundant information in the target recognition result and increases the computational load of subsequent algorithms.
For this reason, the inventors propose a method for optimizing the regions of interest obtained with the background difference method, which is described in detail below.
Fig. 1 is the flow chart of the real-time target extraction optimization method provided by a preferred embodiment of the present invention. As shown in Fig. 1, in one embodiment of the invention the method specifically comprises the following steps:
S10, obtaining the minimum bounding regular figure
The regions of interest are obtained with the background difference method, i.e. each pixel in the current image frame is judged to belong either to the background or to a moving target, and the pixels belonging to moving targets are clustered. The binary regions in the clustering result are searched by traversal, the minimum bounding regular figure (for example a rectangle) of each clustered region is obtained, and at the same time the location information of the corresponding region, including its position coordinates and size, is recorded.
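A minimal sketch of step S10, assuming OpenCV 4.x and a binary foreground mask as input; the function name is illustrative.

```python
import cv2


def extract_min_bounding_rects(binary_mask):
    """Traverse the clustered binary regions and return the minimum bounding
    rectangle (x, y, w, h) of each region, as in step S10."""
    # OpenCV 4.x return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```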
S20, screening and deleting noise in the target regions
According to the visual attention principle of the HVS, the human eye has a limited ability to resolve luminance: it can only distinguish target objects with a certain luminance difference, and smaller luminance differences are perceived as identical. Moreover, humans often interpret a target object by obtaining its concrete shape through edge information; only when the object's size, the light and dark of its details and the duration of its presentation to the eye are all suitable can its details be perceived clearly.
Based on the above principles, the inventors consider that when a rather small region appears in the target detection result, it can be determined to be noise rather than a region of interest in the image scene. Therefore, when the background environment is complex, if noise caused by environmental factors (such as illumination or weather) appears among the regions of interest obtained above, it does not belong to the target regions of interest and is screened out and deleted.
Assume that the width and height of a minimum bounding rectangle obtained in step S10 are w and h respectively. By analysing the characteristics of the noise in the target regions (noise is generally scattered, so regions with a small rectangle area can be determined to be noise regions), a threshold th1 is set, the location information of all obtained target regions is traversed, and the following rule one is applied:
w < th1; or h < th1.
If a minimum bounding rectangle satisfies rule one above, the region can be determined to be noise caused by the complex background rather than a target region of interest in the current image frame, and it can be deleted.
By applying rule one, the mis-detected regions among the regions of interest obtained with the background difference method can be effectively removed, so that the recognition result of the background difference method is optimized.
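Rule one can be applied to the list of bounding rectangles as in the following sketch; the (x, y, w, h) tuple layout is an assumption carried over from the previous sketch.

```python
def remove_noise_regions(rects, th1):
    """Rule one: discard candidate regions whose minimum bounding rectangle
    is narrower or shorter than th1 (treated as noise)."""
    return [(x, y, w, h) for (x, y, w, h) in rects
            if w >= th1 and h >= th1]
```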
S30, checking whether the denoising is reasonable and correcting the threshold parameter th1
When noise screening is performed, a target region that is too small may be mistaken for noise and deleted, causing a target to be missing from the current image frame. To avoid this error, the threshold set above needs to be corrected so that a suitable value of th1 is selected.
For the threshold correction, assume the current image frame is n and compare it with several adjacent image frames (for example n-1, n-2, n-3, n-4). Fig. 2 is a schematic diagram of the threshold correction method provided by the present invention. As shown in Fig. 2, if a target region exists in the four adjacent frames but not in the current image frame, the target is considered to be entirely missing from the current image frame, and the threshold th1 then needs to be corrected adaptively. The specific correction method is as follows:
S301, compare the current image frame n with the adjacent image frames n-1, n-2, n-3 and n-4, and analyse from the number of target regions whether a target is missing from the current image frame; for example, if the number of target regions present in the adjacent frames n-1, n-2, n-3 and n-4 is greater than the number of targets in the current image frame, it can be determined that a target is missing from the current image frame;
S302, by comparing the location information of the target regions, find the missing target region and record its size and location information in the adjacent image frame;
S303, adjust the denoising threshold th1 of the current image frame (the first correction is a reduction), for example by an amplitude set according to a step size, and re-execute step S20 to obtain the denoised current image frame containing multiple target regions;
S304, analyse the current image frame obtained in step S303 and compare the size of the target region at the location information of the expected target region obtained in step S302:
if the area of the target region in the current image frame is greater than the area of the target region in the adjacent frame, the threshold th1 reset in step S303 is too small and needs to be increased, for example by a preset step-size amplitude;
if the area of the target region in the current image frame is smaller than the area of the target region in the adjacent frame, or no target region exists at that location (i.e. the area is 0), the threshold th1 reset in step S303 is too large and needs to be reduced further, for example by a preset step-size amplitude;
after the threshold adjustment is completed, step S303 is repeated and the above comparison is performed again, until a suitable threshold th1 that meets the optimization criterion is obtained.
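A sketch of the adaptive correction loop of steps S301 to S304. The run_denoise callback, the area_at helper, the tolerance and the iteration cap are assumptions standing in for details the text leaves open; the increase/decrease logic follows the comparison stated above.

```python
def area_at(rects, pos):
    """Assumed helper: area of the first rectangle that contains pos, else 0."""
    px, py = pos
    for (x, y, w, h) in rects:
        if x <= px <= x + w and y <= py <= y + h:
            return w * h
    return 0


def correct_threshold(th1, step, run_denoise, missing_pos, ref_area,
                      tol=0.2, max_iters=20):
    """Adaptive correction of the noise threshold th1 (steps S301-S304).

    run_denoise(th1) is assumed to re-run the rule-one screening of step S20
    on the current frame and return the surviving rectangles; missing_pos and
    ref_area are the position and area of the missing target recorded from
    the adjacent frame (step S302).
    """
    th1 -= step                                   # S303: first correction lowers th1
    for _ in range(max_iters):
        rects = run_denoise(th1)                  # re-execute step S20
        area = area_at(rects, missing_pos)        # S304: size at the recorded position
        if abs(area - ref_area) <= tol * ref_area:
            break                                 # optimization criterion met
        if area > ref_area:
            th1 += step                           # threshold was set too small
        else:                                     # smaller area, or no region at all
            th1 -= step                           # threshold still too large
    return th1
```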
In one embodiment of the invention, in order to reduce the influence of external factors such as illumination and noise on the target detection result, the obtained background image and foreground image can be pre-processed before the differencing of step S10 is performed. This pre-processing reduces useless information in the images as much as possible, restores the true information, and lessens the influence of illumination, noise and other external factors on the detection result. It also includes applying morphological processing to the binary image obtained by differencing the current frame and the background frame, eliminating isolated points in the binary image and improving the accuracy of the target extraction result.
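The morphological clean-up mentioned above might look as follows; the kernel shape and size are assumptions of this sketch.

```python
import cv2


def clean_binary_mask(mask, ksize=3):
    """Morphological post-processing of the differenced binary image to
    remove isolated points, as described above."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # drop isolated specks
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return closed
```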
In one embodiment of the invention, the optimization criterion mentioned above can be a preset error range; for example, the deviation between the centre positions of regions determined to be the same target in different image frames should fall within a certain numerical range.
In one embodiment of the invention, when selecting the image frames adjacent to the current image frame, besides the frames n-1, n-2, n-3 and n-4 preceding the current image frame on the timeline, adjacent image frames following the current image frame on the timeline, for example n+1 and n+2, can also be selected if the video data is not acquired in real time, e.g. a recorded video.
In one embodiment of the invention, the inventors found through many experiments that the regions of interest obtained with the background difference method can also overlap: either two target regions overlap completely, i.e. one region contains the other, or two target regions overlap partially, i.e. part of one region belongs to the other. For these two situations, the following optimizations can be performed on the obtained regions of interest.
First, the regions of interest are located with a clustering algorithm. Taking the k-means clustering algorithm as an example, the centre point of each target region forms the point set, and several target centre points are randomly selected as the initial cluster centres μ1, μ2, ..., μk ∈ R^n.
Second, for each cluster centre the distances from all remaining target-region centre points to that centre are calculated, and the samples nearest to that centre are grouped into one class, specifically:
c(i) = argmin_j ||x(i) - μj||²,
where x(i) is the centre point of the current target region.
Then the centre of each class obtained above is recalculated:
μj = Σ_{i=1..m} 1{c(i) = j} · x(i) / Σ_{i=1..m} 1{c(i) = j},
where m denotes the number of preliminarily obtained targets and 1{·} denotes the truth-value indicator.
Finally, the above process is repeated until convergence. The regions of interest obtained in the current scene are thus divided into several clusters, for example L clusters; L is then the number of regions of interest in the current image, and the position coordinates of each region of interest are obtained at the same time.
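A minimal NumPy sketch of the k-means positioning described above, following the assignment and centre-update formulas; the initialisation, iteration count and convergence test are assumptions.

```python
import numpy as np


def kmeans_centers(points, k, iters=50, seed=0):
    """Minimal k-means over the target-region centre points.

    points: (m, 2) array of centre coordinates; k: number of clusters.
    Returns the assignment c (length m) and the centres mu (k, 2)."""
    rng = np.random.default_rng(seed)
    mu = points[rng.choice(len(points), size=k, replace=False)]  # initial centres
    c = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # assignment step: c(i) = argmin_j ||x(i) - mu_j||^2
        d = np.linalg.norm(points[:, None, :] - mu[None, :, :], axis=2)
        c = d.argmin(axis=1)
        # update step: mu_j = mean of the points assigned to cluster j
        new_mu = np.array([points[c == j].mean(axis=0) if np.any(c == j) else mu[j]
                           for j in range(k)])
        if np.allclose(new_mu, mu):
            break                                   # converged
        mu = new_mu
    return c, mu
```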
After the above positioning of the regions of interest is completed, assume that the i-th target has coordinates xi and yi and width and height wi and hi, and that the j-th target has coordinates xj and yj and width and height wj and hj.
The location information of the two target regions is traversed and the following rule two is applied:
xj ≥ xi; and yj ≥ yi; and xj + wj ≤ xi + wi; and yj + hj ≤ yi + hi.
If the i-th and j-th targets satisfy rule two above, it can be determined that target j is completely contained by target i, i.e. a complete overlap has occurred among the regions of interest obtained with the background difference method, and target j can be deleted.
By applying rule two, the completely overlapping regions among the regions of interest obtained with the background difference method can be effectively removed, so that the recognition result of the background difference method is optimized.
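Rule two can be applied as in the following sketch, which drops any rectangle fully contained in another (keeping the earlier one when two rectangles coincide); the containment test is reconstructed from the description and is an assumption of this sketch.

```python
def remove_contained(rects):
    """Rule two: delete target j when its bounding rectangle lies entirely
    inside another target i's rectangle. rects is a list of (x, y, w, h)."""
    def contains(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax <= bx and ay <= by and ax + aw >= bx + bw and ay + ah >= by + bh

    kept = []
    for i, r in enumerate(rects):
        dominated = False
        for j, other in enumerate(rects):
            if i == j:
                continue
            # r is dropped when 'other' contains it; exact duplicates keep only
            # the earlier-indexed rectangle
            if contains(other, r) and not (contains(r, other) and i < j):
                dominated = True
                break
        if not dominated:
            kept.append(r)
    return kept
```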
In addition, when the location information of the two target regions is traversed, a further rule three can be applied to test whether the two regions partially intersect.
If the i-th and j-th targets satisfy rule three, it can be determined that the region of target j has a partial intersection with the region of target i, i.e. a partial overlap exists among the regions of interest obtained with the background difference method; the location information of the two target regions can then be analysed further, and the two regions are re-clustered or directly merged.
By applying rule three, the partially overlapping regions among the regions of interest obtained with the background difference method can be effectively removed, so that the recognition result of the background difference method is optimized.
In one embodiment of the invention, in order to resolve the unreasonable segmentation appearing in the regions of interest obtained with the background difference method, the inventors found through experiments that the characteristics of the HVS visual attention mechanism can be applied to target recognition.
In general, the spatial and frequency characteristics of the visual system are complementary; for moving images there is a trade-off between temporal resolution and spatial resolution. For example, when a moving object passes quickly in front of the human eye it is difficult to see its details and only a rough outline is visible, i.e. the resolving power of the human eye is related to whether the eye can "track" the moving object. Accordingly, since the movement of a target should be continuous, the inventors consider that a large change between adjacent image frames can be regarded as a mutation.
According to this principle, the following optimization can be performed on the regions of interest obtained with the background difference method.
After the location information of the regions of interest has been obtained with the positioning method of the above embodiment, assume that the i-th target has coordinates xi and yi and width and height wi and hi, and that the j-th target has coordinates xj and yj and width and height wj and hj. By analysing the location information of wrongly segmented targets in the target regions, a threshold th2 is set and the following rule four is applied:
xi + wi - xj ≤ th2; or yi + hi - yj ≤ th2.
If the i-th and j-th targets satisfy rule four above, it can be determined that the region of target j and the region of target i have been unreasonably segmented; the location information of the two regions can therefore be analysed and the unreasonable part re-clustered, finally achieving a reasonable segmentation of the region of interest.
By applying rule four, the wrongly segmented regions among the regions of interest obtained with the background difference method can be effectively removed, so that the recognition result of the background difference method is optimized.
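A sketch of the re-clustering driven by rule four. Here the rule is read as "merge two rectangles whose separation along both axes is at most th2"; that reading, and the iterative pairwise merge, are assumptions of this sketch rather than the patent's exact procedure.

```python
def merge_unreasonable_splits(rects, th2):
    """Merge rectangles that are separated by at most th2 into their joint
    bounding rectangle, treating them as one unreasonably split target."""
    def close(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        gap_x = max(ax, bx) - min(ax + aw, bx + bw)   # > 0 only if separated in x
        gap_y = max(ay, by) - min(ay + ah, by + bh)   # > 0 only if separated in y
        return gap_x <= th2 and gap_y <= th2

    rects = list(rects)
    merged = True
    while merged:                                      # repeat until no pair merges
        merged = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                if close(rects[i], rects[j]):
                    xi, yi, wi, hi = rects[i]
                    xj, yj, wj, hj = rects[j]
                    x, y = min(xi, xj), min(yi, yj)
                    w = max(xi + wi, xj + wj) - x
                    h = max(yi + hi, yj + hj) - y
                    rects[i] = (x, y, w, h)            # joint bounding rectangle
                    del rects[j]
                    merged = True
                    break
            if merged:
                break
    return rects
```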
In one embodiment of the invention, in order to resolve the jump problem appearing in the regions of interest obtained with the background difference method, the inventors found through experiments that the characteristics of the HVS visual attention mechanism can be applied to target recognition. Generally speaking, when observing an image frame the human eye concentrates its attention on the dense regions of the scene, and whether humans can accurately identify the target regions in a scene is closely related to whether the eye can track them. Combined with the characteristics of video surveillance scenes, a target region moving in the scene generally has continuity, i.e. the position of the target region should be continuous between adjacent frames, without large jumps.
According to the above principle, the inventors found through many experiments that the jumps of target regions manifest themselves in two situations: one is that excessive partial loss of a target region causes the target region to change between adjacent frames; the other is that complete loss of a target region causes the target region to jump between adjacent frames, i.e. the detected regions of interest become discontinuous. To solve these problems, the inventors propose the following method.
First, the relationship between the areas of the regions of interest in adjacent image frames is used to judge whether a target-region jump exists in the current image frame.
After the location information of the regions of interest has been obtained with the positioning method of the above embodiment, assume that the i-th target in the current image frame n has coordinates xi and yi and width and height wi and hi, and that the corresponding i-th target in an image frame adjacent to the current image frame has coordinates xi' and yi' and width and height wi' and hi'. A threshold th3 is set according to the law of target variation, and the following rule five is applied:
(wi · hi) / (wi' · hi') < th3; or (wi' · hi') / (wi · hi) < th3.
If the i-th target in the current image frame and the i-th target in the adjacent frame satisfy rule five above, it can be determined that the i-th target in the current frame has mutated.
Fig. 3 is a schematic diagram of the method provided by the present invention for handling the partial loss of a target region. As shown in Fig. 3, according to rule five every target in the current frame n is compared with the corresponding target in multiple adjacent frames, and the number of mutations p and the total number of comparisons m are recorded. At the same time a suitable threshold th4 is set according to the law of target variation, and the following rule six is applied:
p / m ≥ th4.
If the comparison result of the current image frame and the adjacent frames satisfies rule six above, it can be determined that a target region in the current image frame is partially missing.
Second, after the above judgement is completed, if the result shows that a target in the current image frame is partially missing, the location information of the corresponding target region obtained in the previous frame of the current image frame can be used to assign values to the detection result of the current frame, so as to guarantee that the detected target regions are coherent and continuous.
By applying rule five and rule six, partial loss in the regions of interest obtained with the background difference method can be effectively avoided, so that the recognition result of the background difference method is optimized.
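Rules five and six, together with the assignment from the adjacent frame, might be combined as in the following sketch; pairing targets by list index and the default thresholds th3 and th4 are illustrative assumptions.

```python
def repair_partial_loss(curr_rects, prev_rects, th3=0.5, th4=0.5):
    """Rule five: a target whose area ratio against the corresponding rectangle
    in an adjacent frame falls below th3 counts as mutated. Rule six: if the
    fraction of mutated targets reaches th4, the mutated targets are assigned
    the adjacent frame's locations."""
    pairs = list(zip(curr_rects, prev_rects))
    if not pairs:
        return list(curr_rects)

    def area_ratio(a, b):
        area_a, area_b = a[2] * a[3], b[2] * b[3]
        if max(area_a, area_b) == 0:
            return 1.0                      # two empty rectangles: no mutation
        return min(area_a, area_b) / max(area_a, area_b)

    mutated = [area_ratio(c, p) < th3 for c, p in pairs]     # rule five
    if sum(mutated) / len(pairs) >= th4:                     # rule six
        # assign the adjacent frame's location to the mutated targets
        return [p if m else c for (c, p), m in zip(pairs, mutated)]
    return list(curr_rects)
```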
Although the optimization method for real-time extraction of video target regions provided by the present invention has been described in the above embodiments using the minimum bounding rectangle, a person skilled in the art will recognise that the above optimization method can also be realised with other minimum bounding regular figures, for example the minimum bounding circle.
Compared with the prior art, the method for optimizing real-time video object extraction provided by the present invention optimizes, according to the visual attention principle of the HVS, the regions of interest obtained with the background difference method; it can effectively remove the noise and repeated clustering in those target regions and, at the same time, avoid problems such as unreasonable segmentation and target loss.
Although the present invention has been described by means of preferred embodiments, the present invention is not limited to the embodiments described here and also includes various changes and variations made without departing from the present invention.

Claims (10)

1. A method for optimizing real-time video object extraction, the optimization method comprising:
step 1) traversing and searching a current image frame for target regions of interest;
step 2) obtaining the minimum bounding regular figure of each target region of interest;
step 3) determining target regions whose minimum bounding regular figure has an area smaller than a preset first threshold to be noise and deleting them, thereby obtaining the denoised target regions;
step 4) if a target region is found to be entirely missing from the current image frame, correcting the first threshold and re-executing step 3).
2. The optimization method according to claim 1, characterised in that step 4) further comprises:
step 41) comparing the current image frame with an adjacent image frame and analysing whether a target region is missing from the current image frame;
step 42) if a missing target region exists, identifying it and recording its location information and size in the adjacent image frame;
step 43) adjusting the first threshold using the location information and size obtained in step 42), and re-executing step 3) to obtain, at that position in the current image frame, a target region whose size meets the optimization criterion.
3. The optimization method according to claim 2, characterised in that step 41) further comprises: comparing the number of target regions in the current image frame with that in the adjacent image frame, and, if the number of target regions in the current image frame is smaller than the number of target regions in the adjacent image frame, determining that a target region is missing from the current image frame.
4. The optimization method according to claim 2, characterised in that step 43) further comprises:
step 431) reducing the first threshold, re-executing step 3), and obtaining the denoised target regions;
step 432) identifying the target region located, in the current image frame, at the location information obtained in step 42) as a newly added target region, comparing the area of the newly added target region with the area obtained in step 42), and further adjusting the first threshold accordingly.
5. The optimization method according to claim 4, characterised in that step 432) further comprises:
if the area of the newly added target region is greater than the area obtained in step 42), the threshold th1 reset in step 431) is too small and needs to be increased;
if the area of the newly added target region is smaller than the area obtained in step 42), or the area of the newly added target region is zero, the threshold th1 reset in step 431) is too large and needs to be reduced again.
6. The optimization method according to claim 1, characterised in that the optimization method further comprises:
step 5) judging whether a target region in the current image frame is partially lost and, if so, restoring the lost part, which specifically comprises:
if the ratio of the area of a minimum bounding regular figure in the current image frame to the area of the corresponding minimum bounding regular figure in the adjacent image frame is smaller than a preset third threshold, determining that the target region of interest corresponding to that minimum bounding regular figure has mutated;
comparing all minimum bounding regular figures in the current image frame and recording the number of comparisons and the number of mutations;
if the ratio of the number of mutations to the number of comparisons is not less than a preset fourth threshold, determining that a target region of interest in the current image frame is partially lost;
assigning, to the current image frame, the location information of the target region in the adjacent image frame that corresponds to the lost minimum bounding regular figure.
7. The optimization method according to claim 6, characterised in that the optimization method further comprises:
step 6) obtaining the location information of all current minimum bounding regular figures;
step 7) judging whether the current minimum bounding regular figure overlaps other minimum bounding regular figures and, if so, merging the corresponding minimum bounding regular figures;
step 8) judging whether an unreasonable segmentation exists between adjacent minimum bounding regular figures and, if so, re-clustering the corresponding minimum bounding regular figures.
8. The optimization method according to claim 7, characterised in that step 8) further comprises: determining, as an unreasonable segmentation, any adjacent minimum bounding regular figure whose distance to the target position within the current minimum bounding regular figure is smaller than a preset second threshold.
9. The optimization method according to claim 1, characterised in that step 1) further comprises pre-processing the current image frame.
10. A system for optimizing real-time video object extraction, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when running the program, executes the steps according to any one of claims 1 to 9.
CN201810413067.6A 2018-05-03 2018-05-03 Method and system for optimizing real-time video object extraction Pending CN110443097A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810413067.6A CN110443097A (en) Method and system for optimizing real-time video object extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810413067.6A CN110443097A (en) Method and system for optimizing real-time video object extraction

Publications (1)

Publication Number Publication Date
CN110443097A (en) 2019-11-12

Family

ID=68427971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810413067.6A Pending CN110443097A (en) Method and system for optimizing real-time video object extraction

Country Status (1)

Country Link
CN (1) CN110443097A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101783076A (en) * 2010-02-04 2010-07-21 西安理工大学 Method for quick vehicle type recognition under video monitoring mode
CN102289672A (en) * 2011-06-03 2011-12-21 天津大学 Infrared gait identification method adopting double-channel feature fusion
CN102509086A (en) * 2011-11-22 2012-06-20 西安理工大学 Pedestrian object detection method based on object posture projection and multi-features fusion
US20170083753A1 (en) * 2015-09-22 2017-03-23 ImageSleuth, Inc. Automated methods and systems for identifying and characterizing face tracks in video
US20170154247A1 (en) * 2015-11-30 2017-06-01 Sony Interactive Entertainment Inc. Information processing device, information processing method, light-emitting device regulating apparatus, and drive current regulating method
CN105868708A (en) * 2016-03-28 2016-08-17 锐捷网络股份有限公司 Image object identifying method and apparatus
CN106339690A (en) * 2016-08-30 2017-01-18 上海交通大学 Video object flow detecting method and system based on noise elimination and auxiliary determination line
CN107301388A (en) * 2017-06-16 2017-10-27 重庆交通大学 A kind of automatic vehicle identification method and device
CN107330465A (en) * 2017-06-30 2017-11-07 清华大学深圳研究生院 A kind of images steganalysis method and device
CN107480585A (en) * 2017-07-06 2017-12-15 西安电子科技大学 Object detection method based on DPM algorithms

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
吴思 et al., "Automatic extraction of foreground regions from dynamic-scene video sequences", Journal of Computer-Aided Design & Computer Graphics *
徐佩 et al., "An improved algorithm for extracting the minimum bounding rectangle of vehicles in video", Wangyou Shijie (网友世界) *
王琼 et al., "An optimized moving-vehicle detection method based on surveillance video", Computer Engineering and Design *
赵潇 et al., "Research on HVS-based target extraction methods for video surveillance", Application Research of Computers *

Similar Documents

Publication Publication Date Title
CN108985186B (en) Improved YOLOv 2-based method for detecting pedestrians in unmanned driving
CN109145708B (en) Pedestrian flow statistical method based on RGB and D information fusion
US10696938B2 (en) Method and system for automated microbial colony counting from streaked sample on plated media
CN112926410B (en) Target tracking method, device, storage medium and intelligent video system
US9075026B2 (en) Defect inspection device and defect inspection method
CN113592845A (en) Defect detection method and device for battery coating and storage medium
CN108810620A (en) Identify method, computer equipment and the storage medium of the material time point in video
KR100612858B1 (en) Method and apparatus for tracking human using robot
CN107644418B (en) Optic disk detection method and system based on convolutional neural networks
CN101782526B (en) Method and device for automatically restoring, measuring and classifying steel dimple images
CN114241548A (en) Small target detection algorithm based on improved YOLOv5
CN107230203A (en) Casting defect recognition methods based on human eye vision attention mechanism
CN111383244B (en) Target detection tracking method
CN101094413A (en) Real time movement detection method in use for video monitoring
CN106447701A (en) Methods and devices for image similarity determining, object detecting and object tracking
CN102346854A (en) Method and device for carrying out detection on foreground objects
CN113763424B (en) Real-time intelligent target detection method and system based on embedded platform
CN111161313A (en) Multi-target tracking method and device in video stream
CN111199556A (en) Indoor pedestrian detection and tracking method based on camera
CN113066064A (en) Cone beam CT image biological structure identification and three-dimensional reconstruction system based on artificial intelligence
CN110647836A (en) Robust single-target tracking method based on deep learning
CN110197494A (en) A kind of pantograph contact point real time detection algorithm based on monocular infrared image
CN116665095B (en) Method and system for detecting motion ship, storage medium and electronic equipment
CN109492647A (en) A kind of power grid robot barrier object recognition methods
Zhang Sr et al. A ship target tracking algorithm based on deep learning and multiple features

Legal Events

Code: Title
PB01: Publication
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication (application publication date: 2019-11-12)