CN103020992A - Video image significance detection method based on dynamic color association - Google Patents

Video image significance detection method based on dynamic color association

Info

Publication number
CN103020992A
Authority
CN
China
Prior art keywords
color
motion
video image
significance
detection method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012104506795A
Other languages
Chinese (zh)
Other versions
CN103020992B (en)
Inventor
宋宝
邹腾跃
唐小琦
王金
叶伯生
凌文锋
熊烁
王小钊
李明磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201210450679.5A priority Critical patent/CN103020992B/en
Publication of CN103020992A publication Critical patent/CN103020992A/en
Application granted granted Critical
Publication of CN103020992B publication Critical patent/CN103020992B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a video image saliency detection method based on motion-color association, comprising the following steps: obtaining a static saliency map of the video image; extracting the optical flow vector field of the scene; coarsely clustering the optical flow vector field and discarding the largest cluster; converting the video image from RGB color space to HSV color space; generating a color histogram from the frequency with which each color of the H component of the HSV color space appears in the input image; for each vector in the valid clusters of the optical flow vector field, projecting its norm into the corresponding bin of the color histogram to obtain the motion-scale variable of each color bin; standardizing these variables to obtain the motion saliency value of every color and projecting the values onto the original image to generate a motion saliency map; and summing the motion saliency map and the static saliency map to obtain the final saliency map. The method effectively brings motion features into the scope of saliency analysis, and on existing dynamic-video test sets it obtains results superior to those of conventional methods.

Description

Video image saliency detection method based on motion-color association
Technical field
The invention belongs to the technical field of video image processing, and specifically relates to a video image saliency detection method.
Background technology
Identifying important targets in complex scenes is a basic function of the human visual nervous system. For example, a traffic light catches a driver's eye, as does an aircraft against a blue sky or a beacon on the night sea. Relying on this function, we can concentrate attention on key locations to analyze them more effectively.
Saliency detection enables a computer system to imitate the attention mechanism of the human eye: through a corresponding computation, the important parts of a video image are highlighted, a process of "discovery". Saliency results allow scarce resources to be allocated preferentially. For example, when a large picture is shown on a small mobile-phone screen, its most important part can be displayed first; when computing resources are insufficient, salient regions can be identified and tracked first. The end product of saliency detection is a saliency map, a description of a probability distribution in which brighter parts have larger probability values, that is, higher pixel saliency. Saliency maps can be applied throughout computer vision, for example adaptive compression, image segmentation, image retrieval and object recognition, as well as in real-time scenarios such as traffic management, security monitoring and robot environment perception.
Itti and other Western scholars proposed a rapid scene-analysis model based on the visual attention mechanism in 1998, introducing the concept of saliency into machine vision for the first time. Since then, static saliency detection for still images has flourished. Static saliency arises from the combined action of image attributes such as color, edge, gradient and shape; it exhibits uniqueness, unpredictability and singularity, and its perception mechanism is closely tied to the optic nerve. Achanta et al. proposed the frequency-tuned salient-region analysis method in 2009, which uses color and luminance information to obtain center-surround contrast and derives the saliency mapping from a frequency-domain analysis. Cheng et al. proposed a salient-region detection method based on global contrast in 2011, which applies the color statistics of the input image in a histogram-contrast computation to obtain salient targets, and which can further be weighted by spatial distance to yield a region-contrast detection method.
Static saliency detection is comparatively mature at present, and the static saliency map of a video frame can be obtained by any of various mature detection methods. Chinese patent document 201010623832.0 discloses a target identification method based on salient features, which obtains saliency values from the geometric features of targets; Chinese patent document 201110335538.4 discloses a fast detection method for salient objects, which obtains saliency information through a wavelet transform and a center-surround histogram algorithm.
The above static saliency detection methods rely only on information such as color or contrast features. They handle static single images with clearly contrasting foreground and background well, but for continuous video with complex moving scenes, and in particular for video in which the moving foreground target is similar in color to the background, they often fail to produce correct results. Moreover, the human eye pays greater attention to moving objects, and analysis that considers only static features such as color often cannot produce objective and fair results on video. Analyzing the motion features of targets in video can therefore greatly improve the correctness of video saliency analysis. With this in mind, dynamic saliency detection methods for video have appeared.
Wixson et al. proposed a steady-flow detection method along the motion direction in 2000, but it assumes that targets move in straight lines and is hard to adapt to most application scenarios. Mahadevan et al. proposed a center-surround spatiotemporal saliency detection method in 2010, but its results depend strongly on the size of the detection window, and detection easily fails on larger foreground objects. Gopalakrishnan proposed a motion saliency detection method based on linear dynamic contours in 2012, but it can only perceive the approximate location of a target, cannot generate a complete contour, and has relatively poor accuracy.
Summary of the invention
The object of the present invention is to provide a video image saliency detection method based on motion-color association that can overcome the interference caused by camera shake, so that important moving objects in the video scene are highlighted effectively to produce a saliency map, providing a basis for further tracking, monitoring or video compression.
To solve the above technical problems, the invention provides a video image saliency detection method based on motion-color association, comprising the following steps:
S1: obtaining a static saliency map of the video image by a static saliency detection method;
S2: extracting the optical flow vector field of the scene from consecutive video frames;
S3: coarsely clustering the optical flow vector field by a clustering method and discarding the largest cluster;
S4: converting the video image from RGB color space to HSV color space;
S5: generating a color histogram from the frequency with which each color of the H component of the HSV color space appears in the input image;
S6: for each vector in the valid clusters of the optical flow vector field, projecting its norm into the corresponding bin of the color histogram according to the color of the pixel at which it is located, to obtain the motion-scale variable of each color bin;
S7: standardizing the motion-scale variables to obtain the motion saliency value of every color, and projecting these values onto the original image to generate a motion saliency map;
S8: adding the motion saliency map and the static saliency map with linear weighting to obtain the final saliency map.
In step S3, the optical flow vector field is coarsely clustered and the largest cluster is discarded; any existing mature clustering method can be used. Finding the largest cluster means finding the cluster containing the greatest number of vectors, which only requires counting the vectors in each cluster. Once found, its vectors are marked invalid, since this cluster usually corresponds to motion noise caused by camera shake; discarding this noise improves the accuracy of the subsequent computations. The vectors in the remaining clusters are valid vectors, and those clusters are the valid clusters.
In step S5, a color histogram is generated from the H component of the HSV color space, with the formula h(r_k) = n_k, where r_k is the k-th color bin and n_k is the number of pixels of color r_k in the image.
In step S6, for each vector in the valid clusters of the optical flow vector field, its norm is projected into the corresponding bin of the color histogram according to the color of the pixel at which it is located; that is, its modulus is added to the motion-scale variable of that bin:

m(r_k) = Σ_{p=r_k} √((mv_x)² + (mv_y)²)

where p is the color value of the pixel at which the vector (mv_x, mv_y) is located, and m(r_k) is the motion-scale variable of color bin r_k.
In step S7, the motion-scale variables are standardized to obtain the motion saliency value of every color, which is the result of standardizing the motion-scale variables of all color bins over the whole histogram:

MS(r_k) = (m_r - m_min) / (m_max - m_min)

where m_r is the motion-scale variable of the color bin to be standardized, m_min is the minimum of the motion-scale variables, and m_max is their maximum. The standardized value MS(r_k) serves as the motion saliency value of that color, and every pixel of that color in the original image takes this value. Projecting the motion saliency value of every color onto the original image generates the motion saliency map S_M.
In step S8, the motion saliency map and the static saliency map are added with linear weighting to obtain the final saliency map:

S_R = α·S_M + (1 - α)·S_S

where S_R is the final saliency map, S_M is the motion saliency map, S_S is the static saliency map, and α is the corresponding weighting coefficient. α controls the weights of the motion and static features in the final result; the larger α, the greater the proportion of the motion feature. The value of α can be chosen by a decision algorithm or set from experience and the application environment; 0.5 is usually preferred.
The method proposed by the invention for detecting visually salient regions in video images combines the static salient features and the motion features of the video scene to obtain the saliency detection result. In particular, it computes motion saliency values from the association between color and motion, quickly and effectively bringing motion features into the scope of saliency analysis. On existing dynamic-video test sets the method obtains results superior to those of classical methods, and it can further be applied in many machine-vision applications.
Description of drawings
Fig. 1 is the flow chart of the motion-color-association video saliency detection method described by the invention;
Fig. 2 is an example of obtaining the optical flow vector field from successive video frames;
Fig. 3 is an example of converting a video image from RGB color space to HSV color space;
Fig. 4 shows a histogram generated from the H component of the HSV color space, onto which the vector field is mapped to generate the color-motion association;
Fig. 5 illustrates the final saliency values of three successive video frames.
Embodiment
The specific implementation of the invention is described in further detail below with reference to the drawings and embodiments. The following embodiments are used to explain the invention, but not to limit its scope.
In step S1, a static saliency map S_S is obtained by a static saliency detection method; any existing mature method can be used. The present embodiment uses the salient-region detection method based on global contrast.
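As an illustration only, here is a minimal Python/OpenCV sketch of step S1 (Python is used for all sketches in this description). It stands in for the cited global-contrast method with a simplified hue-histogram contrast, where each hue bin's saliency is its pixel-weighted distance to all other bins; the function name, the file name and the reduction of color to the H component alone are assumptions for illustration, not the method of Cheng et al. itself.

```python
import cv2
import numpy as np

def static_saliency_hc(bgr, i_bins=30):
    """Hue-histogram contrast: a simplified stand-in for the
    global-contrast static saliency detector used in step S1."""
    hue = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 0]       # H in [0, 180)
    idx = (hue.astype(np.int32) * i_bins) // 180              # bin per pixel
    n = np.bincount(idx.ravel(), minlength=i_bins).astype(np.float64)
    centers = (np.arange(i_bins) + 0.5) * (180.0 / i_bins)
    d = np.abs(centers[:, None] - centers[None, :])           # bin distances
    d = np.minimum(d, 180.0 - d)                              # hue is circular
    sal = (d * n[None, :]).sum(axis=1)                        # contrast per bin
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
    return sal[idx]                                           # per-pixel map

frame = cv2.imread("frame_t.png")   # hypothetical file name
S_S = static_saliency_hc(frame)     # static saliency map in [0, 1]
```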
In step S2, the optical flow vector field of the scene is extracted from two consecutive video frames. Any existing dense optical flow extraction method can be used, such as the Lucas-Kanade method or the Horn-Schunck method. The present embodiment adopts the Lucas-Kanade method; the extracted optical flow field represents the displacement of each pixel between the two consecutive frames.
Fig. 2 shows the effect of step S2: Fig. 2(a) shows two consecutive video frames, and Fig. 2(b) the scene-motion optical flow vector field extracted by the optical flow computation.
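A sketch of step S2 under stated assumptions: the embodiment names Lucas-Kanade, but OpenCV's dense Farneback routine is substituted here because the later steps consume a dense per-pixel (mv_x, mv_y) field; the frame file names are hypothetical and the parameter values are common defaults, not taken from the patent.

```python
prev_gray = cv2.cvtColor(cv2.imread("frame_t.png"), cv2.COLOR_BGR2GRAY)
curr_gray = cv2.cvtColor(cv2.imread("frame_t1.png"), cv2.COLOR_BGR2GRAY)

# flow[y, x] = (mv_x, mv_y): displacement of each pixel between the frames
flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
```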
In step S3, the optical flow vector field is coarsely clustered and the largest cluster is discarded. Any existing mature clustering method can be used; the present embodiment adopts k-means. K-means uses Euclidean distance as the similarity measure and the error sum-of-squares criterion as the clustering criterion function, iterating to find the extremum of the criterion. Its input, the number of clusters k, yields k classes meeting the minimum-variance standard; the larger k, the finer the classification but the longer the computation. In the present embodiment this parameter is usually chosen between 5 and 8, which gives good results. Finding the largest cluster means finding the cluster containing the greatest number of vectors, which only requires counting the vectors in each cluster. Once found, its vectors are marked invalid, since this cluster usually corresponds to motion noise caused by camera shake; discarding this noise improves the accuracy of the subsequent computations. The vectors in the remaining clusters are valid vectors, and those clusters are the valid clusters.
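Continuing the sketch, step S3 with OpenCV's k-means, taking k = 6 from inside the 5 to 8 range suggested above and masking out the most populous cluster as camera-shake noise:

```python
vecs = flow.reshape(-1, 2).astype(np.float32)
k = 6  # within the 5-8 range recommended in the text

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, _ = cv2.kmeans(vecs, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
labels = labels.ravel()

largest = np.bincount(labels, minlength=k).argmax()  # most populous cluster
valid = (labels != largest).reshape(flow.shape[:2])  # True = valid vector
```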
In step S4, the image is converted from RGB color space to HSV color space, which removes the interference of factors such as illumination change with color and projects the three-dimensional RGB color components onto the one-dimensional H component of HSV. Let (r, g, b) be the red, green and blue coordinates of a color, max the maximum of r, g and b, and min the minimum. The conversion formulas are:

h = 0°, if max = min
h = (60° × (g - b)/(max - min) + 360°) mod 360°, if max = r
h = 60° × (b - r)/(max - min) + 120°, if max = g
h = 60° × (r - g)/(max - min) + 240°, if max = b

s = 0 if max = 0, otherwise s = (max - min)/max = 1 - min/max

v = max
Fig. 3 shows the effect of step S4: Fig. 3(a) is the original image in RGB color space, and Fig. 3(b) shows the image after conversion to HSV color space.
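In code, step S4 reduces to a single library call; OpenCV's converter implements the max/min formulas above. Note, as an implementation detail not stated in the patent, that OpenCV stores 8-bit H values in [0, 180), so the binning in step S5 must use that range:

```python
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # frame loaded in the S1 sketch
hue = hsv[:, :, 0]                            # H component; [0, 180) for uint8
```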
In step S5, a color histogram is generated from the frequency with which each color of the H component of the HSV color space appears in the input image, with the formula h(r_k) = n_k, where r_k is the k-th color bin and n_k is the number of pixels of color r_k in the image. In practice the colors of the H component are often divided into i bins in order to save space and improve tolerance, which raises computational efficiency: the larger i, the higher the accuracy, but the more storage consumed and the lower the tolerance. Fig. 4(b) shows the H-component color histogram for i = 30.
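A sketch of step S5 with i = 30 bins, matching the example of Fig. 4(b); h(r_k) = n_k is simply a per-bin pixel count:

```python
i_bins = 30                                            # i = 30 as in Fig. 4(b)
bin_idx = (hue.astype(np.int32) * i_bins) // 180       # color bin of each pixel
hist = np.bincount(bin_idx.ravel(), minlength=i_bins)  # h(r_k) = n_k
```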
In step S6, for each vector in the valid clusters of the optical flow vector field, its norm is projected into the corresponding bin of the color histogram according to the color of the pixel at which it is located, yielding the motion-scale variable of each color bin; that is, the modulus of the vector is added to the motion-scale variable of its bin:

m(r_k) = Σ_{p=r_k} √((mv_x)² + (mv_y)²)

where p is the color value of the pixel at which the vector (mv_x, mv_y) is located, and m(r_k) is the motion-scale variable of color bin r_k. Since the H component is divided into i bins in actual use, the vector norms are correspondingly added to the motion-scale variables m of those bins, so that colors in the same bin ultimately receive the same motion saliency value; this increases the system's tolerance by ignoring small color errors.
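Continuing the sketch for step S6, the norm of every valid flow vector is accumulated into the motion-scale variable of the bin holding the hue of its pixel:

```python
norms = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)  # |(mv_x, mv_y)| per pixel

m = np.zeros(i_bins)
# m(r_k): sum of the norms of valid vectors whose pixel falls in color bin r_k
np.add.at(m, bin_idx[valid], norms[valid])
```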
Fig. 4 shows the effect of steps S5 and S6: Fig. 4(b) is the color histogram generated from the H component of the HSV image in Fig. 4(a), with the color space divided into i = 30 bins. Fig. 4(d) shows the result of projecting the vectors of the vector field in Fig. 4(c) through the color histogram of Fig. 4(b). The red curve represents the accumulated vector moduli; the larger the value, the greater the amount of motion of that color's pixels in the video scene and the higher its motion saliency value.
In step S7, the motion-scale variables are standardized to obtain the motion saliency value of every color, which is the result of standardizing the motion-scale variables of all color bins over the whole histogram:

MS(r_k) = (m_r - m_min) / (m_max - m_min)

where m_r is the motion-scale variable of the color bin to be standardized, m_min is the minimum of the motion-scale variables, and m_max is their maximum. The standardized value MS(r_k) serves as the motion saliency value of that color, and every pixel of that color in the original image takes this value. Projecting the motion saliency value of every color onto the original image generates the motion saliency map S_M.
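A sketch of step S7: min-max standardization over the bins, then back-projection so that every pixel inherits the motion saliency value of its color bin (the small epsilon guards against a constant m and is an added safeguard, not part of the formula):

```python
MS = (m - m.min()) / (m.max() - m.min() + 1e-12)  # MS(r_k) in [0, 1]
S_M = MS[bin_idx]                                 # motion saliency map
```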
In step S8, the motion saliency map and the static saliency map are added with linear weighting to obtain the final saliency map:

S_R = α·S_M + (1 - α)·S_S

where S_R is the final saliency map, S_M is the motion saliency map, S_S is the static saliency map, and α is the corresponding weighting coefficient. α controls the weights of the motion and static features in the final result; the larger α, the greater the proportion of the motion feature. The value of α can be chosen by a decision algorithm or set from experience and the application environment; in the present embodiment it is usually chosen as 0.5.
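Finally, a sketch of step S8, fusing the two maps with the preferred weight α = 0.5; S_S is the static map from the step S1 sketch above:

```python
alpha = 0.5                             # empirical value preferred in the text
S_R = alpha * S_M + (1 - alpha) * S_S   # final saliency map
```

The result S_R is a per-pixel map in [0, 1] that can be thresholded or visualized directly.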
Fig. 5 shows the effect of step S8: the first row shows the original images and the second row the final saliency maps, presenting the final saliency detection results for three successive video frames.
The video image saliency detection method disclosed by the invention extracts the motion salient features of the image from the association between motion and color, and combines them with static salient features to generate the final saliency mapping result. The invention obtains results superior to those of classical methods on existing international benchmark test sets. It can automatically analyze the visually salient regions of an image, and the analysis results can be applied to important-target segmentation, object recognition, adaptive video compression, content-aware video retargeting, image retrieval, security monitoring, military surveillance and other applications.
The above embodiment is only used to explain the invention and does not limit it. Those of ordinary skill in the relevant technical field can make various changes and variations without departing from the spirit and scope of the invention; all equivalent technical schemes therefore also belong to the scope of the invention, whose patent protection shall be defined by the claims.

Claims (5)

1. A video image saliency detection method based on motion-color association, comprising the following steps:
S1: obtaining a static saliency map of the video image;
S2: extracting the optical flow vector field of the scene from consecutive frames of the video image;
S3: clustering the optical flow vector field by a clustering method and discarding the largest cluster;
S4: converting the video image from RGB color space to HSV color space;
S5: generating a color histogram from the frequency with which each color of the H component of the HSV color space appears in the video image;
S6: for each vector in the clusters of the optical flow vector field, projecting its norm into the corresponding bin of the color histogram according to the color of the pixel at which it is located, to obtain the motion-scale variable of each color bin;
S7: standardizing the motion-scale variables to obtain the motion saliency value of every color, and projecting these values onto the original image to generate a motion saliency map;
S8: adding the motion saliency map and the static saliency map with linear weighting to obtain the final saliency map, thereby realizing saliency detection of the video image.
2. The video image saliency detection method based on motion-color association according to claim 1, characterized in that, in step S8, the addition formula is:
S_R = α·S_M + (1 - α)·S_S
where S_R is the final saliency map, S_M is the motion saliency map, S_S is the static saliency map, and α is the weighting coefficient.
3. The video image saliency detection method based on motion-color association according to claim 1 or 2, characterized in that, in step S5, the color histogram is generated from the H component of the HSV color space by the formula:
h(r_k) = n_k
where r_k is the k-th color bin, n_k is the number of pixels of color r_k in the image, and k indexes the color bins.
4. The video image saliency detection method based on motion-color association according to any one of claims 1-3, characterized in that, in step S7, the standardization formula is:
MS(r_k) = (m_r - m_min) / (m_max - m_min)
where m_r is the motion-scale variable of the color bin to be standardized, m_min is the minimum of the motion-scale variables, m_max is the maximum of the motion-scale variables, and r_k is the k-th color bin.
5. The video image saliency detection method based on motion-color association according to any one of claims 1-4, characterized in that, in step S6, the formula for obtaining the motion-scale variable of each color bin is:
m(r_k) = Σ_{p=r_k} √((mv_x)² + (mv_y)²)
where p is the color value of the pixel at which the vector (mv_x, mv_y) is located, and m(r_k) is the motion-scale variable of color bin r_k.
CN201210450679.5A 2012-11-12 2012-11-12 Video image saliency detection method based on motion-color association Active CN103020992B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210450679.5A CN103020992B (en) 2012-11-12 2012-11-12 Video image saliency detection method based on motion-color association

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210450679.5A CN103020992B (en) 2012-11-12 2012-11-12 Video image saliency detection method based on motion-color association

Publications (2)

Publication Number Publication Date
CN103020992A true CN103020992A (en) 2013-04-03
CN103020992B CN103020992B (en) 2016-01-13

Family

ID=47969558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210450679.5A Active CN103020992B (en) 2012-11-12 2012-11-12 A kind of video image conspicuousness detection method based on motion color-associations

Country Status (1)

Country Link
CN (1) CN103020992B (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110251076B (en) * 2019-06-21 2021-10-22 安徽大学 Method and device for detecting significance based on contrast and fusing visual attention


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080101464A1 (en) * 2006-10-27 2008-05-01 Shawmin Lei Methods and Systems for Low-Complexity Data Compression
CN102156702A (en) * 2010-12-17 2011-08-17 南方报业传媒集团 Fast positioning method for video events from rough state to fine state

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHAOJIE CHEN et al.: "Tracking Pylorus in Ultrasonic Image Sequences With Edge-Based Optical Flow", IEEE Transactions on Medical Imaging *
蒋鹏 (JIANG Peng) et al.: "一种动态场景中的视觉注意区域检测方法" (A visual attention region detection method for dynamic scenes), 小型微型计算机系统 (Journal of Chinese Computer Systems) *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103324705B (en) * 2013-06-17 2016-05-18 中国科学院深圳先进技术研究院 Extensive vector field data processing method
CN103324705A (en) * 2013-06-17 2013-09-25 中国科学院深圳先进技术研究院 Large-scale vector field data processing method
CN103810707A (en) * 2014-01-28 2014-05-21 华东理工大学 Mobile visual focus based image vision salient detection method
CN103810707B (en) * 2014-01-28 2016-08-17 华东理工大学 A kind of image vision significance detection method based on moving-vision focus
CN104268508A (en) * 2014-09-15 2015-01-07 济南大学 Portable traffic light distinguishing method for color blindness and color amblyopia people
CN104794210A (en) * 2015-04-23 2015-07-22 山东工商学院 Image retrieval method combining visual saliency and phrases
CN105224914A (en) * 2015-09-02 2016-01-06 上海大学 A kind of based on obvious object detection method in the nothing constraint video of figure
CN105224914B (en) * 2015-09-02 2018-10-23 上海大学 It is a kind of based on figure without constraint video in obvious object detection method
CN105488812A (en) * 2015-11-24 2016-04-13 江南大学 Motion-feature-fused space-time significance detection method
CN105915562A (en) * 2016-06-29 2016-08-31 韦醒妃 Identify verification system based on recognition technology
CN106529419A (en) * 2016-10-20 2017-03-22 北京航空航天大学 Automatic detection method for significant stack type polymerization object in video
CN106529419B (en) * 2016-10-20 2019-07-26 北京航空航天大学 The object automatic testing method of saliency stacking-type polymerization
CN107578426A (en) * 2017-07-26 2018-01-12 浙江工业大学 A kind of real-time optical flow analysis tracking towards serious degraded video
CN107507225A (en) * 2017-09-05 2017-12-22 明见(厦门)技术有限公司 Moving target detecting method, device, medium and computing device
CN107507225B (en) * 2017-09-05 2020-10-27 明见(厦门)技术有限公司 Moving object detection method, device, medium and computing equipment
CN109241342A (en) * 2018-07-23 2019-01-18 中国科学院计算技术研究所 Video scene search method and system based on Depth cue
CN111028263A (en) * 2019-10-29 2020-04-17 福建师范大学 Moving object segmentation method and system based on optical flow color clustering
CN111028263B (en) * 2019-10-29 2023-05-05 福建师范大学 Moving object segmentation method and system based on optical flow color clustering
CN113591708A (en) * 2021-07-30 2021-11-02 金陵科技学院 Meteorological disaster monitoring method based on satellite-borne hyperspectral image
CN113591708B (en) * 2021-07-30 2023-06-23 金陵科技学院 Meteorological disaster monitoring method based on satellite-borne hyperspectral image

Also Published As

Publication number Publication date
CN103020992B (en) 2016-01-13

Similar Documents

Publication Publication Date Title
CN103020992B (en) A kind of video image conspicuousness detection method based on motion color-associations
CN103020985B (en) A kind of video image conspicuousness detection method based on field-quantity analysis
CN109740413B (en) Pedestrian re-identification method, device, computer equipment and computer storage medium
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
US10049293B2 (en) Pixel-level based micro-feature extraction
CN102332095B (en) Face motion tracking method, face motion tracking system and method for enhancing reality
CN110298297B (en) Flame identification method and device
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN109460764B (en) Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method
CN107273832B (en) License plate recognition method and system based on integral channel characteristics and convolutional neural network
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
KR20160143494A (en) Saliency information acquisition apparatus and saliency information acquisition method
CN103258332B (en) A kind of detection method of the moving target of resisting illumination variation
CN104966085A (en) Remote sensing image region-of-interest detection method based on multi-significant-feature fusion
CN102298781A (en) Motion shadow detection method based on color and gradient characteristics
CN106339657B (en) Crop straw burning monitoring method based on monitor video, device
CN104715244A (en) Multi-viewing-angle face detection method based on skin color segmentation and machine learning
CN103634680A (en) Smart television play control method and device
CN109725721B (en) Human eye positioning method and system for naked eye 3D display system
CN108537181A (en) A kind of gait recognition method based on the study of big spacing depth measure
CN104657724A (en) Method for detecting pedestrians in traffic videos
CN104036250A (en) Video pedestrian detecting and tracking method
CN109492575A (en) A kind of staircase safety monitoring method based on YOLOv3
CN115661720A (en) Target tracking and identifying method and system for shielded vehicle
CN112926552A (en) Remote sensing image vehicle target recognition model and method based on deep neural network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant