CN106203428A - Image saliency detection method based on blur estimation fusion

Image saliency detection method based on blur estimation fusion

Info

Publication number
CN106203428A
Authority
CN
China
Prior art keywords
image
vision
blur estimation
low level
significance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610526947.5A
Other languages
Chinese (zh)
Other versions
CN106203428B (en)
Inventor
陈震中
丁晓颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201610526947.5A priority Critical patent/CN106203428B/en
Publication of CN106203428A publication Critical patent/CN106203428A/en
Application granted granted Critical
Publication of CN106203428B publication Critical patent/CN106203428B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/255: Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides an image saliency detection method based on blur estimation fusion, comprising a low-level visual feature acquisition stage and a blur-feature application stage. In the low-level visual feature acquisition stage, the image to be detected is input, a classical bottom-up saliency detection algorithm is applied to it, and the resulting saliency feature map is taken as the low-level visual feature of the image to be detected. In the blur-feature application stage, multiple groups of training image patches are first input to train a sparse dictionary; blur estimation is then used to simulate and quantify the photographer's psychological state at shooting time, and the quantified result guides the fusion of the saliency information obtained under the two mechanisms, yielding the final image saliency detection result with improved detection accuracy. The saliency detection results obtained by the present invention conform better to human visual saliency patterns, while offering better robustness and higher detection accuracy.

Description

Image saliency detection method based on blur estimation fusion
Technical field
The present invention relates to the field of image saliency detection, and in particular to an image saliency detection method based on blur estimation.
Background Art
In recent years, with the development of digital technology, photographs have grown ever larger and their resolution ever higher, and the information they contain has become increasingly rich. Processing all of the information in an image in a timely manner is no small challenge for a computer image processing system. A selection mechanism is therefore needed to process the information in an image selectively and lighten the burden on the image processing system. Human visual saliency helps the human visual system pick out the regions of an image that carry important information while suppressing interference from the background. Applying a visual saliency model to computer image processing makes it possible to extract the information in an image more quickly and to improve processing accuracy. How to simulate the human visual observation process and obtain a more efficient and accurate image saliency detection method has therefore become a pressing problem in computer vision.
Current approaches to image saliency detection fall into two classes. The first is bottom-up saliency detection, which uses low-level visual features such as color, orientation, and contrast. Based directly on the statistics of the image data, it is fast and computationally cheap, but this traditional approach is prone to detection errors and is therefore rarely used alone. The second is top-down saliency detection, which uses high-level visual features, that is, prior knowledge, to assist the detection. It is accurate and adapts flexibly to different detection tasks, but its computational complexity is higher and it takes longer. The most active research direction in saliency detection today is to fuse the bottom-up and top-down approaches, exploiting both the low-level and the high-level visual features of the image and drawing on the advantages of both, to obtain more efficient and accurate detection results. However, neither the bottom-up nor the top-down side of current fusion methods exploits the photographer's psychological state at shooting time to assist saliency detection, even though the photographer's shooting psychology is significant for understanding an image. Addressing this deficiency of existing saliency detection methods, the present invention creatively proposes to quantify the photographer's intent at shooting time and to use it as prior knowledge to assist image saliency detection.
Summary of the invention
Addressing the deficiencies of traditional image saliency detection methods, the present invention proposes a technical scheme for an image saliency detection method based on blur estimation fusion.
The technical solution of the present invention provides an image saliency detection method based on blur estimation fusion, comprising a low-level visual feature acquisition stage and a blur-feature application stage.
The low-level visual feature acquisition stage comprises the following steps:
Step 1.1: input the test image to be detected;
Step 1.2: according to the test image input in step 1.1, use a bottom-up image saliency detection algorithm to extract the low-level visual features of the image and generate the low-level visual saliency feature map of the image to be detected, denoted S_B.
The blur-feature application stage comprises the following steps:
Step 2.1: input multiple groups of training image patches, each group containing a focused image patch P_f and a defocused image patch P_d;
Step 2.2: vectorize each input training group Y = {y_1, ..., y_n}, where y_1, ..., y_n are the individual image patches in the group and n is the number of patches in the group;
then train a sparse dictionary D such that the input data can be expressed by the following formula,
min_{x_i} || y_i - D x_i ||_2^2   s.t.   || x_i ||_0 <= k
where x_i is the vector of weights over the dictionary atoms used to jointly reconstruct image patch y_i, and k bounds the sparsity of x_i;
Step 2.3: using the sparse dictionary D obtained in step 2.2, process the image data to be detected input in step 1.1 to obtain the blur estimation map S_D;
Step 2.4: plot the gray-level histogram H_D of the blur estimation map S_D obtained in step 2.3, and from the histogram determine the maximum I_max and minimum I_min of the blur estimation map;
Step 2.5: use the maximum I_max and minimum I_min of the blur estimation map S_D to compute the Michelson contrast C of the image by the formula
C = (I_max - I_min) / (I_max + I_min)
Step 2.6: map the Michelson contrast C computed in step 2.5 into a preset range using the formula
λ = 1 / (1 + e^{-(a + bC)})
where λ is the photographer's shooting-intent parameter, and parameters a and b are the values corresponding to the preset range;
Step 2.7: from the blur estimation map S_D obtained in step 2.3, locate the focal regions of the image to be detected; take the local maxima of the focal regions as visual fixation points F_i; apply Gaussian smoothing around each fixation point to obtain the visual density map D_i based on fixation point F_i; and superimpose the density maps D_i of all fixation points F_i to obtain the saliency feature map based on blur estimation fusion, recorded as the high-level visual saliency feature map S_T;
Step 2.8: use the photographer's shooting-intent parameter λ obtained in step 2.6 as the fusion weight to guide the fusion of the saliency feature maps from the two mechanisms, with the fusion formula
S = (1 - λ) S_B + λ S_T
where S is the resulting image saliency detection result, S_B is the low-level visual saliency feature map, and S_T is the high-level visual saliency feature map.
Furthermore, in step 2.6 the preset range is [0.2, 0.8].
Furthermore, the value of parameter a is -1.38 and the value of parameter b is 2.76.
Compared with the prior art, the present invention has the following advantages:
1. The present invention fully considers the influence of the photographer's mental activity at shooting time on the photographic result; introducing the psychology of photography to assist image saliency detection conforms better to the perception of the human visual system.
2. The present invention novelly introduces the photographer's shooting-intent parameter and uses it to guide the fusion of different saliency information, giving the method a sounder theoretical basis and more accurate results.
3. The saliency detection method proposed by the present invention markedly improves image saliency detection accuracy, has strong robustness, and has good potential for wider application.
Detailed description of the invention
The method proposed by the present invention fuses bottom-up and top-down saliency detection. It proposes a framework for simulating and quantifying the photographer's psychology at shooting time, uses the simulated and quantified result to guide the fusion of the saliency information obtained under the two mechanisms, and performs detection using both the low-level visual features and the blur features of the image.
The present invention first computes the low-level visual feature map of the input image to be detected. It then simulates the photographer's psychological state at shooting time by performing blur estimation on the image, computes the viewer's visual fixation points, and takes the corresponding visual density map as the high-level visual feature map of the image. Next, the blur estimation result is used to quantify the photographer's shooting intent, emphasizing the objects the photographer intended (that is, the objects a viewer is most likely to attend to), and the quantified result guides the fusion of the high-level and low-level visual feature maps to produce the final image saliency detection result. Because the present invention simulates the photographer's psychology at shooting time, it captures the intrinsic relation between shooting intent and viewer perception, and the resulting image saliency detection results conform better to human visual saliency patterns. At the same time, by making full use of both the low-level and the high-level visual features of the image, it achieves better robustness and higher detection accuracy.
The technical solution of the present invention can be run automatically as computer software. The technical solution is described in detail below with reference to an embodiment.
The embodiment includes a low-level visual feature acquisition stage and a blur-feature application stage.
The low-level visual feature acquisition stage comprises the following steps:
Step 1.1: input the test image to be detected;
Step 1.2: according to the test image input in step 1.1, use a classical bottom-up image saliency detection algorithm to extract the low-level visual features of the image and generate the low-level visual saliency feature map of the image to be detected, denoted S_B.
Through the low-level visual feature acquisition stage above, every test image to be detected in this embodiment is processed to obtain its low-level visual saliency feature map, which is used in the subsequent fusion of the saliency features from the two mechanisms.
Classical bottom-up image saliency detection algorithms can be found in the following references; the present invention does not repeat their details:
[1] X. Hou and L. Zhang, “Saliency detection: a spectral residual approach,” in Proc. CVPR, 2007.
[2] R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk, “Frequency-tuned salient region detection,” in Proc. CVPR, 2009.
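For concreteness, a minimal sketch of the spectral residual method of reference [1] follows, assuming a grayscale floating-point input and the NumPy/OpenCV libraries; the function name and the 64-pixel working size are illustrative choices, not prescribed by the patent:

import cv2
import numpy as np

def spectral_residual_saliency(gray, size=64):
    # Work at a small fixed scale, as in Hou & Zhang [1].
    small = cv2.resize(gray.astype(np.float32), (size, size))
    f = np.fft.fft2(small)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Spectral residual = log amplitude minus its local average.
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal.astype(np.float32), (9, 9), 2.5)
    sal = cv2.resize(sal, (gray.shape[1], gray.shape[0]))
    return sal / (sal.max() + 1e-8)   # normalized low-level map S_B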
The blur-feature application stage comprises the following steps:
Step 2.1: since a photographer, when shooting an image, often emphasizes the intended subject by adjusting focus, the more sharply focused an object is, the more likely it is to be the content the photographer wants to emphasize. This method therefore uses blur estimation on the image to assess which content the photographer wants to emphasize. Multiple groups of training image patches are input, each group containing a focused image patch P_f and a defocused image patch P_d; the embodiment trains on patches of size 8 × 8.
The training patches are obtained by segmenting several thousand randomly captured images and applying varying degrees of Gaussian blur to the patches. In a specific implementation, the focused and defocused patches can be pre-selected by those skilled in the art.
Step 2.2: vectorize each input training group Y = {y_1, ..., y_n}, where y_1, ..., y_n are the individual image patches in the group and n is the number of patches in the group; each training group includes both focused and defocused patches.
Then train a sparse dictionary D such that the input data can be expressed by the following formula.
min_{x_i} || y_i - D x_i ||_2^2   s.t.   || x_i ||_0 <= k
Here x_i is the vector of weights over the dictionary atoms used to jointly reconstruct image patch y_i, and k bounds the sparsity of x_i. The formula drives the fit toward the real image data, making the blur estimation result more accurate.
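As a sketch of this step, the dictionary can be learned with an off-the-shelf sparse coder; the snippet below uses scikit-learn's MiniBatchDictionaryLearning with orthogonal matching pursuit, and the atom count (256) and sparsity bound k = 4 are assumed values, not taken from the patent:

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def train_blur_dictionary(patches, n_atoms=256, k=4):
    # patches: array of shape (n_patches, 64), one vectorized 8x8 patch per
    # row, mixing focused (P_f) and defocused (P_d) examples.
    learner = MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm='omp',        # transform() codes with OMP,
        transform_n_nonzero_coefs=k,      # using at most k atoms per patch
    )
    learner.fit(patches)                  # learns D, one atom per row
    return learner.components_            # D with shape (n_atoms, 64)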
Step 2.3: using the trained focus/defocus sparse dictionary D, process the image data to be detected input in step 1.1 to obtain the blur estimation map S_D.
In the blur estimation map S_D, the gray value of each pixel reflects the number of dictionary atoms used to represent that pixel. The more atoms are needed, the sharper the object at that pixel; the fewer atoms, the blurrier the object.
The computation of the blur estimation map can be found in the following reference; the present invention does not repeat its details:
[3] J. Shi, L. Xu, and J. Jia, “Just noticeable defocus blur detection and estimation,” in Proc. CVPR, 2015.
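A greatly simplified stand-in for the estimator of [3], consistent with the atom-count interpretation above, is sketched below; it codes each 8 × 8 block against D with a residual tolerance, records how many atoms were used, and rescales the counts to gray levels so that the embodiment's threshold of 35 (step 2.7) applies. The tolerance value is an assumption:

import numpy as np
from sklearn.linear_model import orthogonal_mp

def blur_estimation_map(gray, D, patch=8, tol=0.1):
    # D has shape (n_atoms, 64); orthogonal_mp expects atoms as columns.
    h, w = gray.shape
    s_d = np.zeros((h, w), dtype=np.float32)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            y = gray[i:i+patch, j:j+patch].reshape(-1).astype(np.float64)
            y -= y.mean()                        # remove DC before coding
            x = orthogonal_mp(D.T, y, tol=tol)   # stop when residual is small
            s_d[i:i+patch, j:j+patch] = np.count_nonzero(x)
    return s_d / (s_d.max() + 1e-8) * 255.0      # atom counts -> gray levels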
Step 2.4: plot the gray-level histogram H_D of the blur estimation map S_D, and from the histogram determine the maximum I_max and minimum I_min of the blur estimation map.
In the gray-level histogram H_D, smaller gray values indicate blurrier content and larger gray values indicate sharper content. A single object usually consists of a group of pixels with similar degrees of blur, while different objects, lying at different distances from the lens, usually have different degrees of blur. The histogram therefore shows distinct crests and troughs: a crest represents a set of pixels with similar blur and hence corresponds to one class of object, while a trough represents the blur difference between objects and can be used to separate them. In this method, the peak of the first crest nearest the histogram origin is taken to represent the average blur of background pixels and is recorded as I_min, while the peak of the crest farthest from the origin is taken to represent the average blur of the most focused object in the image and is recorded as I_max.
Step 2.5: use the maximum I_max and minimum I_min of the blur estimation map S_D to compute the Michelson contrast C (a measure of visibility) of the image as follows:
C = (I_max - I_min) / (I_max + I_min)
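A sketch of steps 2.4 and 2.5, using SciPy's peak finder on the histogram of S_D; the bin count and the handling of a degenerate single-peak histogram are assumptions of this sketch:

import numpy as np
from scipy.signal import find_peaks

def michelson_contrast(s_d, bins=256):
    hist, edges = np.histogram(s_d.ravel(), bins=bins, range=(0, 255))
    centers = (edges[:-1] + edges[1:]) / 2.0
    peaks, _ = find_peaks(hist)          # crest locations of H_D
    if len(peaks) < 2:                   # degenerate histogram: no contrast
        return 0.0
    i_min = centers[peaks[0]]            # first crest: background blur level
    i_max = centers[peaks[-1]]           # last crest: most focused object
    return (i_max - i_min) / (i_max + i_min + 1e-8)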
Step 2.6: in this embodiment, the Michelson contrast C computed in step 2.5 is mapped into the range [0.2, 0.8] using the following formula. In a specific implementation, the mapping range can be adjusted to the characteristics of the image data and preset by those skilled in the art, but it should be kept within [0, 1].
λ = 1 / (1 + e^{-(a + bC)})
The resulting value λ is defined as the photographer's shooting-intent parameter and describes how strongly the photographic intent is expressed in the image. In this embodiment, in order to keep the mapped value within [0.2, 0.8], parameter a is set to -1.38 and parameter b to 2.76; in a specific implementation, those skilled in the art can preset the values of a and b.
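The mapping itself is a one-liner; with the embodiment's values a = -1.38 and b = 2.76, C = 0 gives λ ≈ 0.2 and C = 1 gives λ ≈ 0.8, matching the preset range:

import numpy as np

def shooting_intent(c, a=-1.38, b=2.76):
    # Logistic mapping of step 2.6: contrast C -> intent parameter lambda.
    return 1.0 / (1.0 + np.exp(-(a + b * c)))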
Step 2.7: from the blur estimation map S_D computed in step 2.3, the focal regions of the image to be detected are obtained; in a specific implementation a threshold can be set, and in this embodiment the threshold is 35, so regions with gray value greater than 35 are identified as focal regions of the image to be detected. The local maxima of the focal regions in S_D mark the most sharply focused spot of each region (the visual fixation point most likely to draw the viewer's visual system) and are recorded as F_i. Gaussian smoothing is applied around each fixation point F_i to obtain its visual density map, recorded as D_i, and the density maps D_i of all fixation points F_i are superimposed to obtain the saliency feature map based on blur estimation fusion, recorded as S_T.
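A sketch of step 2.7 follows. Because Gaussian filtering is linear, smoothing one impulse image containing all fixation points equals the superposition of the per-point density maps D_i; the neighborhood size, the pre-smoothing, and the scale sigma are assumed values:

import numpy as np
from scipy.ndimage import maximum_filter, gaussian_filter

def high_level_map(s_d, thresh=35, sigma=15.0):
    focal = s_d > thresh                             # focal regions of S_D
    # Light pre-smoothing breaks flat plateaus before maximum detection.
    s_smooth = gaussian_filter(s_d, 2.0)
    local_max = (s_smooth == maximum_filter(s_smooth, size=9)) & focal
    impulses = np.zeros_like(s_d, dtype=np.float32)
    impulses[local_max] = 1.0                        # fixation points F_i
    s_t = gaussian_filter(impulses, sigma=sigma)     # superposed maps D_i
    return s_t / (s_t.max() + 1e-8)                  # high-level map S_T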
Step 2.8: the photographer's shooting-intent parameter λ obtained in step 2.6 is used as the fusion weight to guide the fusion of the saliency feature maps from the two mechanisms (low-level visual features and blur features), with the fusion formula:
S = (1 - λ) S_B + λ S_T
where S is the resulting image saliency detection result, S_B is the low-level visual saliency feature map, S_T is the high-level visual saliency feature map, and λ is the photographer's shooting-intent parameter.
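The fusion step, together with how the sketches above would chain into an end-to-end run, is shown below; all function names are the illustrative ones introduced earlier, and both maps are assumed normalized to [0, 1]:

def fuse_saliency(s_b, s_t, lam):
    # Step 2.8: weighted fusion of the low- and high-level saliency maps.
    return (1.0 - lam) * s_b + lam * s_t

# Example pipeline on a grayscale image `gray` and trained dictionary `D`:
# s_b = spectral_residual_saliency(gray)
# s_d = blur_estimation_map(gray, D)
# lam = shooting_intent(michelson_contrast(s_d))
# s   = fuse_saliency(s_b, high_level_map(s_d), lam)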
The specific embodiment described herein merely illustrates the spirit of the present invention. Those skilled in the art to which the present invention belongs may make various modifications or additions to the described embodiment, or substitute it in a similar manner, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.

Claims (3)

1. An image saliency detection method based on blur estimation fusion, characterized by comprising a low-level visual feature acquisition stage and a blur-feature application stage;
the low-level visual feature acquisition stage comprising the following steps:
step 1.1: inputting the test image to be detected;
step 1.2: according to the test image input in step 1.1, using a bottom-up image saliency detection algorithm to extract the low-level visual features of the image and generate the low-level visual saliency feature map of the image to be detected, denoted S_B;
the blur-feature application stage comprising the following steps:
step 2.1: inputting multiple groups of training image patches, each group containing a focused image patch P_f and a defocused image patch P_d;
step 2.2: vectorizing each input training group Y = {y_1, ..., y_n}, where y_1, ..., y_n are the individual image patches in the group and n is the number of patches in the group;
and training a sparse dictionary D such that the input data can be expressed by the following formula,
min_{x_i} || y_i - D x_i ||_2^2   s.t.   || x_i ||_0 <= k
where x_i is the vector of weights over the dictionary atoms used to jointly reconstruct image patch y_i, and k bounds the sparsity of x_i;
step 2.3: using the sparse dictionary D obtained in step 2.2, processing the image data to be detected input in step 1.1 to obtain the blur estimation map S_D;
step 2.4: plotting the gray-level histogram H_D of the blur estimation map S_D obtained in step 2.3, and from the histogram determining the maximum I_max and minimum I_min of the blur estimation map;
step 2.5: using the maximum I_max and minimum I_min of the blur estimation map S_D to compute the Michelson contrast C of the image by the formula
C = (I_max - I_min) / (I_max + I_min)
step 2.6: mapping the Michelson contrast C computed in step 2.5 into a preset range using the formula
λ = 1 / (1 + e^{-(a + bC)})
where λ is the photographer's shooting-intent parameter, and parameters a and b are the values corresponding to the preset range;
step 2.7: from the blur estimation map S_D obtained in step 2.3, locating the focal regions of the image to be detected; taking the local maxima of the focal regions as visual fixation points F_i; applying Gaussian smoothing around each fixation point to obtain the visual density map D_i based on fixation point F_i; and superimposing the density maps D_i of all fixation points F_i to obtain the saliency feature map based on blur estimation fusion, recorded as the high-level visual saliency feature map S_T;
step 2.8: using the photographer's shooting-intent parameter λ obtained in step 2.6 as the fusion weight to guide the fusion of the saliency feature maps from the two mechanisms, with the fusion formula
S = (1 - λ) S_B + λ S_T
where S is the resulting image saliency detection result, S_B is the low-level visual saliency feature map, and S_T is the high-level visual saliency feature map.
2. The image saliency detection method based on blur estimation fusion according to claim 1, characterized in that: in step 2.6 the preset range is [0.2, 0.8].
3. The image saliency detection method based on blur estimation fusion according to claim 2, characterized in that: the value of parameter a is -1.38 and the value of parameter b is 2.76.
CN201610526947.5A 2016-07-05 2016-07-05 Image saliency detection method based on blur estimation fusion Active CN106203428B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610526947.5A CN106203428B (en) 2016-07-05 2016-07-05 Image saliency detection method based on blur estimation fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610526947.5A CN106203428B (en) 2016-07-05 2016-07-05 Image saliency detection method based on blur estimation fusion

Publications (2)

Publication Number Publication Date
CN106203428A true CN106203428A (en) 2016-12-07
CN106203428B CN106203428B (en) 2019-04-26

Family

ID=57466428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610526947.5A Active CN106203428B (en) 2016-07-05 2016-07-05 Image saliency detection method based on blur estimation fusion

Country Status (1)

Country Link
CN (1) CN106203428B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805850A (en) * 2018-06-05 2018-11-13 天津师范大学 A frame image fusion method based on atom merging traps
CN110099207A (en) * 2018-01-31 2019-08-06 成都极米科技股份有限公司 An effective image calculation method for overcoming camera instability
CN113269808A (en) * 2021-04-30 2021-08-17 武汉大学 Video small target tracking method and device
CN114155208A (en) * 2021-11-15 2022-03-08 中国科学院深圳先进技术研究院 Atrial fibrillation assessment method and device based on deep learning
CN115965844A (en) * 2023-01-04 2023-04-14 哈尔滨工业大学 Multi-focus image fusion method based on visual saliency priori knowledge

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101916379A (en) * 2010-09-03 2010-12-15 华中科技大学 Target search and recognition method based on object accumulation visual attention mechanism
CN102063623A (en) * 2010-12-28 2011-05-18 中南大学 Method for extracting image region of interest by combining bottom-up and top-down ways
CN102693426A (en) * 2012-05-21 2012-09-26 清华大学深圳研究生院 Method for detecting image salient regions
US20120328161A1 (en) * 2011-06-22 2012-12-27 Palenychka Roman Method and multi-scale attention system for spatiotemporal change determination and object detection
CN103996198A (en) * 2014-06-04 2014-08-20 天津工业大学 Method for detecting region of interest in complicated natural environment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101916379A (en) * 2010-09-03 2010-12-15 华中科技大学 Target search and recognition method based on object accumulation visual attention mechanism
CN102063623A (en) * 2010-12-28 2011-05-18 中南大学 Method for extracting image region of interest by combining bottom-up and top-down ways
US20120328161A1 (en) * 2011-06-22 2012-12-27 Palenychka Roman Method and multi-scale attention system for spatiotemporal change determination and object detection
CN102693426A (en) * 2012-05-21 2012-09-26 清华大学深圳研究生院 Method for detecting image salient regions
CN103996198A (en) * 2014-06-04 2014-08-20 天津工业大学 Method for detecting region of interest in complicated natural environment

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110099207A (en) * 2018-01-31 2019-08-06 成都极米科技股份有限公司 An effective image calculation method for overcoming camera instability
CN110099207B (en) * 2018-01-31 2020-12-01 成都极米科技股份有限公司 Effective image calculation method for overcoming camera instability
CN108805850A (en) * 2018-06-05 2018-11-13 天津师范大学 A frame image fusion method based on atom merging traps
CN113269808A (en) * 2021-04-30 2021-08-17 武汉大学 Video small target tracking method and device
CN114155208A (en) * 2021-11-15 2022-03-08 中国科学院深圳先进技术研究院 Atrial fibrillation assessment method and device based on deep learning
CN115965844A (en) * 2023-01-04 2023-04-14 哈尔滨工业大学 Multi-focus image fusion method based on visual saliency priori knowledge
CN115965844B (en) * 2023-01-04 2023-08-18 哈尔滨工业大学 Multi-focus image fusion method based on visual saliency priori knowledge

Also Published As

Publication number Publication date
CN106203428B (en) 2019-04-26

Similar Documents

Publication Publication Date Title
CN106203428A Image saliency detection method based on blur estimation fusion
CN112184705B (en) Human body acupuncture point identification, positioning and application system based on computer vision technology
CN110929566B (en) Human face living body detection method based on visible light and near infrared binocular camera
US10186041B2 (en) Apparatus and method for analyzing golf motion
CN108647663B (en) Human body posture estimation method based on deep learning and multi-level graph structure model
Gibelli et al. The identification of living persons on images: A literature review
CN103914699A (en) Automatic lip gloss image enhancement method based on color space
WO2021052208A1 (en) Auxiliary photographing device for movement disorder disease analysis, control method and apparatus
CN110991266A (en) Binocular face living body detection method and device
CN110827312B (en) Learning method based on cooperative visual attention neural network
CN110688929A (en) Human skeleton joint point positioning method and device
CN110298279A (en) A kind of limb rehabilitation training householder method and system, medium, equipment
CN104182970A (en) Souvenir photo portrait position recommendation method based on photography composition rule
CN109274883A (en) Posture antidote, device, terminal and storage medium
CN112568898A (en) Method, device and equipment for automatically evaluating injury risk and correcting motion of human body motion based on visual image
CN109977815A (en) Image quality evaluating method and device, electronic equipment, storage medium
CN105488780A (en) Monocular vision ranging tracking device used for industrial production line, and tracking method thereof
Guo et al. PhyCoVIS: A visual analytic tool of physical coordination for cheer and dance training
CN112070181B (en) Image stream-based cooperative detection method and device and storage medium
CN114463663A (en) Method and device for calculating height of person, electronic equipment and storage medium
CN110070036B (en) Method and device for assisting exercise motion training and electronic equipment
CN106919924A (en) A kind of mood analysis system based on the identification of people face
CN113643363A (en) Pedestrian positioning and trajectory tracking method based on video image
CN106874689A (en) A kind of telecommunication network diagnosis aid system
CN111353367A (en) Face attendance checking method, device, equipment and storage medium based on thermal imaging

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant